Deep consequences: Why syntax (as we know it) isn't a thing, and other (shocking?) conclusions from modelling language with neural nets.

If you have a question about this talk, please contact Tamara Polajnar.

With the development of ‘deeper’ models of language processing, we can start to infer, in a more empirically sound way, the true principles, factors or structures that underlie language. This is because, unlike many other approaches in NLP, deep language models (loosely) reflect the situation in which humans actually learn language. Neural language models learn the meaning of words and phrases concurrently with how best to group and combine those meanings, and they are trained to use this knowledge to do something that human language users do easily. Such models beat established alternatives at various tasks that humans find easy but machines traditionally find hard. In this talk, I present the results of recent experiments using deep neural nets to model language, including the latest results from the paper “Learning to Understand Phrases by Embedding the Dictionary”, in which we apply a recurrent net with long short-term memory (LSTM) to a general-knowledge question-answering task. I conclude by discussing the potential implications of all of this for both language science and engineering.
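For readers unfamiliar with the dictionary-embedding setup, the following is a minimal sketch of the core idea: an LSTM reads a dictionary definition and is trained so that its output lands near the pretrained embedding of the word being defined; answering a query then amounts to nearest-neighbour search in that embedding space. This is written in PyTorch (which the talk does not specify), and all names, sizes and the toy data are illustrative assumptions, not the authors' implementation.

    # Sketch (not the authors' code) of training an LSTM to map a
    # dictionary definition onto the embedding of the defined word.
    import torch
    import torch.nn as nn

    VOCAB, DIM = 1000, 64  # hypothetical vocabulary size / embedding width

    class DefinitionEncoder(nn.Module):
        def __init__(self, vocab=VOCAB, dim=DIM):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)       # input-word embeddings
            self.lstm = nn.LSTM(dim, dim, batch_first=True)
            self.target = nn.Embedding(vocab, dim)      # stand-in for pretrained head-word vectors
            self.target.weight.requires_grad = False    # held fixed during training

        def forward(self, definition_ids):
            # Encode the definition; the final hidden state is the phrase embedding.
            _, (h, _) = self.lstm(self.embed(definition_ids))
            return h[-1]

    model = DefinitionEncoder()
    opt = torch.optim.Adam(p for p in model.parameters() if p.requires_grad)

    # Toy batch: token ids of two "definitions" plus ids of the defined words.
    defs = torch.randint(0, VOCAB, (2, 7))
    heads = torch.randint(0, VOCAB, (2,))

    for _ in range(3):
        opt.zero_grad()
        pred = model(defs)
        # Cosine loss pulls each definition embedding toward its head word's vector.
        loss = (1 - torch.nn.functional.cosine_similarity(
            pred, model.target(heads))).mean()
        loss.backward()
        opt.step()

At test time, a query such as a general-knowledge question or crossword clue is encoded the same way, and the answer is the vocabulary word whose (fixed) embedding is closest to the encoded query.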

This talk is part of the NLIP Seminar Series.
