Grounded language learning in simulated worlds
If you have a question about this talk, please contact Anita Verő.

Developing systems that can execute symbolic, language-like instructions in the physical world is a long-standing challenge for Artificial Intelligence. Previous attempts to replicate human-like grounded language understanding involved hard-coding linguistic and physical principles, which is notoriously laborious and difficult to scale. Here we show that a simple neural-network-based agent without any hard-coded knowledge can exploit general-purpose learning algorithms to infer the meaning of sequential symbolic instructions as they pertain to a simulated 3D world. Beginning with no prior knowledge, the agent learns the meaning of concrete nouns, adjectives, more abstract relational predicates, and longer, order-dependent sequences of symbols. The agent naturally generalises predicates to unfamiliar objects and can interpret word combinations (phrases) that it has never seen before. Moreover, while its initial learning is slow, the speed at which it acquires new words accelerates as a function of how much it already knows. These observations suggest that the approach may ultimately scale to a wider range of natural language, which may bring us towards machines capable of learning language via interaction with human users in the real world.

The techniques applied in this work will be covered in the course Deep Learning for NLP, taught next term in the CL: https://www.cl.cam.ac.uk/teaching/1718/R228/.

Bio: Felix is a Research Scientist at DeepMind. He did his PhD at the University of Cambridge with Anna Korhonen, working on unsupervised language and representation learning with neural nets. As well as Anna, he collaborated with (and learned a lot from) Yoshua Bengio, Kyunghyun Cho and Jason Weston. As well as developing computational models that can understand language, he is interested in using models to better understand how people understand language, and is currently doing both at DeepMind.
This talk is part of the NLIP Seminar Series.