
Learning, Representing, and Understanding Language


If you have a question about this talk, please contact Andrew Caines.

Language is one of the greatest puzzles of both human and artificial intelligence (AI). Human children learn and understand their language effortlessly, yet we do not fully understand how they do so. Moreover, although access to more data and computation has driven recent advances in AI systems, these systems still fall far short of human performance on many language tasks. In my research, I address two broad questions: how do humans learn, represent, and understand language? And how can this inform AI?

In the first part of my talk, I show how computational modeling can help us understand the mechanisms underlying child word learning. I introduce an unsupervised model that learns word meanings using general cognitive mechanisms; the model processes data that approximates child input and assumes no built-in linguistic knowledge. Next, I explain how the cognitive science of language can help us examine current AI models and develop improved ones. In particular, I focus on how investigating human semantic processing helps us model semantic representations more accurately. Finally, I explain how theory-of-mind experiments can be used to probe question-answering models' capacity to reason about beliefs.

This talk is part of the NLIP Seminar Series.



© 2006-2024, University of Cambridge.