Grammar, reasoning, learning: Three short stories on comparative & rational analysis of language model capabilities
If you have a question about this talk, please contact Shun Shao.

Abstract: There has been substantial debate about the capabilities of language models—which aspects of language they can acquire, whether they can be said to ‘reason’, and whether they can truly ‘learn’ in context. In this talk, I will suggest that approaches from cognitive science can provide useful tools for addressing these questions. Specifically, I will focus on comparative methods (comparing capabilities across different systems) and rational analysis (analyzing behavior as a rational adaptation to an environment). I will illustrate different aspects of these ideas through three examples from our recent work:

1) comparing processing of recursive syntactic structures in language models and humans,
2) evaluating the way that both language models and humans entangle content in their responses to logical reasoning problems, and
3) understanding how in-context learning emerges from properties of training distributions.

I will outline how these disparate phenomena can be understood using these cognitive methods, and the implications for evaluating and understanding language model behaviors.

Bio: Andrew Lampinen’s research bridges cognitive science and artificial intelligence, often with a focus on how the complex behaviors and representations of models, agents, or humans emerge from their learning experiences or data. His work covers topics ranging from interpretability, to explanations as a learning signal, to embodied intelligence. He is currently a Staff Research Scientist at Google DeepMind. Before that, he completed his PhD in cognitive psychology at Stanford University and his BA in mathematics and physics at UC Berkeley.

This talk is part of the Language Technology Lab Seminars series.