Reading Between the Lines: Using Language Models to Amplify Human Data in Robot Learning (MIT)
If you have a question about this talk, please contact Lucas Resck.

Human-in-the-loop robot learning faces a fundamental data challenge that general machine learning does not: unlike settings where we can collect massive offline datasets, robots must learn from limited, real-time human interactions. This creates a critical bottleneck: we need methods that can make the most of limited human input, or, in other words, that can learn a lot from a little. The key insight in this talk is that large language models, having been trained on vast amounts of human data, already possess the common-sense and semantic priors we need to fill in these gaps. When someone demonstrates a task or gives feedback, there is often implicit information that seems obvious to humans but that robots overlook completely. I discuss three approaches that use language models to "read between the lines" of human input. I demonstrate how LLMs can take sparse human labels and enable robots to generalize to complex expressions, extract hidden preferences that are implied by human behavior but not explicitly stated, and identify missing task concepts based on the situational context of human input. By strategically combining minimal human input with the rich prior knowledge embedded in language models, we can achieve the kind of sample-efficient learning that human-in-the-loop robotics demands for real-world deployment.

Bio: Andreea Bobu is an Assistant Professor at MIT in AeroAstro and CSAIL. She leads the Collaborative Learning and Autonomy Research Lab (CLEAR Lab), which develops autonomous agents that learn to do tasks for, with, and around people. Her goal is to ensure that these agents' behavior is consistent with human expectations, whether they interact with expert designers or novice users. She obtained her Ph.D. in Electrical Engineering and Computer Science at UC Berkeley with Anca Dragan in 2023. Prior to her Ph.D., she earned her Bachelor's degree in Computer Science and Engineering from MIT in 2017. She was the recipient of the Apple AI/ML Ph.D. fellowship, is a Rising Star in EECS and an R:SS and HRI Pioneer, and won the Best Paper Award at HRI 2020 and the Emerging Research Award at the International Symposium on the Mathematics of Neuroscience 2023. Before MIT, she was a Research Scientist at the AI Institute and an intern at NVIDIA in the Robotics Lab.

This talk is part of the Language Technology Lab Seminars series.