Just asking questions (MIT)
If you have a question about this talk, please contact Lucas Resck.

Abstract: In the age of deep networks, “learning” almost invariably means “learning from examples”. We train language models with human-generated text and labeled preference pairs, image classifiers with large datasets of images, and robot policies with rollouts or demonstrations. When human learners acquire new concepts and skills, they often do so with richer supervision, especially in the form of language: we learn new concepts from examples accompanied by descriptions or definitions, and new skills from demonstrations accompanied by instructions. Crucially, language-based supervision involves not only instructions but questions. Students ask questions to elicit the most useful pieces of supervision, and teachers ask questions to probe student knowledge and to encourage them to acquire new skills or aspects of understanding on their own. This talk will cover a few recent projects on building computational models that can ask good questions for both learning and teaching, with applications spanning LM alignment, policy learning, and education. This is joint work with Belinda Li, Alex Tamkin, Noah Goodman, Andi Peng, Ilia Sucholutsky, Nishanth Kumar, Julie A. Shah, Andreea Bobu, Alexis Ross, Gabe Grand, Valerio Pepe and Josh Tenenbaum.

Bio: Jacob Andreas is an associate professor at MIT in EECS and CSAIL. He completed his PhD at Berkeley, where he was a member of the Berkeley NLP Group and the Berkeley AI Research Lab. He has also spent time with the Cambridge NLIP Group, the NLP Group, and the former Center for Computational Learning Systems at Columbia. His research focuses on language as a communicative and computational tool. He studies how people learn to understand and generate novel utterances with remarkably little data, and how language facilitates the acquisition of new concepts and reasoning. Recognizing that current machine learning techniques fall short of human abilities in language understanding and learning from language, his work aims to uncover the computational foundations of language learning and to develop general-purpose intelligent systems capable of effective human communication and learning from human guidance.

This talk is part of the Language Technology Lab Seminars series.