Think Before you Speak: Next Gen LLMs with Global Reasoning and External Memory

If you have a question about this talk, please contact Lucas Resck.

The dominant paradigm in language modeling, scaling next-token prediction with parametric knowledge storage, delivers impressive capabilities but also imposes fundamental limitations: brittle factual memory, parameter inefficiency, and myopic reasoning. Progress requires a shift toward external memory and toward architectures that reason globally before committing to tokens.

In this talk I present two recent directions that support this claim. First, Limited-Memory Language Models externalize factual knowledge during pre-training, yielding models that are more controllable, verifiable, and parameter-efficient. Second, latent diffusion–augmented language models show how planning in a continuous latent space overcomes the foresight limitations of next-token prediction, improving reasoning and coherence.

Bio: Kilian Q. Weinberger is a Professor of Computer Science at Cornell University. He received his Ph.D. in Machine Learning from the University of Pennsylvania and his undergraduate degree in Mathematics and Computing from the University of Oxford. Prior to Cornell, he served as an Associate Professor at Washington University in St. Louis and as a research scientist at Yahoo! Research. His research focuses on Machine Learning and its applications, specifically learning under resource constraints, metric learning, AI in Science, computer vision, autonomous vehicles, Gaussian Processes, and deep learning. Dr. Weinberger is an ACM and AAAI Fellow (2024) and a Blavatnik National Awards Finalist (2021).

This talk is part of the Language Technology Lab Seminars series.
