
Tutorial: Generalization in Reinforcement Learning: From Foundations to New Frontiers



SCL - Bridging Stochastic Control And Reinforcement Learning

Reinforcement learning (RL) and optimal control share a deep intellectual heritage in addressing sequential decision-making under uncertainty. This tutorial develops a computer scientist’s perspective on RL theory—one that places generalization, sample efficiency, and computational tractability at the center of the analysis. A particular focus will be on the stylized setting of linear function approximation, which offers the best prospects for developing and understanding tractable algorithms. The tutorial will illustrate how this perspective shapes problem formulations, abstractions, and algorithmic insights through several representative results. It will conclude by considering how similar ideas might inform reasoning and planning in large language models, raising more questions than answers.

The tutorial follows the new MIT Press textbook “Multi-Agent Reinforcement Learning: Foundations and Modern Approaches”, available at www.marl-book.com.
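To make the linear function approximation setting mentioned above concrete, here is a minimal, hypothetical sketch (not taken from the talk) of TD(0) value estimation with a linear feature map, run on a toy 5-state random-walk chain:

```python
import numpy as np

# Hypothetical toy example: TD(0) with linear value-function approximation,
# V(s) = w . phi(s). The 5-state random-walk chain and all parameters here
# are illustrative assumptions, not from the tutorial itself.
rng = np.random.default_rng(0)
n_states, gamma, alpha = 5, 0.9, 0.1

def features(s):
    # One-hot features for clarity; in general phi(s) is any fixed feature map.
    phi = np.zeros(n_states)
    phi[s] = 1.0
    return phi

w = np.zeros(n_states)
for _ in range(5000):
    # Sample a start state uniformly and take one random-walk step.
    s = int(rng.integers(n_states))
    s_next = min(max(s + int(rng.choice([-1, 1])), 0), n_states - 1)
    r = 1.0 if s_next == n_states - 1 else 0.0  # reward on reaching the right end
    # TD(0) update: w += alpha * (r + gamma * V(s') - V(s)) * phi(s)
    td_error = r + gamma * (w @ features(s_next)) - (w @ features(s))
    w += alpha * td_error * features(s)

print(np.round(w, 2))  # states nearer the rewarding right end get higher values
```

With one-hot features this reduces to tabular TD(0); swapping in a lower-dimensional feature map is where the generalization questions the tutorial studies begin to bite.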

This talk is part of the Isaac Newton Institute Seminar Series.

