Tutorial: Generalization in Reinforcement Learning: From Foundations to New Frontiers
SCL - Bridging Stochastic Control and Reinforcement Learning

Reinforcement learning (RL) and optimal control share a deep intellectual heritage in addressing sequential decision-making under uncertainty. This tutorial develops a computer scientist's perspective on RL theory, one that places generalization, sample efficiency, and computational tractability at the center of the analysis. A particular focus will be on the stylized setting of linear function approximation, which offers the best prospects for developing and understanding tractable algorithms. The tutorial will illustrate how this perspective shapes problem formulations, abstractions, and algorithmic insights through several representative results. It will conclude by considering how similar ideas might inform reasoning and planning in large language models, raising more questions than answers.

The tutorial follows the new MIT Press textbook "Multi-Agent Reinforcement Learning: Foundations and Modern Approaches", available at www.marl-book.com.

This talk is part of the Isaac Newton Institute Seminar Series.