
Representation-based Reinforcement Learning and Control for Dynamical Systems


SCLW01 - Bridging Stochastic Control And Reinforcement Learning: Theories and Applications

The explosive growth of machine learning and data-driven methodologies has revolutionized numerous fields. Yet translating these successes to the domain of dynamical physical systems remains a significant challenge. Closing the loop from data to actions in these systems faces many difficulties, stemming from the need for sample efficiency and computational feasibility, along with further requirements such as verifiability, robustness, and safety. In this talk, we present a framework that bridges this gap by introducing novel representations for developing nonlinear stochastic control and reinforcement learning algorithms. Our approach enables efficient, safe, robust, and scalable decision-making with provable guarantees. We further demonstrate how these representations help close the sim-to-real gap, enhance data efficiency in imitation learning, and enable scalable computation of localized policies for large-scale nonlinear networked systems. Lastly, we briefly present our latest work on using diffusion models to represent control policies, on training diffusion policies online, and on their applications to manipulation tasks.
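The general idea of representation-based reinforcement learning that the abstract alludes to can be illustrated with a minimal sketch. Everything below — the feature map, the toy linear system, and the fitting procedure — is our own assumption for illustration, not material from the talk: a Q-function that is linear in a fixed feature representation, fitted by least-squares value iteration on transitions from a scalar stochastic linear system.

```python
import numpy as np

# Hypothetical illustration (not code from the talk): a Q-function linear in
# a feature representation phi(s, a), fitted by least-squares value iteration
# on a toy scalar stochastic linear system.
rng = np.random.default_rng(0)

def phi(s, a):
    # Hand-picked quadratic features standing in for a learned representation;
    # the true Q of this linear-quadratic problem lies in their span.
    return np.array([1.0, s, a, s * a, s**2, a**2])

# Transitions from x' = 0.9 x + 0.5 u + noise, with cost c = x^2 + 0.1 u^2.
n = 200
S = rng.uniform(-1, 1, n)
A = rng.uniform(-1, 1, n)
S2 = 0.9 * S + 0.5 * A + 0.01 * rng.standard_normal(n)
C = S**2 + 0.1 * A**2

gamma = 0.9
a_grid = np.linspace(-1, 1, 21)          # coarse action grid for the min
Phi = np.array([phi(s, a) for s, a in zip(S, A)])
w = np.zeros(6)
for _ in range(50):                      # fitted value iteration on w
    Q_next = np.array([[phi(s2, a) @ w for a in a_grid] for s2 in S2])
    targets = C + gamma * Q_next.min(axis=1)
    w, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

# Greedy action at s = 0.5 pushes the state back toward the origin (a < 0).
best_a = a_grid[np.argmin([phi(0.5, a) @ w for a in a_grid])]
print(best_a)
```

Because the learning problem reduces to a regression over the feature weights, the same skeleton scales to richer learned representations; the guarantees claimed in the talk would hinge on how faithfully the representation captures the system's dynamics.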

This talk is part of the Isaac Newton Institute Seminar Series.


 

© 2006-2025 Talks.cam, University of Cambridge.