
Learning in Continuous-Time Linear-Quadratic Games with Heterogeneous Players



SCLW01 - Bridging Stochastic Control And Reinforcement Learning: Theories and Applications

Multi-agent reinforcement learning, despite its popularity and empirical success, faces significant scalability challenges in large-population dynamic games, especially with heterogeneous players. This talk will use fundamental linear-quadratic games as an example and present recent frameworks that provide principled designs for efficient and scalable learning algorithms in multi-agent systems with heterogeneous players. In the first part, we will introduce the Graphon Mean Field Game approach and present provably convergent policy gradient algorithms for large-population games in which agents interact weakly through a symmetric graph. The second part of the talk will focus on the Alpha-Potential Game framework, which enables the development of efficient learning algorithms for asymmetric network games that go beyond mean-field approximations. This talk is based on joint work with Yufei Zhang.
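To fix ideas, here is a minimal, hypothetical sketch of the single-agent building block underlying such methods: policy gradient over linear policies for a scalar discrete-time linear-quadratic problem. All constants (`a`, `b`, `q`, `r`, the learning rate) are illustrative, and the finite-difference gradient estimate stands in for the provably convergent policy gradient schemes discussed in the talk; this is not the graphon or alpha-potential algorithm itself.

```python
# Hypothetical sketch: policy gradient on a scalar LQ problem.
# Dynamics: x_{t+1} = a*x_t + b*u_t; stage cost: q*x^2 + r*u^2.
# We search over linear feedback policies u = -k*x.

def cost(k, a=0.9, b=0.5, q=1.0, r=0.1, x0=1.0, horizon=200):
    """Finite-horizon closed-loop cost of the linear policy u = -k*x."""
    x, total = x0, 0.0
    for _ in range(horizon):
        u = -k * x
        total += q * x * x + r * u * u
        x = a * x + b * u
    return total

def policy_gradient(k=0.0, lr=0.05, steps=300, eps=1e-4):
    """Gradient descent on the gain k, using a central finite-difference
    estimate of dC/dk in place of an analytic policy gradient."""
    for _ in range(steps):
        grad = (cost(k + eps) - cost(k - eps)) / (2 * eps)
        k -= lr * grad
    return k

k_star = policy_gradient()
```

For this scalar problem the learned gain can be checked against the discrete-time Riccati solution (here k* ≈ 1.362); the LQ structure is what makes such gradient methods analyzable, which is why the talk uses linear-quadratic games as the running example.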

This talk is part of the Isaac Newton Institute Seminar Series.



 

© 2006-2025 Talks.cam, University of Cambridge.