University of Cambridge · Machine Learning @ CUED

Game Playing Meets Game Theory: Strategic Learning from Simulated Play


If you have a question about this talk, please contact Adrian Weller.

Recent breakthroughs in AI game-playing, including AlphaGo (Go), AlphaZero (Chess, Shogi, and Go), AlphaStar (StarCraft II), and Libratus and DeepStack (Poker), have demonstrated superhuman performance in a range of recreational strategy games. Extending beyond artificial domains presents several challenges, but the basic idea of learning from simulated play employed in most of these systems is broadly applicable to any domain that can be accurately simulated. This thread of work naturally dovetails with methods developed in the Strategic Reasoning Group at Michigan for reasoning about simulation-based games. I will recap some of this work, with emphasis on how new advances in deep reinforcement learning can contribute to a major broadening of the scope of game-theoretic reasoning for complex multiagent domains.
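To make the "simulation-based games" idea concrete, the sketch below shows the core loop of empirical game-theoretic analysis in miniature: repeatedly simulate each strategy profile, average the noisy payoffs into an empirical payoff matrix, and then reason game-theoretically over that estimated matrix (here, by finding pure-strategy Nash equilibria). The game, payoff numbers, and simulator are invented for illustration and are not from the talk; a real system would query an actual game simulator.

```python
import random

# Hypothetical simulator: noisy payoffs for a 2x2 symmetric game whose
# true payoff bimatrix is a Prisoner's Dilemma.
# Strategies: 0 = cooperate, 1 = defect. All numbers are illustrative.
TRUE_PAYOFFS = {
    (0, 0): (3.0, 3.0),
    (0, 1): (0.0, 5.0),
    (1, 0): (5.0, 0.0),
    (1, 1): (1.0, 1.0),
}

def simulate(profile, rng):
    """One noisy play-out of a strategy profile (stand-in for a game simulator)."""
    u1, u2 = TRUE_PAYOFFS[profile]
    return u1 + rng.gauss(0, 0.1), u2 + rng.gauss(0, 0.1)

def estimate_game(num_samples=200, seed=0):
    """Average simulated payoffs into an empirical payoff matrix."""
    rng = random.Random(seed)
    empirical = {}
    for profile in TRUE_PAYOFFS:
        totals = [0.0, 0.0]
        for _ in range(num_samples):
            u1, u2 = simulate(profile, rng)
            totals[0] += u1
            totals[1] += u2
        empirical[profile] = (totals[0] / num_samples, totals[1] / num_samples)
    return empirical

def pure_nash(empirical):
    """Profiles where neither player gains from a unilateral deviation."""
    equilibria = []
    for (s1, s2), (u1, u2) in empirical.items():
        best1 = all(empirical[(d, s2)][0] <= u1 for d in (0, 1))
        best2 = all(empirical[(s1, d)][1] <= u2 for d in (0, 1))
        if best1 and best2:
            equilibria.append((s1, s2))
    return equilibria

game = estimate_game()
print(pure_nash(game))  # mutual defection (1, 1) is the lone pure equilibrium
```

The talk's broader point is that deep reinforcement learning can replace the fixed strategy set above with learned policies, so the same estimate-then-analyze loop scales to much richer games.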

This talk is part of the Machine Learning @ CUED series.



