
Beyond Interpolation: Extrapolative Reasoning with Reinforcement Learning and Graph Neural Networks


If you have a question about this talk, please contact Pietro Lio.

Despite impressive progress, many neural architectures fail to generalize beyond their training distribution. Learning to reason in a correct and generalizable way therefore remains a fundamental challenge in machine learning. Logic puzzles provide an excellent testbed in this respect: we can fully understand and control the learning environment, and evaluate performance on previously unseen, larger, and more difficult puzzles that follow the same underlying rules. Since traditional approaches often struggle to represent such scalable logical structures, we propose to model these puzzles as graphs. We then investigate the key factors that enable the proposed models to learn generalizable solutions in a reinforcement learning setting, focusing on the inductive bias of the architecture, different reward systems, and the role of recurrent modeling in enabling sequential reasoning. Through extensive experiments, we demonstrate how these elements contribute to successful extrapolation to increasingly complex puzzles. Our insights and frameworks offer a systematic way to design learning-based systems capable of generalizable reasoning beyond interpolation.
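To make the graph-based framing concrete, here is a minimal sketch (not code from the talk; the function name `puzzle_graph` and the Sudoku-style example are purely illustrative). Cells become nodes and mutual-exclusion constraints become edges; because the same rules generate the edges at any grid size, the representation scales naturally to larger puzzles, which is what makes extrapolation testable.

```python
def puzzle_graph(n=4, box=2):
    """Return nodes and constraint edges for an n x n Sudoku-like grid.

    Two cells are linked by an edge if they share a row, a column,
    or a box-by-box block, i.e. they may not take the same value.
    """
    nodes = [(r, c) for r in range(n) for c in range(n)]
    edges = set()
    for a in nodes:
        for b in nodes:
            if a >= b:  # count each unordered pair once
                continue
            same_row = a[0] == b[0]
            same_col = a[1] == b[1]
            same_box = (a[0] // box, a[1] // box) == (b[0] // box, b[1] // box)
            if same_row or same_col or same_box:
                edges.add((a, b))
    return nodes, edges

nodes, edges = puzzle_graph(4, 2)
print(len(nodes), len(edges))  # prints: 16 56
```

A graph neural network operating on this structure sees the same local constraint pattern regardless of grid size, so in principle a policy trained on small puzzles can be run unchanged on `puzzle_graph(9, 3)` and beyond.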

google meet link: https://meet.google.com/vmn-iwhu-tas

This talk is part of the Foundation AI series.
