When is a state a belief state?

If you have a question about this talk, please contact Daniel Kornai.

This talk has been canceled/deleted

When making predictions under uncertainty, neural networks often exhibit behaviors reminiscent of probabilistic inference—even without explicit inductive biases. In reinforcement learning (RL), uncertainty about latent variables is naturally captured by Partially Observable Markov Decision Processes (POMDPs). These models apply broadly, from games like poker to domains such as healthcare and autonomous driving.

In POMDPs, the optimal agent performs sequential Bayesian filtering under a generative model to infer the latent state of the environment, maintaining a “belief” over that state. Surprisingly, naively trained recurrent neural networks (RNNs) can outperform dedicated probabilistic inference methods tailored to POMDPs. Recent work has shown that RNNs trained to learn value functions can develop belief-like representations without access to the generative model, though a theoretical justification for this phenomenon has been lacking.
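
For concreteness, here is a minimal sketch of the belief update described above, assuming a discrete POMDP with a known transition tensor T and observation matrix O; all names and shapes here are illustrative assumptions, not material from the talk:

    import numpy as np

    def belief_update(belief, action, observation, T, O):
        # One step of sequential Bayesian filtering in a discrete POMDP.
        # belief: (S,) prior distribution over latent states
        # T: (A, S, S) transitions, T[a, s, s2] = p(s2 | s, a)
        # O: (S, Z) observations, O[s2, z] = p(z | s2)
        predicted = belief @ T[action]            # predict: sum_s b(s) p(s2 | s, a)
        updated = predicted * O[:, observation]   # correct: weight by observation likelihood
        return updated / updated.sum()            # renormalize to a proper distribution

An RNN in the abstract's setting receives only the action-observation stream and must learn a hidden-state update playing the role of this filter, without access to T or O.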

In this talk, I will connect these observations by presenting theoretical arguments for when and why belief-like representations emerge in deep RL agents. I will also demonstrate how such theoretical insights can inform and justify auxiliary loss objectives used in state-of-the-art architectures, such as DreamerV3.

This talk is part of the Computational Neuroscience series.
