
Probabilistic Inference for solving (PO)MDPs


If you have a question about this talk, please contact Shakir Mohamed.

Recent work has shown that finding a near-optimal policy in an MDP or POMDP can be framed as an inference problem. We will focus on:

“Probabilistic inference for solving (PO)MDPs,” Toussaint et al. (2006)

and perhaps talk a little bit about:

“Hierarchical POMDP Controller Optimization by Likelihood Maximization,” Toussaint et al. (2008)

and discuss implications for planning and reinforcement learning.
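As a rough sketch of the framing (illustrative notation, not necessarily the papers' own): rewards are encoded as the likelihood of an auxiliary binary variable R, and the unknown time horizon T is given a geometric prior, so that discounting arises as a mixture over horizons,

\[
  P(T = t) = (1 - \gamma)\,\gamma^{t}, \qquad
  P(R = 1;\, \pi) \;=\; \sum_{t=0}^{\infty} P(T = t)\, P(R = 1 \mid T = t;\, \pi)
  \;\propto\; \mathbb{E}_{\pi}\!\left[\, \sum_{t=0}^{\infty} \gamma^{t} r_{t} \right].
\]

Maximizing this likelihood over the policy parameters is then (up to a rescaling of the rewards) equivalent to maximizing expected discounted return, which the papers approach via likelihood maximization (EM), treating the horizon and the state–action trajectory as latent variables.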

This talk is part of the Machine Learning Reading Group @ CUED series.
