Partially Observable Markov Decision Processes (POMDPs)

If you have a question about this talk, please contact Yingzhen Li.

Partially observable Markov decision processes (POMDPs) are a general framework for sequential decision-making tasks in which the agent is uncertain about the state of the world. Many important real-world problems can be characterised as POMDPs, including financial trading, spoken dialogue systems and autonomous car navigation. In recent years reinforcement learning has also been characterised as a POMDP, opening up an entire literature of POMDP solutions to the infamous exploration-exploitation dilemma. In this talk we begin with simpler models, including the Markov process and the hidden Markov model, before moving on to their generalisation, the POMDP. We show an important property of POMDPs: their value functions are piecewise-linear and convex, a property which many POMDP algorithms exploit. Alternatively, tree-based search methods offer approximate POMDP solutions that can be computed online. Overall we aim to convey some of the main ideas discovered over the past 50 years of POMDP research and to bring the audience up to date with the present-day POMDP literature.
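The two ideas at the heart of the abstract, maintaining a belief over hidden states and representing the value function as a maximum over linear "alpha-vectors" (which is what makes it piecewise-linear and convex), can be sketched in a few lines. The example below is not from the talk; it uses the classic two-state "tiger" problem with hypothetical observation accuracy and alpha-vector values chosen for illustration.

```python
# Minimal sketch (hypothetical numbers) of two core POMDP ideas on the
# two-state "tiger" problem: Bayesian belief updates, and a value function
# represented as a max over alpha-vectors, making it piecewise-linear
# and convex (PWLC) in the belief.

# States: tiger-left, tiger-right. Listening yields an observation that
# is correct with probability 0.85 (an assumed, illustrative number).
P_CORRECT = 0.85

def belief_update(b, obs):
    """Update belief b = P(tiger-left) after hearing obs ('left' or 'right')."""
    # Likelihood of the observation under each hidden state.
    like_left = P_CORRECT if obs == "left" else 1 - P_CORRECT
    like_right = P_CORRECT if obs == "right" else 1 - P_CORRECT
    # Bayes' rule: reweight the prior by the likelihood, then normalise.
    unnorm_left = like_left * b
    unnorm_right = like_right * (1 - b)
    return unnorm_left / (unnorm_left + unnorm_right)

def value(b, alpha_vectors):
    """PWLC value function: the upper envelope of linear functions of b."""
    # Each alpha-vector defines a linear value alpha[0]*b + alpha[1]*(1-b);
    # taking the max over vectors gives a piecewise-linear convex function.
    return max(a[0] * b + a[1] * (1 - b) for a in alpha_vectors)

# Hypothetical alpha-vectors, e.g. for open-right, listen, open-left.
alphas = [(10.0, -100.0), (-1.0, -1.0), (-100.0, 10.0)]

b = 0.5                       # start maximally uncertain
b = belief_update(b, "left")  # hear the tiger on the left
print(round(b, 3))            # belief shifts toward tiger-left: 0.85
print(value(b, alphas))       # best achievable value at this belief
```

Exact POMDP algorithms such as value iteration exploit the PWLC property by manipulating finite sets of alpha-vectors rather than the continuous belief simplex directly.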

This talk is part of the Machine Learning Reading Group @ CUED series.


© 2006-2023, University of Cambridge.