
Stochastic control as an inference problem


If you have a question about this talk, please contact Zoubin Ghahramani.

Stochastic optimal control theory deals with the problem of computing an optimal set of actions to attain some future goal. Examples are found in many contexts, such as motor control tasks in robotics, planning and scheduling, or managing a financial portfolio. Computing the optimal control is typically very difficult due to the size of the state space and the stochastic nature of the problem.

We introduce a class of stochastic optimal control problems that can be mapped onto a probabilistic inference problem. This duality between control and inference is well known. The novel aspects of the present formulation are that the optimal solution is given by the minimum of a free energy, and that the problem is linked to inference in graphical models. We can thus apply principled approximations, such as belief propagation or the cluster variation method, to obtain efficient approximate solutions. We illustrate the method on the task of stacking blocks. If time permits, we will discuss distributed (multi-agent) solutions and comment on the partially observable case.
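The control-to-inference mapping can be sketched on a toy problem. The following is a minimal illustration, not the speaker's implementation: it assumes the linearly solvable (KL-control) formulation, in which the desirability z(x) = exp(-J(x)) of the optimal cost-to-go J satisfies a linear fixed-point equation under the passive dynamics, so computing the optimal control reduces to an inference-like propagation of z. The chain problem, state cost q, and all variable names here are illustrative choices.

```python
import numpy as np

# Toy problem (hypothetical): a 5-state chain where the agent should
# reach state 4, which carries zero running cost.
n = 5
q = np.ones(n)        # running state cost q(x)
q[-1] = 0.0           # goal state is cost-free

# Passive (uncontrolled) dynamics P: a random walk with reflecting ends.
P = np.zeros((n, n))
for i in range(n):
    P[i, max(i - 1, 0)] += 0.5
    P[i, min(i + 1, n - 1)] += 0.5

# In the KL-control formulation the desirability z = exp(-J) obeys the
# linear equation z = exp(-q) * (P @ z). We solve it by fixed-point
# (power) iteration, normalizing to avoid underflow.
z = np.ones(n)
for _ in range(200):
    z = np.exp(-q) * (P @ z)
    z /= z.max()

J = -np.log(z)        # optimal cost-to-go, up to an additive constant

# The optimal controlled dynamics tilt the passive dynamics toward
# desirable states: u*(x'|x) proportional to P(x'|x) * z(x').
Pu = P * z[None, :]
Pu /= Pu.sum(axis=1, keepdims=True)
```

Because the equation for z is linear, the same machinery used for inference in graphical models (message passing, variational approximations) can be brought to bear when the state space is large and structured, which is the point of the mapping discussed in the talk.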

This talk is part of the Machine Learning @ CUED series.


