
State Space Abstraction for Reinforcement Learning


If you have a question about this talk, please contact mv310.

Reinforcement learning (RL) is a method for solving sequential decision tasks modelled as Markov decision processes with unknown parameters. Unfortunately, the computational complexity of RL hinders its application to many real-world problems. Function approximation is a common technique for computing approximate solutions quickly. An alternative is state space abstraction: discarding irrelevant state information. Because the size of a tabular state space grows exponentially with its dimensionality, abstracting away irrelevant dimensions can speed up learning exponentially. A further benefit is the discovery of powerful generalisations over the original state space. This talk reviews state space abstraction. We introduce different types of abstraction and their consequences for solution accuracy. We will also discuss predictive state representations: a compact way to model dynamical systems using predictions of observable quantities.
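As a minimal sketch of the idea (the toy chain task, the irrelevant "colour" feature, and all function names here are illustrative assumptions, not taken from the talk), the following compares tabular Q-learning on a full state space against the same learner using an abstraction that discards an irrelevant state dimension:

```python
import random

# Toy chain MDP: the agent walks on positions 0..4; reaching position 4
# yields reward 1. Each state also carries an irrelevant "colour" feature
# in {0, 1, 2} that changes randomly and affects neither dynamics nor
# reward. (Illustrative assumption, not the talk's example.)
N_POS, N_COLOURS, ACTIONS = 5, 3, (-1, +1)  # actions: move left / right

def step(pos, action):
    """Environment dynamics: reward depends only on position."""
    pos = max(0, min(N_POS - 1, pos + action))
    colour = random.randrange(N_COLOURS)      # irrelevant feature
    reward = 1.0 if pos == N_POS - 1 else 0.0
    return pos, colour, reward

def q_learn(abstract, episodes=2000, alpha=0.2, gamma=0.9, eps=0.1):
    """Tabular Q-learning; abstract=True maps (pos, colour) -> pos."""
    phi = (lambda p, c: p) if abstract else (lambda p, c: (p, c))
    Q = {}
    for _ in range(episodes):
        pos, colour = 0, 0
        for _ in range(20):
            s = phi(pos, colour)
            Q.setdefault(s, [0.0, 0.0])
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[s][i])
            pos2, colour2, r = step(pos, ACTIONS[a])
            s2 = phi(pos2, colour2)
            Q.setdefault(s2, [0.0, 0.0])
            # standard Q-learning update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            pos, colour = pos2, colour2
            if r == 1.0:
                break
    return Q

random.seed(0)
q_full = q_learn(abstract=False)
q_abs = q_learn(abstract=True)
# The abstracted table has at most N_POS entries, versus up to
# N_POS * N_COLOURS for the full one; both learn the same task.
print(len(q_full), len(q_abs))
```

The abstraction `phi` here is a state aggregation: all full states sharing a position are treated as one abstract state. Because position alone is a sufficient statistic for reward and transitions in this toy task, the abstraction loses no solution accuracy while shrinking the table by a factor of `N_COLOURS`.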

This talk is part of the Machine Learning Reading Group @ CUED series.



© 2006-2019 Talks.cam, University of Cambridge.