
Reinforcement Learning in continuous state-spaces


If you have a question about this talk, please contact Philip Sterne.

For this journal club I am interested in continuous state-space reinforcement learning. My gut feeling is that, with enough simplifying assumptions, the problem becomes solvable (setting aside computational concerns).

It would be great if everyone attending could spend some time thinking about which assumptions would make the solution easier (yet still interesting), and what form the various (intractable) integrals would take.
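For concreteness, the central intractable object is presumably the Bellman backup over a continuous state space. In a standard Markov decision process formulation (my framing, not taken verbatim from the talk), it reads:

```latex
V^{*}(s) = \max_{a} \left[ r(s,a) + \gamma \int_{\mathcal{S}} p(s' \mid s, a)\, V^{*}(s')\, \mathrm{d}s' \right]
```

One well-known simplification worth considering: if the transition density \(p(s' \mid s, a)\) is Gaussian and \(V^{*}\) is represented by a Gaussian Process posterior mean with a squared-exponential kernel, the integral above can be evaluated in closed form, since it reduces to Gaussian expectations of Gaussian-shaped basis functions.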

If you believe that Gaussian Processes might provide a good starting point for your thoughts, have a look at the following papers (though they are not essential if you have your own ideas).
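As one illustration of why Gaussian Processes are an attractive starting point, here is a minimal GP regression sketch in plain NumPy: fit observed returns at a few sampled continuous states, then predict the value at an unvisited state. All function names, the toy data, and the hyperparameters are mine for illustration, not taken from the papers above.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D states."""
    sq = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    # Cholesky-based solve for numerical stability.
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)
    return mean, var

# Toy example: noisy value estimates observed at a few continuous states.
X = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.sin(X)  # stand-in for sampled returns, not real RL data
mean, var = gp_posterior(X, y, np.array([0.75]))
```

The appeal for the journal club discussion: the posterior mean is a sum of kernel functions centred on observed states, so if those kernels are Gaussian-shaped, expectations under Gaussian transition dynamics stay analytic.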

There is also a relevant video lecture by Engel available here.

This talk is part of the Machine Learning Journal Club series.



