Getting the car up the mountain - Bayesian Reinforcement Learning
If you have a question about this talk, please contact jcattin.
Almost everyone is aware that computers can drive cars, control robots, and beat world-class players at chess. In this talk, we discuss how this works by looking at a simple toy example. We then develop a statistical model that allows us to quantify uncertainty in these control settings. The advantage of the statistical model is that we can not only say which action is optimal (should the car accelerate or slow down in a given situation?) but also state how certain we are that a human controller would take the same action. The talk is roughly based on Sections 4 and 5 of the following preprint: https://arxiv.org/abs/2012.10943, but I will largely omit the mathematical technicalities.
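To make the idea concrete, here is a minimal sketch (not the model from the preprint): the classic mountain-car dynamics from the reinforcement-learning literature, together with a toy computation of how confident a posterior over action values is that a given action is optimal. The posterior samples below are purely illustrative, generated from made-up Gaussians rather than any fitted model.

```python
import math
import random

def step(pos, vel, action):
    """One step of the classic mountain-car dynamics; action in {-1, 0, +1}
    (decelerate, coast, accelerate). Constants follow the standard benchmark."""
    vel += 0.001 * action - 0.0025 * math.cos(3 * pos)
    vel = max(-0.07, min(0.07, vel))   # velocity is clipped
    pos += vel
    if pos < -1.2:                     # inelastic collision with the left wall
        pos, vel = -1.2, 0.0
    return pos, vel

def prob_action_optimal(q_samples):
    """Given posterior samples of action values (each sample is a dict
    mapping action -> value), return, for each action, the fraction of
    samples in which that action has the highest value. This is a Monte
    Carlo estimate of the posterior probability that the action is optimal."""
    counts = {a: 0 for a in q_samples[0]}
    for sample in q_samples:
        best = max(sample, key=sample.get)
        counts[best] += 1
    n = len(q_samples)
    return {a: c / n for a, c in counts.items()}

random.seed(0)
# Hypothetical posterior for one state: accelerating (+1) is usually,
# but not always, the best action, so its probability is high but below 1.
samples = [{-1: random.gauss(0.0, 1.0),
             0: random.gauss(0.2, 1.0),
            +1: random.gauss(1.0, 1.0)} for _ in range(1000)]
probs = prob_action_optimal(samples)
```

The point of `prob_action_optimal` is exactly the distinction drawn in the abstract: a point estimate only tells us which action looks best, whereas the posterior samples let us attach a probability to that recommendation.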
This talk is part of the Darwin College Science Seminars series.