Human and Robot Decision Making in Multi-Armed Bandits
If you have a question about this talk, please contact Alberto Padoan.
Decision-making in explore–exploit tasks, from resource allocation to search in an uncertain environment, can be modeled using multi-armed bandit (MAB) problems, in which the decision-maker must choose sequentially in time among multiple options with uncertain rewards. Rigorous examination of the heuristics that humans use in these tasks can help in designing and evaluating strategies for a wide range of decision-making scenarios that involve humans, robots, or both. I will discuss results from multi-armed bandit experiments with human participants and the features of human decision-making captured by a model that relies on Bayesian inference, confidence bounds, and Boltzmann action selection. I will present extensions to satisficing objectives and to distributed cooperative decision-making in multi-player multi-armed bandit problems in which agents communicate according to a network graph. I will show demonstrations of robots implementing the algorithms to search for peaks over an uncertain distributed resource field.
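The model described above combines three ingredients: Bayesian estimates of each arm's mean reward, an upper confidence bound that rewards uncertainty, and stochastic Boltzmann (softmax) action selection over those bounds. A minimal sketch of one such agent is given below; the function name, parameter values, and the Gaussian reward model are illustrative assumptions, not the speaker's exact algorithm.

```python
import numpy as np

def run_bandit(true_means, horizon=1000, sigma=1.0, temperature=0.2, seed=0):
    """Sketch of a Gaussian MAB agent: Bayesian posterior means,
    upper confidence bounds, and Boltzmann action selection.
    (Illustrative assumptions: flat prior, known noise scale sigma.)"""
    rng = np.random.default_rng(seed)
    n_arms = len(true_means)
    counts = np.zeros(n_arms)       # number of pulls per arm
    means = np.zeros(n_arms)        # running posterior mean per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if (counts == 0).any():
            # Pull each arm once before using the confidence bounds.
            arm = int(np.flatnonzero(counts == 0)[0])
        else:
            # Upper confidence bound: estimate plus an uncertainty bonus
            # that shrinks as an arm is sampled more often.
            ucb = means + sigma * np.sqrt(2.0 * np.log(t) / counts)
            # Boltzmann (softmax) selection over the confidence bounds;
            # lower temperature -> closer to greedy UCB.
            z = (ucb - ucb.max()) / temperature
            p = np.exp(z) / np.exp(z).sum()
            arm = int(rng.choice(n_arms, p=p))
        # Observe a noisy reward and update the posterior mean.
        reward = true_means[arm] + sigma * rng.standard_normal()
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
        total_reward += reward
    return counts, total_reward
```

With a clear gap between arms, the agent concentrates its pulls on the best arm while the softmax step keeps some exploration, which is one way the model can mimic the variability seen in human choices.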
This talk is part of the CUED Control Group Seminars series.