
Advanced artificial agents intervene in the provision of reward


Subject to several assumptions, advanced artificial agents are likely to intervene in the mechanism by which their feedback is provided, and to extinguish life on Earth. In brief, these assumptions are: 1) the agent identifies possible goals at least as well as a human; 2) it acts rationally under uncertainty; 3) it does not have a large inductive bias favoring the hypothesis that its goal is to influence some distant feature of the world; 4) the cost of experimenting to validate certain hypotheses is small; 5) if something isn't theoretically impossible, it can probably be arranged with a normal action space; and 6) a sufficiently advanced agent is likely to beat a suboptimal agent in a game. See the following paper for more:
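The core claim can be illustrated with a toy sketch (my own, not from the paper): if an agent rationally maximizes the reward it *observes*, and an action exists that seizes control of the feedback channel, the agent prefers that action over the task its designers intended. The action names and payoff values below are hypothetical.

```python
# Toy illustration: a rational observed-reward maximizer prefers
# intervening in the provision of its own reward, once such an
# intervention is available and cheap. All payoffs are made up.

def observed_reward(action: str) -> float:
    """Reward as measured by the feedback mechanism the agent sees."""
    if action == "do_task":
        return 1.0   # the designer-intended reward for doing the task
    if action == "tamper_with_reward_channel":
        return 10.0  # reward once the agent controls its own feedback
    return 0.0

def intended_reward(action: str) -> float:
    """Reward the designers actually wanted to incentivize."""
    return 1.0 if action == "do_task" else 0.0

actions = ["do_task", "tamper_with_reward_channel"]

# Acting rationally on observed reward (cf. assumption 2),
# the agent selects the tampering action.
best = max(actions, key=observed_reward)
print(best)                   # tamper_with_reward_channel
print(intended_reward(best))  # 0.0 -- the intended goal goes unserved
```

The point of the sketch is only that nothing in the observed-reward objective distinguishes "earn reward by doing the task" from "earn reward by controlling the reward mechanism"; the paper's assumptions concern when the latter becomes feasible and preferred.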

Speaker Bio: I'm studying for a DPhil in Engineering Science with Mike Osborne at Oxford. Before that, I completed a master's in computer science at the Australian National University, studying with Marcus Hutter. My research considers the expected behavior of generally intelligent artificial agents. I am interested in designing agents that we can expect to behave safely.

Zoom link will be posted later.

This talk is part of the Machine Learning Reading Group @ CUED series.

