University of Cambridge, Engineering Safe AI

Last term summary + discussion of topic importance


If you have a question about this talk, please contact Adrià Garriga Alonso.

How can we design AI systems that reliably act according to the true intent of their users, even as those systems become more capable? This value alignment problem grows more pressing as AI capabilities increase.

I’m going to present a summary of the topics we covered last term and where they fall in the landscape of AI safety research so far. If you are new, this is a great time to start coming! Afterwards, in small groups, we will discuss which of these topics might be more interesting or important than others, and what we should cover in the remaining sessions of the term.

Don’t worry: the organisers have more session topics than sessions, but we want to know what everyone else would like to do.

We hope to see you there!


Topic mind map:

This talk is part of the Engineering Safe AI series.



