
Comprehensive AI Services


If you have a question about this talk, please contact Adrià Garriga Alonso.

A common critique of the motivation for AI safety is that it rests on many assumptions that are unproven, or, according to some, unlikely to be true. The most glaring is that artificial general intelligence (AGI) will take the form of a recursively self-improving agent that optimises for a long-term goal.

“Comprehensive AI services” (CAIS) is an answer to this critique by Eric Drexler of the Future of Humanity Institute. He provides an alternative model of AGI: one that emerges as a collection of AI-based software services, each of bounded scope and bounded time to act. If this is a very likely scenario for the emergence of AGI, the priorities of current AI safety research should change.

We will discuss the CAIS model and its implications for safety research.

Reading list (as usual, we start reading at 5 pm, but the discussion starts at 5:30 pm)

- Rohin Shah’s summary of CAIS

- Richard Ngo’s summary of CAIS

- Eric Drexler’s technical report: Reframing superintelligence: Comprehensive AI services as general intelligence. Very long; I suggest reading only a few choice sections. It would be good to write down which sections you read, so we know which to ask you to summarise and which to discuss together.

This talk is part of the Engineering Safe AI series.



© 2006-2023, University of Cambridge.