BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Engineering Safe AI
SUMMARY:Amplification and dialogue as mechanisms for safe 
 advanced AI - Beth Barnes\, Computer Lab\, Univers
 ity of Cambridge
DTSTART;TZID=Europe/London:20180124T170000
DTEND;TZID=Europe/London:20180124T183000
UID:TALK99949AThttp://talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/99949
DESCRIPTION:Slides: https://valuealignment.ml/talks/2018-01-24
 -amplification.pdf\n\nThese techniques come at the
  problem of safety from a fairly different angle t
 han the things we've discussed so far.\n\nAmplific
 ation is the idea of bootstrapping a trusted core 
 system\, increasing its capabilities while maintai
 ning safety properties. Paul Christiano and the Op
 enAI safety team have worked on these ideas. One c
 urrent suggestion for how to do this has a lot in 
 common with functional programming. For some more 
 discussion see e.g. https://ai-alignment.com/alba-
 an-explicit-proposal-for-aligned-ai-17a55f60bbcf
LOCATION:Cambridge University Engineering Department\, CBL Seminar room
  BE4-38. For directions see http://learning.eng.cam.ac.uk/Public/Direct
 ions
CONTACT:Adrià Garriga Alonso
END:VEVENT
END:VCALENDAR
