Goals vs Utility Functions
If you have a question about this talk, please contact Adrià Garriga Alonso.

This week we read the “Goals vs Utility Functions” series by Rohin Shah. These are the posts under “Ambitious Value Learning” at https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc . Please read them before the session. Optional reading: “Coherent behaviour in the real world is an incoherent concept” by Richard Ngo.

These writings examine the arguments usually given to justify the premise that a general AI will necessarily optimize a long-term, explicit, simple goal. The authors find those arguments insufficient, and ultimately suggest that a better approach to AGI safety may be to construct agents without long-term goals of this kind.

As usual, there will be free pizza. The first half hour is for stragglers to finish reading.

Invite your friends to join the mailing list (https://lists.cam.ac.uk/mailman/listinfo/eng-safe-ai), the Facebook group (https://www.facebook.com/groups/1070763633063871) or the talks.cam page (https://talks.cam.ac.uk/show/index/80932). Details about the next meeting, the week’s topic and other events will be advertised in these places.

This talk is part of the Engineering Safe AI series.