
Trust in “Moral” Machines


If you have a question about this talk, please contact Psychology Reception.

As the use of artificial intelligence (AI) becomes more widespread, machine systems are increasingly required to display not only artificial intelligence but artificial morality too. AI is already used to aid decisions about life support, criminal sentencing, and the allocation of scarce medical resources, and so-called “moral machines” have even been proposed as “artificial moral advisors” that give moral advice and help improve human moral decision making. In this talk, I will explore what it means to trust AI in the moral domain. Drawing on insights from social psychology and moral cognition, I will discuss how people conceptualise trust in AI, how judgments of effectiveness and ethicality intertwine, and how perceptions of intelligence shape attributions of morality. I will consider how people trust “artificial moral advisors,” and how people trust other humans who rely on AI for socio-relational tasks. Drawing on these findings, I will ask whether – and in what sense – we should place trust in “moral” machines, and what kind of future we are willing to accept as AI takes on roles that shape not only our decisions, but our relationships, values, and humanity itself.

Host: Prof Simone Schnall (ss877@cam.ac.uk)

This talk will be recorded and uploaded to the Zangwill Club YouTube channel in due course.

This talk is part of the Zangwill Club series.

