Ways of worldfaking

If you have a question about this talk, please contact Jacob Stegenga.


Deepfakes, namely algorithmically created realistic images and videos that make it appear as if people did things they did not, undermine our fundamental epistemic standards and practices. Yet the nature of the epistemic threat they pose remains elusive. After all, fictional or distorted representations of reality are as old as cinema. Existing accounts of technology as extending the senses (Humphreys 2004), mediating between subjects and the world (Verbeek 2011), or translating between actants (Latour 2005) cannot characterize this threat. Existing concrete accounts of the threat of deepfakes by social epistemologists such as Regina Rini (2020) and Don Fallis (2020) fall short of their target.

Employing the notions of artifact affordance and technological possibility (Record 2013; Davis 2020), I argue that the epistemic threat of deepfakes (and CGI more generally) is that, for the first time, they afford ordinary computer users the practicable possibility of fairly cheaply and effortlessly making fictional worlds indistinguishable from the real world. Normatively, a deepfake is epistemically malignant when (1) a reasonable person is misled to believe that the fictional world is the actual world, and (2) she forms beliefs about the actual world on matters that are morally or epistemically important. For example, a satirical deepfake of Queen Elizabeth dancing to a hip-hop song is benign because a reasonable person understands that it is fiction. But a deepfake of a misogynistic speech by Obama is malignant because it misleads a reasonable person about Obama's views on women. I illustrate how this analysis generalizes to other case studies, such as a Photoshop makeover or a QAnon discussion group.

This talk is part of the CamPoS (Cambridge Philosophy of Science) seminar series.


© 2006-2024, University of Cambridge.