CESS researcher Andrew Lewis and his co-authors have had their work on deepfake technology covered by The Independent.
They show that 1) people are not good at discerning quality deepfakes from genuine videos, and 2) even when warned that one of the five videos they will see has been altered, the vast majority of people (78%) still cannot detect the deepfake. As summarised in the paper: “These experiments show that human discernment is largely inadequate in detecting deepfakes, even when participants are directly warned that the content they view may have been altered.” A practical interpretation of Experiment 2 is that — unlike how accuracy prompts and other interventions can help individuals better spot textual misinformation — warning labels do not enable individuals to simply look closer and see the irregularities on their own. As such, successful content warnings on deepfakes will rely on trust in moderators’ judgments, raising concerns that any such warnings may be written off as politically motivated or biased.
To learn more about this project — a collaboration between CESS and the Royal Society — read the working paper here: Do Content Warnings Help People Spot a Deepfake? Evidence from Two Experiments.