Deep fake technology isn’t inherently harmful. The underlying technology has benign uses, from frivolous apps that let you swap faces with celebrities to serious applications of deep learning algorithms (the technology that underpins deep fakes), which have been used to synthesise new pharmaceutical compounds and protect wildlife from poachers.
However, ready access to deep fake technology also allows cybercriminals, political activists and nation-states to quickly create cheap, realistic forgeries. This technology lowers the costs of engaging in information warfare at scale and broadens the range of actors able to engage in it. Deep fakes will pose the most risk when combined with other technologies and social trends: they’ll enhance cyberattacks, accelerate the spread of propaganda and disinformation online and exacerbate declining trust in democratic institutions.
Any technology that can be used to generate false or misleading content, from photocopiers and Photoshop software to deep fakes, can be weaponised. This paper argues that policymakers face a narrowing window of opportunity to minimise the consequences of weaponised deep fakes. Any response must include measures across three lines of effort:
- investment in and deployment of deep fake detection technologies
- changing online behaviour, including via policy measures that empower digital audiences to critically engage with content and that bolster trusted communication channels
- creation and enforcement of digital authentication standards