Deepfakes: What Should We Fear, What Can We Do
Deepfakes! As more sophisticated, more personalized, and more convincing audio and video manipulation emerges, how do we get beyond the apocalyptic discussion of the "end of trust in images and audio" and instead focus on what we can do about malicious deepfakes and other AI-manipulated synthetic media? Based on WITNESS's collaborations with technologists, journalists, and human rights activists, we'll explore the state of the art in deepfakes and other 'synthetic media', the solutions available to fight their malicious uses, and where this goes next. Given broader trends in challenges to public trust, disinformation, and the evolving global information ecosystem, how should we plan together to fight the dark side of a faked video and audio future?
- What are the different ways deepfakes and synthetic media are created and being deployed right now for malicious uses?
- What is the status of different types of manual and forensic solutions to identifying deepfakes and synthetic media?
- What is the range of technical, platform, policy, and journalism approaches to addressing the threat of malicious uses of deepfakes and synthetic media?
- Sam Gregory, Program Director, WITNESS