AI-Powered Media Manipulation and Its Consequences
Pics… and it didn’t happen. Artificial intelligence and machine learning are vastly expanding the scope of media manipulation. Technology now enables both the fabrication and alteration of media (manipulation by editing) and the distortion of which media people can access (manipulation by curation). This panel will cover two case studies: an AI that performs rapid, automated “doctoring” of images, and media recommendation algorithms that amplify or suppress content. Panelists will debate the society-wide dangers that fake and misleading media pose to democracy, privacy, cultural production, and more. They’ll discuss current efforts to raise awareness, as well as possible legal remedies for the development and use of AIs that generate fraudulent, defamatory, or otherwise unlawful content.
Additional Supporting Materials
- Due to rapid technological development, it will soon be impossible to take the authenticity of audio, video, and images online for granted.
- Automated media curation is already widespread. These systems are being strategically and efficiently deployed to promote or bury certain media.
- People impacted by AI media manipulation will be able to find recourse through existing legal structures, with some creative application of the law.
- Mason Kortz, Clinical Instructional Fellow, Berkman Klein Center for Internet & Society, Cyberlaw Clinic
- Joan Donovan, Media Manipulation and Platform Accountability Research Lead, Data & Society
- Jessica Fjeld, Clinical Instructor and Acting Assistant Director, Berkman Klein Center for Internet & Society, Cyberlaw Clinic
- Matt Groh, Graduate Student, MIT Media Lab