Consider this: A hostile actor creates a false headline, builds a story around it, and uses artificial intelligence (AI) to design an image that perfectly supports the fabricated narrative. Unsuspecting readers, convinced by the seamless combination of text and imagery, share the manipulated content far and wide.
This is no dystopian scenario – it may soon become reality with advancements in text-to-image generation. Research in this field is rapidly overcoming current technological limitations, allowing for the production of high-quality, photorealistic images that can serve as fake evidence.
Democracy Reporting International’s (DRI) new report takes a deeper look at text-to-image generation. Going beyond existing forms of media manipulation, we focus on fully synthetic, AI-generated content – evaluating global threat scenarios, emerging models, implications for the credibility of news, and possible solutions.
Join us on 29 September 2022 at 15:00 for an overview of the new report and an expert panel discussion on how worried we should be about advancements in text-to-image generation, featuring:
- Claire Leibowicz, Head of AI & Media Integrity, Partnership on AI
- Shirin Anlen, Researcher and Media Technologist, WITNESS
- Andy Parsons, Senior Director, Adobe's Content Authenticity Initiative
- Jan Nicola Beyer, Research Coordinator, Democracy Reporting International
- Lena-Maria Böswald, Digital Democracy Programme Officer, DRI (moderator).
The report “What a Pixel Can Tell: Text-to-Image Generation and its Disinformation Potential” was written by DRI’s Lena-Maria Böswald and Beatriz Almeida Saab. It is part of our DisinfoRadar project, which aims to anticipate tomorrow’s disinformation toolkit to strengthen democratic societies’ preparedness for the challenges ahead.
DisinfoRadar is made possible with support from the German Federal Foreign Office.