Decoding Disinformation: What AI Teaches Us About Trust
On June 5th and October 2nd, two editions of the workshop AI & Disinformation were hosted at Utrecht University, in collaboration with the AI & Media Lab and as part of the AI & Society course led by Prof. Payal Arora. The sessions brought together students, teachers, and researchers to explore how generative artificial intelligence (AI) is reshaping the way information is created, shared, and trusted. These interactive sessions provided a space for reflection and debate, inviting participants to examine how AI can both empower and endanger our information ecosystems.
The workshop began with a discussion on agency and responsibility in shaping AI technologies. Some participants called for collective and systemic approaches to regulation, while others emphasised the role of individual experimentation in driving ethical innovation.
Today, online information spaces are increasingly shaped by engagement-driven algorithms. Social media platforms tend to boost sensational or emotionally charged posts through engagement bait tactics that prioritise visibility over accuracy. This dynamic contributes to what has been described as “enshittification,” a gradual decline in the quality of online services as profit motives overshadow user experience and public interest.
AI-supported disinformation as a global risk
The Global Risks Report 2025 identified AI-supported disinformation as one of the most significant threats to global stability.
Other studies echo this warning, outlining how AI-generated content can be used to manipulate narratives, sow confusion, influence public opinion, deepen social divisions, and erode trust in journalism and institutions.
Even before the widespread adoption of generative AI following the “ChatGPT moment” in 2022, experts warned of an “information apocalypse” and of AI as a potential “weapon of mass disruption.” Several qualities make generative AI particularly effective for spreading disinformation. It allows false content to be produced and distributed faster, at lower cost, with higher levels of personalisation, and at greater scale than ever before. The automation of image, text, and video generation means that misleading narratives can be adapted to specific audiences, making them appear more persuasive and authentic.
During the workshop, participants took part in an image recognition challenge, where they were asked to identify whether visuals were real or AI-generated. Results were mixed, underscoring how convincing synthetic media has become. Research supports this difficulty: only a very small fraction of users can reliably distinguish between authentic and AI-generated content. The consequences of this confusion are wide-ranging. Disinformation corrodes public confidence in truth itself, leading to cognitive overload and disengagement.
The power of persuasion: AI and public opinion
A recent Reddit-based experiment raised new concerns about the persuasive capacity of AI. Conducted on the subreddit r/ChangeMyView, where users debate various social and political issues, the study found that personalised AI-generated comments were rated as more convincing and realistic than those written by humans. This result suggests that AI could play a significant role in shaping public debate, particularly during times of crisis or elections.
These findings raise questions about what happens when AI is used strategically by malicious actors to push specific narratives or manipulate discourse at scale. The ability of AI to produce highly tailored, emotionally resonant content makes it a powerful tool in influencing opinions and spreading coordinated disinformation.
AI, creativity, and authenticity
Beyond its political and social implications, AI is also challenging traditional ideas about creativity and authenticity. This was illustrated by the case of artist Boris Eldagsen at the Sony World Photography Awards. After winning the competition, Eldagsen revealed that his image had been created entirely with the AI system DALL·E 2 and declined the prize. His decision prompted widespread discussion about authorship, originality, and artistic integrity in the age of generative AI.
As synthetic creativity becomes more sophisticated, the line between human and machine authorship continues to blur, forcing both creators and audiences to rethink what constitutes originality and truth in digital media.
From protection to empowerment
While the challenges of AI-generated disinformation are considerable, the workshop also highlighted examples that point to AI’s potential as a force for good. Several initiatives from the Global South demonstrate how technology can be used to empower rather than manipulate.
In Kenya, youth-led movements have been using AI to democratise access to information and amplify civic voices during protests against the government’s Finance Bill. These groups have developed AI tools to fact-check information, identify coordinated misinformation campaigns, and share verified data in real time. Their work shows how digital technologies can strengthen transparency, participation, and civic trust.
The workshops concluded with a reflection on resilience and collective responsibility in the digital age. Addressing disinformation, participants agreed, requires more than defensive awareness—it demands a cultural shift. Instead of viewing AI solely as a threat, it can be reframed as a civic tool that encourages dialogue, inclusion, and democratic participation.
Building resilience in the digital era is not just about protecting ourselves from harm; it is about reclaiming technology as a space for empowerment, shared truth, and collective engagement. By strengthening emotional and psychological resilience, especially among young people, societies can move beyond fear-based narratives and begin to harness AI for civic good.
Looking ahead, greater collaboration between educators, policymakers, technologists, and citizens will be key to ensuring that the same tools capable of spreading misinformation can also be used to strengthen democracy, promote digital literacy, and foster a culture of informed participation.