Foreign Information Manipulation in the Age of AI
On October 2, Utrecht University hosted the workshop Russian Interference & Disinformation, bringing together students to explore how generative artificial intelligence (GenAI) has transformed the landscape of foreign information manipulation and interference (FIMI). The session examined how low-cost, scalable, and highly targeted AI technologies are now being weaponised to manipulate public opinion, polarise societies, and weaken democratic institutions.
The new face of information warfare
According to the European External Action Service (EEAS) Threat Report 2025, FIMI campaigns aim to destabilise societies, damage democracies, and undermine the EU’s global standing. In this context, generative AI has dramatically expanded the reach and speed of disinformation. Influence campaigns that were once costly and labour-intensive can now be automated, producing multilingual fake news, fabricated images, and AI-generated personas at almost no cost.
Participants explored the Malicious Use of AI (MUAI) framework, which highlights how state and non-state actors exploit AI for large-scale manipulation and emotional influence. Deepfakes, AI-driven bots, and narrative automation are being deployed to exploit cognitive biases, amplify polarisation, and erode trust in information systems. As Modern Diplomacy notes, this form of “digital weaponisation” is not only technical but also psychological, playing on our emotions, our fears, and our natural inclination toward confirmation bias.
Pravda Network: disinformation at scale
A central case study of the workshop focused on the Russian Pravda network, a vast disinformation ecosystem that has infiltrated global media and even AI systems. Originally launched in 2014 and later expanded as part of Russia’s war against Ukraine, Pravda comprises more than 150 fake local news sites operating in 49 countries.
These sites recycle and amplify pro-Kremlin narratives drawn from Russian state media, Telegram influencers, and local anti-Western channels. According to a NewsGuard investigation, in 2024 alone the network published over 3.6 million articles, many of which were absorbed into popular AI chatbots such as those developed by Microsoft, Google, and OpenAI.
Participants discussed how Pravda operates as an “information laundromat”, a process known as information laundering. This strategy spreads false or distorted narratives through seemingly legitimate outlets - think tanks, media blogs, or Wikipedia references - giving them an appearance of credibility. By flooding the information space with contradictory messages, Pravda doesn’t aim to persuade, but to confound. Its goal is to exhaust attention, blur truth, and erode Western support for Ukraine.
The Romanian elections: a case of coordinated manipulation
The workshop also examined a second case: the 2024 Romanian elections, where an AI-enhanced disinformation campaign was used to influence voter perception. According to a Cyabra investigation, the operation relied on fake accounts, automated bots, and hashtag hijacking to artificially amplify far-right narratives and suppress discussions of Russian interference.
Researchers found that around 16% of X (Twitter) accounts involved in election conversations were fake. Even more strikingly, one in three interactions (34%) supporting right-wing candidate Călin Georgescu came from inauthentic accounts. These coordinated efforts generated a distorted sense of popularity and legitimacy, narrowing the space for factual debate. Students reflected on the growing challenge of distinguishing between authentic and synthetic content.
Beyond fear, building digital resilience
The discussion concluded by unpacking common manipulation tactics such as distraction (shifting attention away from real issues) and division (fuelling internal conflicts). Participants voiced concern that shrinking attention spans and habitual doomscrolling are eroding trust in news.
However, the tone of the workshop remained forward-looking. Rather than framing AI solely as a threat, discussions emphasised the need for digital literacy, transparency, and civic empowerment. Strengthening resilience means not only identifying deception but also building systems that foster trust, dialogue, and participation.
AI technologies are not inherently malicious - it is how they are used that matters. In the same way that disinformation can spread faster through automation, so too can truth and civic engagement - if citizens are equipped with the tools and awareness to navigate this new reality.