Threats to our democracy - preventing the next Infodemic


Join us at "Threats to our democracy - preventing the next Infodemic" on May 25th and 26th, as we tackle one of the most pressing issues facing European citizens today: the infodemic. Following the pandemic, the spread of misinformation, disinformation, and hate has been increasing, posing a severe threat to democratic participation and engagement among EU citizens.

During this session, we will delve into the multifaceted challenges that the infodemic brings to public life, including its impact on elections, health, migration policies, and more. Recognising the need for a proactive approach to mitigate the consequences, we aim to explore how we can protect the health and well-being of EU citizens in both online and offline environments.

The event will focus on envisioning the future of the European online landscape. We will address critical questions such as: What threats loom on the horizon? How will AI shape the landscape? How can we effectively combat (foreign) interference? And most importantly, how can we empower people to navigate the online challenges they face?

This thought-provoking session will bring together experts, policymakers, activists, and academics to engage in discussion, share insights, and propose innovative solutions. Together, we can foster a safer and healthier online environment for all.

Special session: Friday, May 26th, 11.15-12.30, Threats to our democracy - preventing the next Infodemic

Moderator: Jordy Nijenhuis, Dare to be Grey

Speakers: Giada Pistilli - Hugging Face, Stefan Manevski - Council of Europe, Peter Guštafík - PDCS and Zaneta Trajkoska - Institute of Communication Studies.

Notes from the conversation:

In the discussion on the new Infodemic project, participants acknowledged that the term "Infodemic" may be a trendy buzzword, but they emphasised the impact of the online landscape on democracies. The chaotic period of the pandemic had led to an abundance of misinformation and disinformation, creating uncertainty among societies. This situation was exacerbated by economic crises, the war in Ukraine, and other factors. One concerning aspect was the increase in hate speech narratives online, which particularly affected marginalised groups such as the Roma, travellers, LGBTQI individuals, and migrants.

The analysis also focused on hate speech during times of crisis, highlighting the ongoing "war of narratives." Bot networks were identified as a tool deployed to spread fake narratives, working hand in hand with disinformation. The lack of media literacy emerged as a significant issue, as there is no shortage of information available. In Slovakia, the proliferation of disinformation and propaganda related to the war in Ukraine posed a significant threat to democracy.

Internet users found themselves inundated by the sheer volume of information available online. Actors behind disinformation campaigns manipulated users into heated conversations, making bot networks difficult to trace back to their operators. The participants acknowledged the challenge of identifying the humans behind these networks.
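One simple signal investigators use when tracing coordinated networks is near-simultaneous posting of identical messages by many accounts. The sketch below is a minimal, hypothetical heuristic in that spirit (the function name, thresholds, and sample data are all illustrative assumptions, not a description of any tool mentioned in the session); real bot detection combines many more signals, such as account metadata and follower-graph structure.

```python
from collections import defaultdict

def flag_coordinated_accounts(posts, window_seconds=60, min_cluster=3):
    """Crude coordination heuristic (illustrative assumption, not a real
    detection system): group posts by identical text, then flag accounts
    that post the same message within a short time window of each other."""
    by_text = defaultdict(list)  # normalised text -> [(timestamp, account)]
    for timestamp, account, text in posts:
        by_text[text.strip().lower()].append((timestamp, account))

    flagged = set()
    for entries in by_text.values():
        entries.sort()  # order by timestamp
        # slide over the sorted timestamps looking for dense clusters
        for i in range(len(entries)):
            cluster = [acct for t, acct in entries
                       if 0 <= t - entries[i][0] <= window_seconds]
            if len(set(cluster)) >= min_cluster:
                flagged.update(cluster)
    return flagged

# Hypothetical sample data: (seconds since start, account, text).
posts = [
    (0, "acct_a", "Vote NO - the referendum is rigged!"),
    (5, "acct_b", "Vote NO - the referendum is rigged!"),
    (9, "acct_c", "Vote NO - the referendum is rigged!"),
    (400, "acct_d", "Lovely weather today."),
]
print(sorted(flag_coordinated_accounts(posts)))  # ['acct_a', 'acct_b', 'acct_c']
```

A heuristic this blunt produces false positives (e.g. many people sharing the same headline), which is one reason the panel stressed how hard attribution to actual humans remains.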

Regarding the war in Ukraine, it was noted that large-scale influence campaigns in elections did not materialise. However, participants acknowledged that new approaches to online propaganda would likely arise, leaving us perpetually playing catch-up. To address this issue, investment in quality journalism, independent media, and digital, communication, and media literacy was deemed essential. Holding governments accountable and demanding transparency were also highlighted as crucial steps.

Despite some concerns and a less optimistic outlook for the future, participants recognised that AI- and LLM (large language model)-enabled threats may emerge while educators remain unprepared. The rapid development of AI cannot be slowed down, but it is important to subject it to scrutiny. Additionally, ethical and environmental questions regarding the energy consumption of LLMs were raised, and the companies responsible for developing such technologies should be held accountable.

False narratives and misinformation were acknowledged as toxic, and fingerprinting techniques were suggested as potential tools to mitigate deepfakes. Critical thinking was deemed essential, and the level of trust placed in internet content was questioned. Participants stressed the importance of developing institutions to monitor the protection of human rights while not impeding AI and tool development.
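The fingerprinting idea mentioned above typically rests on perceptual hashing: deriving a compact signature from an image's content so that near-duplicates and lightly altered copies can be matched against a registry of originals. The following is a toy sketch of one such technique, an average hash, written in pure Python for clarity (the function names and the synthetic image are assumptions for illustration; production systems use more robust hashes and cryptographic provenance schemes):

```python
def average_hash(pixels, hash_size=8):
    """Toy perceptual hash: downscale a grayscale image (list of rows of
    0-255 ints) to hash_size x hash_size by block averaging, then set each
    bit to 1 if the block is brighter than the overall mean."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // hash_size, w // hash_size
    blocks = []
    for by in range(hash_size):
        for bx in range(hash_size):
            block = [pixels[y][x]
                     for y in range(by * bh, (by + 1) * bh)
                     for x in range(bx * bw, (bx + 1) * bw)]
            blocks.append(sum(block) / len(block))
    mean = sum(blocks) / len(blocks)
    return [1 if b > mean else 0 for b in blocks]

def hamming(a, b):
    """Number of differing bits; a small distance suggests the same source."""
    return sum(x != y for x, y in zip(a, b))

# A synthetic 16x16 gradient image and a lightly altered copy.
original = [[(x * 16) % 256 for x in range(16)] for _ in range(16)]
altered = [row[:] for row in original]
altered[0][0] += 10  # a tiny edit barely moves the fingerprint
print(hamming(average_hash(original), average_hash(altered)))  # 0
```

Because small edits leave the hash nearly unchanged while a wholly different (e.g. synthesised) image produces a distant hash, such fingerprints can help match circulating media back to verified originals; they are a mitigation, not proof of authenticity.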

Public officials were seen as needing training to understand the consequences and potential for discrimination posed by these tools. Increasing awareness and education on digital technologies were deemed important, with existing initiatives focused on data literacy and media literacy across Europe serving as examples.

AI had already been deployed for various purposes, including the detection and analysis of online hate speech. Human experts' expertise was embedded in AI models through collaboration with programmers, leading to automation in healthcare and potentially personalised tutors in education. The benefits of AI were seen to outweigh the risks, but not all risks were being taken seriously.
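The idea of embedding human expertise in a model can be made concrete: annotators label examples, and a classifier generalises from them. Below is a minimal sketch of that workflow using a tiny multinomial Naive Bayes classifier in pure Python (the training sentences and labels are invented for illustration; deployed hate-speech detection uses far larger annotated corpora and modern language models):

```python
import math
from collections import Counter

def train(labelled):
    """Fit a tiny Naive Bayes model from expert-labelled (text, label)
    pairs, with labels 'hate' or 'ok' (hypothetical label scheme)."""
    word_counts = {"hate": Counter(), "ok": Counter()}
    doc_counts = Counter()
    for text, label in labelled:
        doc_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = set(word_counts["hate"]) | set(word_counts["ok"])
    return word_counts, doc_counts, vocab

def classify(model, text):
    word_counts, doc_counts, vocab = model
    total_docs = sum(doc_counts.values())
    best_label, best_score = None, -math.inf
    for label in doc_counts:
        # log prior plus summed log likelihoods with add-one smoothing
        score = math.log(doc_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented annotator-labelled training data (illustrative only).
model = train([
    ("they should all be deported", "hate"),
    ("those people are vermin", "hate"),
    ("lovely concert last night", "ok"),
    ("the election results are in", "ok"),
])
print(classify(model, "those people should be deported"))  # hate
```

The labelled examples are where the human experts' judgement enters the system, which is also why annotation quality and annotator representation matter so much in this domain.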

Concerns were expressed about governments' preparedness to face the risks and potential damage caused by AI. It was suggested that more focus should be placed on how to use AI for good. The importance of reintroducing critical thinking and analytical writing in education, as well as cooperation instead of replacement with AI, was emphasised. Educating students on using AI as a supportive tool rather than a replacement for human interaction was seen as crucial.

The discussion also touched upon the need for consistent enforcement of human rights standards to combat discrimination and hate speech through legislation and policies. Participants noted a lack of commitment to enforcing these standards in Europe in general. A recent case involving France was mentioned, where the removal of hate speech content on Facebook was deemed not to infringe on freedom of expression.

To build resilience in the medium and long term, participants suggested focusing on media literacy, supporting vulnerable groups, and countering discriminatory narratives. They emphasised that addressing gender inequalities and discrimination should be a priority when developing various technologies, as deepfake pornography targets not only high-profile women but also ordinary women.

When the discussion turned to cognitive warfare developed by NATO as a response to Russia and to online activities regarding the war in Ukraine, panel members expressed unfamiliarity with the term. They cautioned against relying solely on technology as a solution, stressing the need to rebuild trust in institutions on a country-by-country basis. For instance, the disengagement of youth in Macedonia from important issues such as elections and the war, in favour of platforms like TikTok, highlighted the challenges of fostering trust.

The insights gained from the discussion on misinformation and disinformation have significant implications for health and the spread of COVID-19 disinformation. Just as societies were unprepared for the chaotic period of the pandemic, they were also unprepared for the infodemic that accompanied it. The abundance of misinformation surrounding COVID-19 created uncertainty and confusion, hindering public health efforts. Narratives of hate speech and discriminatory misinformation further compounded the issue, impacting marginalised groups who may already face disparities in healthcare access and information. It is crucial to address the role of misinformation in perpetuating health inequalities and ensure that accurate and reliable information reaches all communities.

The lack of media literacy highlighted in the discussion becomes particularly concerning when it comes to health-related misinformation during a pandemic. With no shortage of information available, individuals are susceptible to misleading claims and false narratives surrounding COVID-19, prevention measures, and treatments. This misinformation can have detrimental consequences, leading to non-compliance with public health guidelines, the promotion of unproven remedies, and the exacerbation of public health risks. To combat this, investing in media literacy programs specifically tailored to health information is essential in empowering individuals to critically evaluate the information they encounter and make informed decisions regarding their health.

Moreover, the use of bot networks and AI-driven disinformation campaigns discussed in the context of politics and societal issues can also be applied to the spread of COVID-19 disinformation. The rapid dissemination of false information through these networks can undermine public health messaging, sow doubt about the effectiveness of vaccines, and amplify conspiracy theories. Efforts to counter COVID-19 disinformation should not only focus on debunking specific claims but also address the underlying factors such as media literacy, transparency in communication, and accountability of tech companies. By doing so, we can work towards mitigating the harm caused by health-related disinformation and ensuring accurate information reaches the public during times of crisis.

In summary, the conversation revolved around the impact of the online landscape on democracies, the proliferation of misinformation and disinformation, the need for media literacy and critical thinking, the challenges and potential of AI and LLMs, and the importance of addressing discrimination and human rights standards in technology development.