Freedom of speech versus freedom of reach

October 25, 2021 — DARE TO BE GREY & WOLF J. SCHÜNEMANN

 

LONG READ: The societal struggle of regulating hate speech and disinformation in the online world.

In a world that sometimes seems to be consumed by hate and anger, it is often surprising that some of the comments made online are not regulated in the ways they would be if made offline. With disinformation narratives, anti-vaxx conspiracy theories, and online toxicity contributing to increasing polarisation in societies around the world, the negative consequences of unregulated social media platforms are all around us. Some might say that regulating speech online is an infringement on freedom of expression; however, as social media gives users the opportunity to be heard by millions around the world, should this freedom also include freedom of reach?

Wolf J. Schünemann is a German political scientist whose research focuses on online communication and the governance of digitalisation. In an interview, he gave us a better understanding of how we can deal with these issues while still upholding the right to free speech.

Is there tension between democracy and free speech?

Schünemann: Tension is perhaps not the right word to use, as they are kind of twins, or at least closely related. Democracy depends on free speech as one of its fundamental pillars. If we expect a democracy to have organised, free elections, the word ‘free’ is very relevant and includes freedom of political conversation. 

Thus, at a very substantial level, a liberal democracy depends on free speech. That does not mean that all speech needs to be protected in the same way. But the more closely speech is related to the political process, the more it needs to be protected in order to guarantee this liberal foundation of democracy.

 
So there is not a tension between free speech and democracy. But there is a tension between democracy and every measure that interferes with free speech, at least in a political context. When we talk about the challenges of our time, we can certainly agree that protecting free speech is an issue, because content regulation is an issue, and the regulation of speech is an issue right now, as we observe such a rise in disinformation and hate speech. So we have a societal challenge that is not easy to address.

How would you deal with disinformation in this respect then?

Schünemann: Disinformation, in particular, is problematic. We can observe this every day. For hate speech, we can create more or less clear markers at a linguistic or semantic level that we can use to define what is hateful content and what is not. With disinformation, it is much more complicated, because you do not have these indicators. There may be some formulations at the linguistic level that allow you to look more closely at certain discursive events, but deciding whether this or that is disinformation is very difficult, with fewer rules and guidelines to follow. Then there is the next difficult question, namely whether the judgement is based on a point of view, or whether you have any criterion of truth that you can apply.

 
Let me give you one example. It stems from the beginning of the Coronavirus pandemic. Dr. Li Wenliang, one of the first to raise the alarm about the Coronavirus, was accused of and then punished for spreading ‘disinformation’, which was in fact correct information about a virus that the Chinese government was not ready to accept. Tragically, he later died of the Coronavirus himself. The example shows that what is perceived by many as not corresponding to the truth might, however, become relevant knowledge at a later date. While this will most certainly not be the case for most of the conspiracies and fake stories currently circulating, it is nevertheless problematic if a ‘regime of truth’ tries to fight opinions in a society. There might of course be higher goods to be protected, like public health, which then justify interference with free speech. Yet it always remains a difficult approach at the level of principle.

It is not that a democracy cannot regulate speech, it can, but it needs to be aware of this fundamental problem, and of the fundamental questions that arise regarding its own qualities as a liberal democracy.

 
So how can governments regulate speech?

Schünemann: It's not only about governments here; maybe the government is not the best actor, or the first actor, to think about. Actually, ministries find themselves in a situation where they’d like to say: ok, as a democratic government, we do not really want to do anything; this should be regulated by civil society or by the social media platforms.

What governments can do, and actually should do, is set a framework for other actors to do the actual work on the ground, so that governments can act from the background while those actors accept that there is a red line they cannot cross. They should thus create a legal framework in which a free competition of ideas can still happen, with some general rules that apply. Moreover, governments need to step in where they find artificial amplification of disinformation on a scale that would hurt societies. They should at least require social media platforms to be transparent about their amplification mechanisms. I think these are steps that governments can take, as long as they do not discriminate against certain content and in fact increase the transparency and accountability of the content moderation practices exercised by the platforms.

 
State-enforced content regulation or platform-specific content moderation might still cross a line. But this should be decided and assessed by judges. They would have to discuss, perhaps from case to case, whether there is a substantial and valid justification for removing content. Moreover, the development of law and the rulings of judges happen against the discursive background of a given society, in which we all need to defend our system and our values, and need to somehow approximate such red lines.

It’s not so easy, as an academic or as an individual person, to agree on the line we put in place. We are in a pluralistic society, so there are different lines to be drawn. And I think it's very important that we somehow discuss these things.

How do you think social media platforms have reacted to disinformation and hate speech? Do you think they need to be doing a better job?

Schünemann: They definitely have to do a better job! They are economic actors, but also media companies. Because of this, the U.S. in particular has given them the opportunity to have the best of both worlds: the platforms can amplify content and remove it as they see fit, without having to comply with broader liability rules. Platforms like Facebook and Twitter are too powerful and too big, so if you give an economic actor like that the best of both worlds, then of course they will take it.

For the big social media companies there needs to be change, and this needs to come through legislation, since they won’t do it themselves. It won't be easy, but it needs to be done, and we cannot wait for them to react. I think we need an ethically appropriate approach that makes regulation of social media companies transparent, instead of just enforcing it. The government should be accountable for the legislative framework. And this framework might include making social media companies above a certain size liable for the content.

 
Do you feel that deplatforming, like the ban of Donald Trump by Twitter, is a good response by these platforms?

Schünemann: No, I don't think so. That is a concrete decision that I would challenge; I think this was not a good example of how hate speech should be purged, for many reasons. First of all, it was Twitter that made the decision, which is already problematic: a single company deciding to ban the President of the United States from an important online communication channel. Second, Twitter did it at a moment when they realised they had more to gain from removing his account than from leaving it open. Twitter has gained a lot of attention and reach through Trump, and vice versa. Finally, I think this was a heavily problematic decision, also with respect to the public figure that Donald Trump is (or was). The discursive events he has produced need to remain open and available in public discourse. They can even be seen as historical documents. I think there's no way to somehow regulate the content produced by a President in office, communicating with the people, even if it is hate or extremism. There's just no way other than looking at the speech that was made and letting a court decide whether it was illegal.

So, take messages down if they do not comply, but do not simply deplatform a user, especially not a public figure or politician like Trump. That was, I think, a mistake that might even radicalise his movement. Although I see that the amplification and reach that Trump had with these kinds of messages could devastate American democracy, there need to be other mechanisms for resilience.

To sum up and formulate some basic guidelines: if content regulation is happening, it should be made transparent in a democratic society. Make it clear; try to stay fair, open, and gentle, so you don't miss anything. Making regulation transparent with the help of platforms is also key, as they need to cooperate and show what is taken down and who has reported it.

 
There is a big difference between online and offline. It has to do with platforms; it has to do with online publishing opportunities for everybody. Everyone has the potential to reach thousands with their online comments. If you spread hate speech offline to your 100 friends, then it's just 100 friends, and maybe 50 of them don't want to be your friend afterwards. But if you have a platform which really produces cascades of hate speech or conspiracies, amplified by the algorithms, that is another thing.


 
