Societies are increasingly being infiltrated with lies and false information, as the coronavirus pandemic has given rise to an "infodemic." How can the media and the public avoid drowning in a flood of disinformation?
In Germany, 11% of people believe in a global conspiracy theory – that’s the conclusion of a recent study by a political think tank, the Konrad Adenauer Foundation. In the survey, 2% of people also said that the claim that humans have caused climate change was "certainly wrong," while 6% said it was "probably untrue."
These kinds of conspiracy narratives are nothing new, but the rise of the digital age has not only produced a greater quantity of information, it has also led to the rapid and dangerous growth of disinformation on the Internet. Since the start of the coronavirus pandemic, the flood of misinformation – sometimes even through orchestrated campaigns – has increased to such an extent that bodies like the World Health Organization and the European Union are warning of an "infodemic."
How can carefully researched journalism get the upper hand against targeted disinformation campaigns in the digital sphere? This question was the focus of a session during Deutsche Welle's Global Media Forum, taking place online this year due to the coronavirus pandemic.
The discussion was hosted by DW’s news anchor Ben Fajzullin.
Disinformation as a targeted political tool
"Disinformation is produced in a coordinated manner with a very clear target. The goal is to distract the society, to attack the institutions by decreasing the trust of the people towards our institutions in Europe," said Věra Jourová, the European Commission Vice-President for Values and Transparency.
"So we are speaking not only about the battle in the field of freedom of speech, but also its security aspect."
According to Jourová, in addition to the main sources of disinformation campaigns – Russia and China – other global players, such as Turkey, have also started to engage in practices that erode trust in the European public sphere.
"We are not able to say that they are managed by governments, but they are produced on the territory of these countries," the Commission Vice-President said.
It is estimated that China spent nearly US$10 billion on presenting a China-friendly image abroad in 2017; Russia pumps more than €1 billion annually into a network "which includes media operating globally and frequently spreading disinformation," Jourová said.
There are also countries within the EU where the Commission "would like to see stronger independent media," she added, voicing her support for "(media) which are able to counter the official version of the truth."
In EU member states like Hungary, political influence over media content has grown in recent years.
In a society where freedom of the press and free speech are highly valued, it is particularly difficult to set limits on disinformation: "We've got to find a balance between securing the stable democratic systems we have in Europe on one hand, while also protecting freedom of expression," said Tobias Schmid, Director of the Media Authority in the German state of North Rhine-Westphalia and Chair of the European Regulators Group for Audio-visual Media Services.
To this end, the operators of the biggest social media platforms have taken initial steps to combat disinformation and make it easier for users to judge which sources are trustworthy. The video-sharing platform YouTube, which belongs to Google’s parent company Alphabet, has begun providing information on major channels, while Facebook and Twitter have started flagging misleading posts – even when they come from the US president. Tobias Schmid praised the platforms for these initiatives but said the rules should be legislated:
"It's not a solution just to ask companies to implement standards."
Two new EU directives aimed at tackling these issues are in the pipeline, explained Vice-President Jourová: The "Digital Services Act" is intended to draw up legal guidelines for service providers and, among other things, make platform operators liable for illegal content like extremism, child pornography or hate speech. The second directive, the "European Democracy Action Plan," aims to draw a clear distinction between information and disinformation.
Schmid attributed the absence of effective regulation in the digital sphere to the fact that, over the past decade, disinformation did not exist in its current dimensions; it has only become a focal issue on social media in the last two or three years.
The best-known examples of politically motivated disinformation campaigns, however, go back some years: The intelligence committee of the British parliament is sure that Russia was involved in the 2014 Scottish independence referendum – and most probably also in the Brexit referendum in 2016. It is also certain that in the same year, Russia tried to influence a large number of social media users in the US to support the election of then-presidential candidate Donald Trump.
Disinformation during a global pandemic
The panel also discussed how disinformation about the coronavirus pandemic had become particularly widespread in the past few months. China has spread propaganda in Europe, for example, claiming that an authoritarian style of government could contain the pandemic more effectively than democracy, said Jourová.
The panelists also considered the growing momentum of conspiracy theories about the coronavirus to be a serious threat. However, they did not mention the messaging service Telegram, a hub of conspiracy theories surrounding the coronavirus pandemic.
Aside from conspiracy theories, there are often quite simple reasons why users spread certain content that may not be entirely based on truth and facts. Herman Wasserman, Professor of Media Studies at South Africa’s University of Cape Town, has shown in studies in a number of countries in sub-Saharan Africa that many users shared posts simply to connect with others and to express their feelings.
"Some of it also has to do with a misplaced sense of civic duty," said Wasserman. "So if there is a warning coming through social media, then people are likely to share it because they want to warn others, even if they don't believe it themselves." Better media literacy would help to combat this, he added.