Will the Pandemic Accelerate or Curb Misinformation on the Internet?

Almost two years into the pandemic, new initiatives to fight online misinformation continue to emerge.


Social media platforms were already under increasing pressure to curb misinformation before 2020, and Covid-19 only added fuel to the fire. The atmosphere of uncertainty, increased online activity and the ensuing information overload created the perfect conditions for misinformation to spread. Still, Futures Platform’s futurists anticipate that Covid-19’s amplifying effect on misinformation will be short-lived. In fact, the pandemic may even mark a turning point in social media platforms’ content moderation policies and significantly curb online misinformation in the long run.

 

Disinformation as a trend

As a trend, online disinformation is not new. Supercharged by the internet and social media algorithms, the phenomenon first emerged into the mainstream following the Cambridge Analytica scandal of 2018, when it was revealed that disinformation disseminated through Facebook advertisements might have influenced the outcome of the 2016 US presidential election.

By design, social media platforms create information bubbles by serving audiences content based on personalisation and engagement. Because the content on these platforms is user-generated and not fact-checked, it naturally opens the door to misinformation.

This has also given rise to a disinformation-for-hire industry, which is quietly booming and growing more sophisticated with new technologies. For example, programs can batch-generate convincing-looking fake profiles, personal data can be bought on the dark web, and AI’s ability to generate compelling messaging is improving by the day.

According to an Oxford University study, at least 65 commercial firms had been identified as providing disinformation services in 48 countries as of 2021. The number doubled from 2019 to 2020, demonstrating the phenomenon’s strong growth trajectory.

Considering all this, social media platforms were already under increasing pressure to curb misinformation before the pandemic. However, prior to 2020, they took a less proactive approach and were reluctant to take on editorial responsibilities. Facebook, for example, continued to allow misinformation in political advertisements, citing its commitment to protecting freedom of speech.

 

Pandemic inspires new measures to combat online misinformation

“It was hardly a surprise that the pandemic seems to have caused all kinds of disinformation to spread,” says Max Stucki, Foresight Analysis Manager at Futures Platform. “After all, uncertain times cause rumours to run wild, and this creates fertile soil for information operations.”

But as the pandemic deepened understanding of the dangers of online misinformation, social media platforms took unprecedented steps to address the issue.

Facebook started publishing detailed reports on identified and suspended misinformation campaigns, and Twitter broadened its definition of “harm” to include promotion of content that “goes directly against guidance from authoritative sources of global and local public health information”.


Twitter has introduced new labels to inform audiences that tweets may contain misleading information. Source: Twitter

Almost all social media platforms have also ramped up collaboration with trusted health authorities and governments to fight misinformation. For example, Facebook now allows national health ministries and international non-governmental organisations to advertise accurate Covid-19 information free of charge.

 

The future of information and digital media literacy

The Covid-19 pandemic may mark a turning point in how online media outlets, regulators and the public perceive and respond to misinformation online. Considering that a significant share of people now get their news straight from social media, these platforms will likely take increasingly active roles in disseminating fact-checked information in the future.

In addition to technical measures like algorithmic transparency and banning automated bot accounts, regulatory bodies may also increasingly focus on supporting independent media and regulating political advertising online. For example, the European Commission has launched an initiative to plan future regulatory responses to disinformation based on lessons learned from the Covid-19 pandemic.

We may also see more emphasis on digital media literacy, and more cross-sector collaboration around it, in the years to come. Schools, for example, may start including social media literacy courses in their curricula, and social media platforms may adopt new practices, such as automated reminders for users to fact-check information before sharing.

On the other hand, if efforts to curb misinformation cease to be a priority once the current crisis subsides, misinformation will become a growing threat to democracies and public safety worldwide. It may further polarise societies, undermine scientific efforts to control epidemics, and even lead to the return of previously eradicated illnesses.

“Disinformation will not go away, but the ways it spreads need to develop faster than the mechanisms created to stop it. If the most popular social media platforms become more and more advanced in fact-checking, disinformation will either need to become more subtle or find alternative platforms,” Max Stucki concludes.


Stay on top of the latest change signals and trends you need to know about with a Futures Platform subscription.
