Disinformation in the Times of COVID
By Jonathan McCully, 14th June 2021
The question of how disinformation affects the handling of the COVID-19 pandemic is a complex topic that sits at the intersection of several human rights, including freedom of expression, the right to privacy, the right to non-discrimination and the right to health.
When defining the problem, we often make reference to an “infodemic,” a term used by the World Health Organisation itself to describe a global epidemic of misinformation that threatens public health.
That epidemic is particularly prevalent on social media platforms, which has led to an array of reflexive responses by those platforms designed to minimise the impact of misinformation while continuing with harmful online business models.
Given the importance of social media platforms in shaping public discourse, it is worth critically examining these platforms’ self-regulatory responses alongside the approach many nation states have adopted in response to the problem: policing individual users and imposing liability on them for “misinformation” and “fake news.”
This raises the question of whether the narrative of penalising the speaker, i.e. the person who creates or shares misinformation, acts as a distraction from the human rights infringements of big tech platforms themselves, infringements that arguably long pre-date the pandemic.
Disinformation & misinformation
The spread of disinformation, hate speech, extremism and other potentially harmful content on social media is not a new phenomenon brought about by the coronavirus outbreak.
Nonetheless, over the last 18 months, there has been noted concern around the increased dissemination of misinformation on the causes, symptoms and possible treatment of the virus. This ranges from myths (such as the coronavirus being a bioweapon) to bogus remedies (like eating sea lettuce or injecting yourself with disinfectant) and anti-vaccination rhetoric.
These examples on their own do not amount to “disinformation,” which usually requires the additional element of knowledge of the falsity of the statement and intention to cause harm.
“Disinformation,” because of its potential for harm, may not benefit from full protection under the right to freedom of expression. However, how can we respond to this problem without putting freedom of expression and other rights at risk?
One response is to adopt laws regulating “disinformation.” However, this can itself have serious implications for the right to express yourself freely online. It is often said that “hard cases make bad law.” But the same can sometimes be said for the easy cases, namely those where everyone agrees that “something must be done.” The rise of coronavirus misinformation and disinformation is ripe for exploitation and abuse by certain political actors and governments, who can point to such phenomena simply to justify clamping down on protected speech, such as criticism of government or news reporting, under the guise of protecting public health.
There are numerous examples of authorities attacking, detaining, and prosecuting critics of government responses to the coronavirus, as well as examples of peaceful protests being broken up and media outlets closed.
Since January 2020, over 24 jurisdictions around the world have enacted vague laws and measures that criminalise the spreading of alleged misinformation or other coverage of COVID-19, or of other public health crises. These laws effectively empower the state to determine the truthfulness or falsity of content in the public and political domain.
While this is happening, social media platforms, purportedly shielded from content liability through exemptions and “no monitoring clauses,” remain largely unaccountable for the human rights violations taking place through their services. This includes, on the one hand, their role in tracking and profiling users with a view to targeting content (including harmful content) at them – thus facilitating the efficient dissemination of disinformation. On the other hand, they are often surprisingly willing to censor speech that is protected under international human rights law.
Platform responsibility
Social media platforms’ responses to the “infodemic” have often revolved around promoting trusted content, and actively demoting likely misinformation on their platforms.
For example, YouTube added a banner to all videos that referenced COVID-19, redirecting users to the WHO web portal. Facebook altered their search function so that any user searching for topics related to coronavirus on their platform would be shown results encouraging them to visit the WHO website or national health authorities for the latest information.
YouTube has also decided to demonetise COVID-19-related videos and does not allow ads to feature on them. Meanwhile, Facebook has banned ads and listings for any alternative cure for the virus.
These responses are significant, but they are also conspicuously focused on COVID-19 content. There is the risk that such measures are not applied to other forms of unprotected speech spread through their platforms.
Furthermore, these responses are very much on the platforms’ own terms, rather than pursuant to any specific legal obligation. Indeed, measures are often put in place specifically with a view to avoiding further regulation. In practice, this means that platforms have usurped the power to decide on the accessibility of online content without a specific legal framework that informs their decision-making processes.
“Platform power” and big tech platforms’ responses to the “infodemic” therefore point to the need for a counter-framing: one that shows how responses that penalise and criminalise “fake news” or “false information” are both insufficient and unnecessary, and that what is needed instead is a stronger framework for the protection of human rights with regard to online speech.
Platform power
When looking at platform power in this context, three key points become immediately apparent. These can be crudely distilled down to the problem of big tech platforms making the rules, making the tools, and making us fools.
Platforms Make the Rules
We know that platforms claim to benefit from generous exemptions from legal liability for content shared by third parties through their services. Perhaps to dampen public and political appetite for greater regulation in this regard, platforms have developed their own online speech laws through terms of service and community standards.
These policies set out what constitutes misinformation, hate speech, violence, bullying and other harmful content on their platforms.
Platforms first draw the policy lines and then adjudicate on whether these lines have been crossed. Facebook has pushed this even further by establishing its own private “content judiciary” in the form of the Facebook Oversight Board (often referred to as Facebook’s Supreme Court), which can deliver decisions on content that should be kept off or put back up on the social network.
This is in place of regulation by law, which would not only lend the process democratic legitimacy but would also have the benefit of enforceable rights and obligations that are backed up with independent and impartial oversight and enforcement.
One of the concerns often flagged in the context of intermediary liability for third party content is the risk of private censorship. Making social media platforms legally responsible for user content incentivises them to over-police and censor that content.
However, big tech platforms like Facebook and YouTube arguably already govern online speech. The issue is therefore more nuanced than a simple “publishing vs. not publishing” binary would suggest.
In addition to taking down content, platforms can recommend, suggest, optimise, shadow ban, and personalise content. This means that they are already manipulating and curating online content – often with their own commercial benefit in mind – with almost zero legal liability for the harm that might be caused by these measures, whether to freedom of expression or otherwise.
Many social media and similar platforms therefore arguably overstep a neutral, merely technical and passive role in relation to their content, meaning they may not be entitled to existing content liability exemptions. However, if we choose to apply laws seeking to regulate disinformation against these platforms, we still run up against free speech concerns.
Platforms Make the Tools
Platforms are also “making the tools” when they invest considerable time and resources into algorithms. Put rather simply, platforms like Facebook have sought to develop two types of algorithm: those that can maximise engagement on the platform, and those that moderate content.
In contexts where content liability is brought to bear on these platforms, they will often try to find a way to automate their response. And the direction of travel from lawmakers and the courts is increasingly to encourage automated content moderation.
This reliance on automation has been heightened during the pandemic. In March 2020, as lockdowns and social distancing measures were being imposed across the world, workers (including freelance content moderators) were told to stay at home. YouTube, Twitter and Facebook, in response, all announced that they would become more reliant on automated content moderation during the crisis.
Machine learning algorithms present significant risks to human rights when they attempt to moderate content. There are many examples of studies demonstrating how machine learning algorithms can exacerbate or entrench existing biases, inequality and injustice.
Much of the bias and discrimination caused by automated systems results from them being trained on data that is unrepresentative or that contains built-in bias. This is also true when it comes to automated content moderation. For instance, Instagram’s DeepText tool identified “Mexican” as a slur because its datasets were populated with data in which “Mexican” was associated with “illegal.”
Machine learning algorithms cannot read context, and when concepts like “disinformation” rely on assessments of knowledge and intention, this makes it difficult for such tools to decide accurately whether to remove or otherwise moderate content. Nor can they read tone or cultural context, either of which might transform the nature of the speech.
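To illustrate the mechanism, here is a minimal, hypothetical sketch in Python (using scikit-learn) of how a classifier trained on skewed data can learn an identity term as a signal of abuse. The toy posts, labels and test sentences are invented for illustration only; this is not DeepText or any platform’s actual moderation system.

```python
# Hypothetical illustration only: a toy moderation classifier trained on
# skewed data in which the word "mexican" only appears in posts labelled
# abusive. All data and labels are invented for this sketch.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "mexican immigrants are illegal and should be deported",  # abusive (1)
    "illegal mexican workers are ruining this country",       # abusive (1)
    "go back to where you came from, illegal",                # abusive (1)
    "had a lovely dinner at an italian restaurant tonight",   # benign (0)
    "italian coffee is the best way to start the day",        # benign (0)
    "what a beautiful sunny day",                             # benign (0)
]
labels = [1, 1, 1, 0, 0, 0]

# Bag-of-words features plus a linear classifier: no notion of context,
# intention or tone, only word co-occurrence statistics.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Two near-identical, innocuous sentences receive different scores: the one
# containing "mexican" is rated more likely to be abusive, purely because of
# the association the model learned from the skewed training data.
for text in ["I love italian food", "I love mexican food"]:
    p_abusive = model.predict_proba([text])[0][1]
    print(f"{text!r} -> P(abusive) = {p_abusive:.2f}")
```

Even in this toy example, the score gap between the two sentences is driven entirely by which identity term they contain, not by anything resembling intent or meaning.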
These shortcomings not only run the risk of violating free speech, but they also run the risk of causing other types of harm. In March 2020, Facebook was criticised for removing accurate information and articles on COVID-19, which the platform blamed on a “bug in an anti-spam system.” This was followed, a month later, by criticism that health misinformation on Facebook had attracted nearly half a billion views.
These statistics demonstrate a general problem with big tech platforms: toxic posts that escape content moderation algorithms will continue to be pushed and promoted by other algorithms.
Platforms Make Us Fools
Framing the spread of misinformation and disinformation as an abuse of freedom of expression is a useful distraction for social media platforms, as disinformation is often a symptom of big tech’s business model. That model is less about ensuring media plurality and the protection of a “marketplace of ideas” and more about profiling, targeting, and optimisation of content delivery with the aim of capturing user attention to support the platform’s adtech revenue models.
Platforms like Facebook have developed sophisticated machine learning algorithms for targeting users with content precisely tailored to their interests.
As was highlighted recently by Karen Hao in the MIT Technology Review, this targeting is incredibly fine-grained. The finer-grained the targeting, the better the chance of a click, which in turn gives advertisers more value for money.
However, these algorithms do not only apply to ad targeting. They can be trained to predict who would like or share what post, with a view to giving those posts more prominence in a person’s newsfeed. This supports and fuels a business model built on hooking users’ attention and keeping them engaged.
And it just so happens that people love to engage with toxic content. In 2018, Mark Zuckerberg himself admitted that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximise engagement reward inflammatory content.
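The dynamic can be captured in a very small sketch. The following Python snippet is purely hypothetical (the post texts and engagement scores are invented, and this is not Facebook’s actual ranking system), but it shows what ranking a feed solely by predicted engagement looks like: whatever the model expects the user to react to most strongly is surfaced first, regardless of accuracy or potential for harm.

```python
# Hypothetical illustration only: a feed ranked purely by a model's predicted
# probability of engagement. Post texts and scores are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # model's estimate of P(click/like/share)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort by predicted engagement alone: nothing in this objective penalises
    # content for being false, inflammatory or harmful.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = [
    Post("Local library extends its opening hours", 0.04),
    Post("SHOCKING: the 'miracle cure' doctors don't want you to know about", 0.31),
    Post("Council publishes its annual budget report", 0.02),
]

for post in rank_feed(feed):
    print(f"{post.predicted_engagement:.2f}  {post.text}")
```

In a real system, the engagement scores would themselves come from machine learning models trained on past user behaviour, which is precisely why content that provokes strong reactions tends to rise to the top.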
At the same time, social media platforms’ parasitic business model of profiling, prediction and targeting is itself built on systemic violations of individuals’ rights to privacy and data protection.
All Facebook users have some 200 “traits” attached to their profile. These include sensitive data, such as race, political and religious leanings, socioeconomic class, and level of education. These traits are used to personalise experiences on the platform.
In conclusion, the solution to the “disinformation” problem is not necessarily to be found in user liability or in requiring platforms to take responsibility for moderating online content.
Instead, we must take a more holistic approach to the way in which a business model that is centred on optimising for engagement and profit facilitates violations of individual users’ human rights and negatively affects broader public interests like media plurality and democratic participation.
This means that we must hold platforms to account for all of their commercial activities – including their role in both amplifying and suppressing the dissemination of content, and their use of users’ personal data as a means to support that content model.
For this we must look to legal frameworks that can actually provide accessible, efficient, effective, independent and impartial means by which human rights can be enforced and protected in relation to these commercial activities.