How do search and social media platforms like Twitter, Facebook, Google, and YouTube curate COVID-19 information and filter out misinformation? And what might their published protocols reveal about blending human and automated processes to ensure reliability during the pandemic? As one observer notes, “We’re just realizing the human power behind the highly-trumpeted AI-powered solution at one of the worst possible times.”
The serious consequences of COVID-19 misinformation, from people swallowing cleaning products to attacks on 5G towers, have meant that search and social media platforms feel compelled to redouble their efforts to publish and distribute the most reliable and accurate information. For example, according to their own public statements in 2020, Twitter, Google, and other platforms are creating temporary COVID-19 information sections that often rely on more human decision-making, either to create additional framing for automated processes or to curate content.
Google News Highlights “Topical Experiences”
Google’s COVID-19 experience diverges from the search platform’s usual fully automated ranking and recommendation format and now includes human curation through “topical experiences.” A topical experience is one of the few sections where the Google News team frames the news experience by structuring and highlighting content.

In the case of Google’s COVID-19 news section, this framing includes structuring content by topic and region as well as highlighting local news and Tweets from local health authorities.
As Google has indicated, the idea behind these curation decisions is to provide context around the coronavirus in a nod to the particular importance of providing accurate information during the pandemic.
Twitter Relies on Human Curation for COVID-19 “Moments”

Twitter’s section devoted to COVID-19 information appears as a tab in the Explore section of the general feed that brings together customized “Moments.” Twitter Moments are designed by Twitter’s curation team and follow specific guidelines, such as rules related to bias, accuracy, and standards. The curation team uses these guidelines to organize and feature content from public health experts and journalists that the platform identifies as reliable.
Platforms Broaden Definition of “Harm” When Distributing COVID-19 Content
The main concern for platforms when curating, publishing, or distributing content is not just elevating quality COVID-19 news but also filtering out “harmful” misinformation across their services. Because of the potentially deadly consequences, platforms have expanded their misinformation policies to treat COVID-19 misinformation as “harmful content.”
As the examples below demonstrate, platforms use a mix of machine and human intervention to identify and mitigate harm. In the context of the pandemic, however, each platform has stated that it will use slightly different protocols that rely more heavily on human intervention to define, identify, and ideally mitigate harm.
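In the abstract, the hybrid protocol the platforms describe resembles a triage pipeline: automated classifiers handle the bulk of content, while borderline items, or anything touching a sensitive topic such as COVID-19, are escalated to human reviewers. The Python sketch below is purely illustrative; the topic list, threshold values, and function names are assumptions, not any platform’s actual system.

```python
from dataclasses import dataclass

# Illustrative sketch of a hybrid human/machine triage pipeline.
# All names, topics, and thresholds here are assumptions.

SENSITIVE_TOPICS = {"covid-19", "vaccines", "5g"}  # assumed sensitive-topic list

@dataclass
class Post:
    text: str
    topic: str
    harm_score: float  # assumed output of an automated classifier, 0..1

def triage(post: Post) -> str:
    """Route a post to automated action or human review."""
    if post.harm_score >= 0.9:
        return "remove"          # high-confidence automated removal
    if post.topic in SENSITIVE_TOPICS or post.harm_score >= 0.5:
        return "human_review"    # borderline or sensitive: escalate to a person
    return "allow"

if __name__ == "__main__":
    print(triage(Post("Drink bleach to cure COVID", "covid-19", 0.95)))  # remove
    print(triage(Post("New study on vaccines", "vaccines", 0.2)))        # human_review
    print(triage(Post("Cute cat video", "pets", 0.01)))                  # allow
```

The design choice this sketch captures is the one the platforms themselves state: machines decide the easy cases at scale, and humans decide the cases where context matters.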
Twitter’s Approach to Defining “Harm”
Under the platform’s updated COVID-19 policy, Twitter expanded its definition of harm to include content that contradicts guidance from public health authorities, including but not limited to harmful cures, denial of established scientific facts, and conspiracy theories about the origins of the virus.
Twitter has expanded its misinformation policy to cover any content that does not conform to guidance from global and national health authorities, and the platform then uses human teams to review specific violations. Twitter’s general rules and policies allow users to report violations for content that “incites” or “induces” harm across a number of categories, including self-harm, violent threats, and violent imagery.
While content moderation and removal is a largely automated process, Twitter states that the human team is able to provide more “context and insight” around COVID-19 in ways that machines cannot. The platform indicates that Twitter employees manually review reports for sensitive topics that require additional context. Twitter also relies on trusted partners — such as local health authorities — to flag potentially harmful content.
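Twitter’s description suggests a review queue in which flags from trusted partners, such as local health authorities, are handled ahead of ordinary user reports. The following is a minimal sketch of that idea, assuming a simple priority queue; the source labels and identifiers are hypothetical, not Twitter’s real system.

```python
from collections import deque

# Illustrative sketch: trusted-partner flags jump the review queue,
# while ordinary user reports are handled in arrival order.

review_queue: deque = deque()

def enqueue_report(content_id: str, source: str) -> None:
    """Queue a report; trusted-partner flags are reviewed first."""
    if source == "trusted_partner":
        review_queue.appendleft(content_id)  # prioritized for human review
    else:
        review_queue.append(content_id)

enqueue_report("tweet/123", "user")
enqueue_report("tweet/456", "trusted_partner")
print(list(review_queue))  # ['tweet/456', 'tweet/123']
```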
How YouTube Addresses “Harm”
YouTube has also expanded its own harmful content policies to include medical misinformation related to COVID-19. YouTube does not indicate additional human intervention for COVID-19 information curation. Instead, the video platform relies primarily on an expansion of what YouTube defines as “harmful” content.
YouTube’s general policies related to violent or dangerous content include anything from hate speech to graphic content. Its COVID-19 medical misinformation policy specifies that the platform will take action against content that contradicts local and global health authorities as well as any content that may encourage the use of unproven treatments.
While YouTube has always relied on a blend of human and automated processes to review harmful content, pandemic-related workplace restrictions have led to a greater reliance on machines to moderate content. However, like Twitter, YouTube states that its automated systems are not as “accurate or granular” as their human counterparts.
Facebook’s Reliance on Third-Party Fact-Checking to Identify “Harm”
Facebook addresses misinformation and harmful COVID-19 content through its work with third-party fact-checking organizations. If one of the 60 fact-checking organizations Facebook works with rates a piece of content as false, the platform indicates it will add warning labels, and reduce distribution of that content. To date, Facebook has also removed misinformation that “could lead to imminent physical harm.”
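The workflow Facebook describes can be pictured as a small decision rule: a “false” rating from a partner organization adds a warning label and demotes the post, while content judged likely to cause imminent physical harm is removed outright. The sketch below is a hedged illustration of that rule; the field names and demotion factor are assumptions, not Facebook’s actual implementation.

```python
# Hypothetical sketch of the fact-checking workflow described above.
# Field names and the demotion factor are illustrative assumptions.

def apply_fact_check(post: dict, rating: str, imminent_harm: bool) -> dict:
    """Apply a fact-checker's verdict to a post."""
    if imminent_harm:
        post["status"] = "removed"   # outright removal for imminent-harm content
    elif rating == "false":
        post["warning_label"] = "False information, checked by fact-checkers"
        post["distribution"] *= 0.2  # assumed demotion factor for reduced reach
    return post

post = {"id": "fb/789", "distribution": 1.0}
print(apply_fact_check(post, rating="false", imminent_harm=False))
# The post keeps circulating, but labeled and at a fraction of its reach.
```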
So far during the pandemic, social media and search platforms have stated that the unprecedented circumstances of COVID-19 require elevating the most accurate, authoritative, and reliable information rather than relying on whatever their algorithms might automatically recommend.
By turning to human curation, platforms may be tacitly indicating that humans are a much bigger part of the reliability equation when it comes to elevating quality information. And through their COVID-19 policies, many platforms are explicitly recognizing that machines are not able to provide the same nuanced context around critical health related information.
Is Human Curation and Content Moderation Not Only Ideal, but Necessary?
Would more human curation, coupled with more human-powered moderation, lead to more reliable information overall, not just about COVID-19? And will the current hybrid approach lead to more reliable information?
According to journalist and commentator Chris Stokel-Walker, “We’re just realizing the human power behind the highly-trumpeted AI-powered solution at one of the worst possible times.”