YouTube Brings Fact-Checking Displays to the US

It’s the latest step by the platform to combat misinformation

YouTube, like other social platforms, is taking the threat of misinformation more seriously. Dianna McDougall

Key insight:

YouTube will introduce fact-checking panels next to dubious claims for users in the United States, the streaming service announced today. This feature has been live in India and Brazil since last year.

Like all platforms that rely on user-generated content, YouTube has long grappled with misinformation, including conspiracy theories, false political messaging and harmful health claims. Since the onset of the Covid-19 pandemic, the stakes of this fight have risen: The flow of reliable coronavirus-related information can be a matter of life and death. In turn, platforms including YouTube have elevated authoritative content from public health agencies like the World Health Organization and the Centers for Disease Control and Prevention. 

These agencies’ messaging has been virtually unavoidable in recent weeks, as the platforms have mostly granted them free ads and prime placement. YouTube has used information panels to promote this content and, in the past, to direct users to Wikipedia and Encyclopaedia Britannica articles on long-standing issues like flat-earth conspiracy theories, the Google-owned company said.

YouTube’s fact checking relies on The ClaimReview Project, a tagging system developed by the Reporters’ Lab at Duke University. “Over a dozen U.S. publishers are participating today, including The Dispatch, FactCheck.org, PolitiFact and The Washington Post Fact Checker, and we encourage more publishers and fact checkers to explore using ClaimReview,” YouTube said in a blog post.
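ClaimReview is an open schema.org markup standard that fact-checkers embed in their articles so platforms can surface verdicts automatically. As a rough sketch of what such a tag might look like (all specific URLs, names, claims and ratings below are hypothetical placeholders, not drawn from any real fact check):

```python
import json

# A minimal sketch of a schema.org ClaimReview tag, the markup standard
# that fact-check panels can read. Every concrete value here (URL,
# publisher name, claim text, rating) is a hypothetical placeholder.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/checks/hot-water-claim",  # hypothetical
    "claimReviewed": "Drinking hot water cures the coronavirus",      # claim under review
    "itemReviewed": {
        "@type": "Claim",
        "datePublished": "2020-04-01",
    },
    "author": {
        "@type": "Organization",
        "name": "Example Fact Checker",  # hypothetical publisher
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # the verdict label shown to readers
    },
}

# Serialize as JSON-LD, the form typically embedded in a page's
# <script type="application/ld+json"> tag for crawlers to pick up.
print(json.dumps(claim_review, indent=2))
```

Because the verdict travels as structured data rather than prose, a platform like YouTube can match the `claimReviewed` text against dubious queries and render the publisher's rating in a panel without interpreting the article itself.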

Facebook also employs fact-checking tags, but opinions are mixed on whether these efforts are effective or instead cause users to double down on their preconceptions. Facebook has also started flagging to users that they may have interacted with Covid-19-related misinformation.

YouTube also announced it would donate an additional $1 million to the International Fact-Checking Network to “bolster fact-checking and verification efforts across the world.”

With the Covid-19 crisis, platforms seem to be taking misinformation more seriously. “Where they were once loath to intervene in the affairs of their own algorithms, even when the potential harms to public health were clear, the arrival of a novel coronavirus put them in a newly interventionist mindset,” Casey Newton wrote in The Verge last month.

But platforms are often fighting uphill battles against their own designs, which favor engagement over all else—including truthfulness.

“The entire structure of the system is set up in ways that unfortunately promote the spread of spectacular disinformation around crisis events,” University of Washington researcher Carl Bergstrom told Adweek in March. “The algorithms that are used for choosing what content we see have not been designed so that we see the most accurate content, but so that we see the most engaging content.”


@ScottNover scott.nover@adweek.com Scott Nover is a platforms reporter at Adweek, covering social media companies and their influence.