Facebook seems to be catching more flak than Twitter over fake news, but that didn’t stop Colin Crowell, Twitter’s vice president of public policy, government and philanthropy, from getting ahead of the issue.
Crowell penned a blog post outlining Twitter’s approach to handling manipulative bots and fake news and detailing the challenges of doing so in a real-time environment.
He argued that Twitter’s real-time nature actually works in the platform’s favor:
Twitter’s open and real-time nature is a powerful antidote to the spreading of all types of false information. This is important because we cannot distinguish whether every single tweet from every person is truthful or not. We, as a company, should not be the arbiter of truth. Journalists, experts and engaged citizens tweet side-by-side correcting and challenging public discourse in seconds. These vital interactions happen on Twitter every day, and we’re working to ensure we are surfacing the highest quality and most relevant content and context first.
And on Twitter’s battle against bots, he wrote:
While bots can be a positive and vital tool, from customer support to public safety, we strictly prohibit the use of bots and other networks of manipulation to undermine the core functionality of our service. We’ve been doubling down on our efforts here, expanding our team and resources and building new tools and processes. We’ll continue to iterate, learn and make improvements on a rolling basis to ensure that our tech is effective in the face of new challenges.
We’re working hard to detect spammy behaviors at source, such as the mass distribution of tweets or attempts to manipulate trending topics. We also reduce the visibility of potentially spammy tweets or accounts while we investigate whether a policy violation has occurred. When we do detect duplicative or suspicious activity, we suspend accounts. We also frequently take action against applications that abuse the public API (application-programming interface) to automate activity on Twitter, stopping potentially manipulative bots at the source.
It’s worth noting that in order to respond to this challenge efficiently and to ensure that people cannot circumvent these safeguards, we’re unable to share the details of these internal signals in our public API. While this means research conducted by third parties about the impact of bots on Twitter is often inaccurate and methodologically flawed, we must protect the future effectiveness of our work.
Image courtesy of Aleutie/iStock.