What Happens When Politicians Break the Rules on Social Media?

A platform-by-platform guide following Twitter and Facebook’s divergent Trump policies

Twitter recently restricted a Trump tweet, placing pressure on all other social media companies to think about how they'll handle the president's posts going forward. Credit: Getty Images

Key insights:

Twitter and Facebook executives have been at opposite ends of a crucial debate over the acceptability of President Donald Trump’s posts.

Their divergent interpretations of the same message raise questions about how other social media platforms will handle the president’s posts moving forward, and what that means for private users on those sites, particularly after Trump signed an executive order attempting to curb platforms’ liability protections.

Consider Trump’s message from last week, when he posted that “when the looting starts, the shooting starts” in response to the George Floyd protests in Minneapolis. Facebook left it up. But Twitter said it violated rules about “glorifying violence” and placed a “public interest notice” on it, requiring users to click an additional time to view it.

Across platforms, these messages could be treated differently if posted by private users than if they’re published by public officials.

So why are public officials treated differently? Which posts could be left up and what could warrant them being taken down?

Here’s a guide to navigating social media companies’ policies toward content like Trump’s controversial posts, the challenges that presents and what the future could hold:

Twitter

Policy: Twitter has robust rules on its website against inciting and glorifying violence, neither of which is allowed for average users. However, the platform thinks it may be in the public interest for people to see some rule-breaking posts when government officials are the violators. In those cases, the site will consider leaving the posts up and adding a public interest notice.

Twitter introduced these public interest notices in June 2019 as a way to flag tweets from public officials that violate site rules. “We may allow controversial content or behavior which may otherwise violate our rules to remain on our service because we believe there is a legitimate public interest in its availability,” Twitter says on its website. “When this happens, we limit engagement with the tweet and add a notice to clarify that the tweet violates our rules, but we believe it should be left up to serve this purpose.”

The notices place the offending tweet behind an interstitial, though users can still click to view it. The notice also restricts the ability to like, retweet or share that tweet, and the tweet will not be “algorithmically recommended” by Twitter. “These actions are meant to limit the Tweet’s reach while maintaining the public’s ability to view and discuss it,” Twitter policy says.

Twitter took similar action on Trump’s tweet about the protests last week, limiting the engagement it could receive. It’s not clear how long the post was up before the notice was added.

This week, another Republican who violated Twitter’s rules didn’t have his tweet removed, but received a public interest notice. Like Trump’s tweet about shooting protesters, a tweet from Rep. Matt Gaetz (R-Fla.) was cited for glorifying violence. Gaetz wrote, “Now that we clearly see Antifa as terrorists, can we hunt them down like we do those in the Middle East,” and the tweet was hidden from view unless users clicked on it.

The rule has also been enforced outside the U.S.: in April, Twitter applied a public interest notice to a tweet from Osmar Terra, Brazil’s minister of citizenship, for spreading misinformation about Covid-19.

Twitter says it will still remove a public official’s tweet if it promotes terrorism, threatens violence against a group or individual, exploits children, encourages suicide or self-harm, or shares an individual’s private information.

Challenge: By putting these notices on Trump’s tweets, Twitter has opened Pandora’s box. Now, every tweet from public officials, including the president, will be held up against this precedent. Even if public officials do not violate the most serious rules where the interstitial would apply—such as glorifying violence—they could see other modifications like the fact-check labels Twitter placed on Trump’s tweets about mail-in ballots. 

Outlook: After years of inaction, Twitter is now weighing in on Trump’s feed, which makes the company act more like a publisher responsible for maintaining editorial integrity. As a private company, Twitter can make editorial decisions, even about the president’s posts, free of government interference. But how it makes these individual calls will be highly scrutinized.

Facebook and Instagram

Policy: There are no rules from Facebook or Instagram—which Facebook owns—that bar the glorification of violence. However, the platforms do outlaw hate speech and incitement of violence. “We remove content, disable accounts, and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety,” Facebook rules state. Just this weekend, Twitter removed the account of a white nationalist group posing as Antifa for inciting violence.

In light of Twitter’s decision, Facebook CEO Mark Zuckerberg has been publicly and vehemently opposed to modifying Trump’s posts in any way. In response, Facebook employees posted public condemnations of Zuckerberg and participated in a virtual “walkout.” Others even resigned.

Facebook has previously removed accounts that engaged in hateful or violent rhetoric, including those of Infowars conspiracy theorist Alex Jones and former Breitbart editor Milo Yiannopoulos. The company recently reported that it removed 4.7 million pieces of content “connected to organized hate” in the first three months of 2020, an effort it said became more concerted after the Christchurch mosque terrorist attack in New Zealand was livestreamed on Facebook.

Facebook has expressed no public desire to moderate the content of public officials or world leaders. However, in Zuckerberg’s lengthy post Friday evening, he said that while he feels Trump’s particular post about protesters does not violate rules about inciting violence, any post that did do so would warrant removal—even from Trump.

“Unlike Twitter, we do not have a policy of putting a warning in front of posts that may incite violence because we believe that if a post incites violence, it should be removed regardless of whether it is newsworthy, even if it comes from a politician,” Zuckerberg said.

Challenge: There’s internal strife over Zuckerberg’s decision. Many Facebook employees publicly condemned Zuckerberg’s response to Trump’s post, calling the post racist and a clear incitement of violence. Design manager Jason Stirman, for one, tweeted that he “completely disagrees” with Zuckerberg’s decision. “There isn’t a neutral position on racism,” he said.

Outlook: Facebook—a company that has come under fire for allowing the incitement of genocide in Myanmar and violence-inciting accounts linked to the Duterte regime in the Philippines—is now feeling the heat in the United States. But Zuckerberg has made no assurance that he will take action against Trump’s posts. As the election nears, it’ll be a waiting game to see if Trump continues to push the boundaries of what is acceptable on Facebook.

Reddit, Snapchat and TikTok

Policies: Trump only posted the controversial message on Twitter and Facebook, but what do other social media platforms say about this kind of content?

Trump made a brief foray onto Reddit for an Ask Me Anything (AMA) session in 2016, months before he was elected president. r/The_Donald, the subreddit that hosted the session, has since been quarantined for violent comments, meaning that users who visit the page have to opt in to view it.

Reddit is split into subreddits, communities organized by interest and moderated by appointed users. Subreddit moderators create their own set of rules and can remove posts or ban accounts based on those rules. However, Reddit as a platform has its own set of rules, which every subreddit must adhere to.

One of those policies prohibits “content that encourages, glorifies, incites or calls for violence or physical harm against an individual or a group of people.” Like Twitter and unlike Facebook, Reddit says that glorifying violence is an offense.

Trump is more active on Snapchat, though it has no central news feed. That limits the spread of information beyond an account’s followers.

Snapchat’s community guidelines prohibit users from threatening “to harm a person, a group of people or someone’s property” and from “encouraging violence.” The platform also announced today that it would no longer promote Trump’s account in its Discover feed because of the president’s language on other social media sites.

While Trump does not have an account on TikTok, the fastest-growing social media platform in the United States, it’s also worth taking a look at its policies. The platform prohibits content that “attacks or incites violence against an individual or a group of individuals” based on race, ethnicity, sex or gender. 

Challenge: These platforms’ policies on messages like Trump’s will face further scrutiny now that Twitter has set a precedent.

Outlook: It remains to be seen how these platforms—and others—would treat rule-breaking messages from public officials, especially since Trump, the most likely offender, mostly depends on Facebook and Twitter.


Scott Nover (@ScottNover, scott.nover@adweek.com) is a platforms reporter at Adweek, covering social media companies and their influence.