Brand Safety Concerns Come to Twitter as Ads Run on Profiles Selling Illegal Drugs

More than 20 unnamed brands were affected

The company has announced plans to reconsider its service following controversies over misleading or intentionally false content. Credit: Getty Images

The brand safety issues that plagued YouTube and Facebook in recent years have now made their way to Twitter.

The 4A’s Advertiser Protection Bureau (APB), formed in April as an industrywide effort to address such issues, was alerted to an incident last week that saw sponsored tweets running on Twitter profiles created to promote the illegal sale of narcotics like Oxycodone. In some cases, paid tweets also appeared under search results for terms or hashtags related to such drugs.

Twitter has been placing ads within individual profiles since 2015.

One marketer with knowledge of the situation said that more than 20 brands were affected, though he declined to name them. He added that the drugs referenced by the hashtags included “everything you could imagine.”

"The notion is, if you see something, say something."
Louis Jones, evp, media and data, 4A's

A Twitter spokeswoman confirmed that the incident occurred but said it was quickly fixed once the company was informed.

“We recently determined that ads were being served on profiles that were selling restricted products, amounting to 450 impressions and $1.34 in spend,” the representative said in a statement. “Once we identified the issue, we immediately suspended the accounts in question and updated our systems. As we observe new behaviors attempting to get around the safeguards we have in place, we will continue to refine our tools to make Twitter a safe place for advertisers.”

“They should not be running ads in search results for illegal drugs,” the anonymous marketer said, adding that sponsored tweets appeared on “multiple” profiles that were “clearly [created to] make illegal drug sales.”

Adweek could not locate any of the offending placements this week. A quick search found ads from home-improvement retailer Lowe’s and a children’s hospital under misspellings of Oxycodone, but they were not adjacent to “unsafe” profiles or links promoting the sale of such substances.



Louis Jones, executive vice president of the 4A’s media and data practice, told Adweek that an employee at GroupM was the first to be alerted. That member shared the information across the Bureau so each agency could take steps to determine whether its own clients might be affected. “The notion is, if you see something, say something,” Jones said.

At least one marketer reported briefly pausing their company’s ad spend after the alert first went out. It is unclear whether any GroupM clients were affected, and no sources identified the entity that notified the employee. Jones described it as “a monitoring service.”

“By the time most people saw it and dug into it, Twitter had already resolved the issue,” Jones said. “The longer story here is the APB is trying to figure out what are the processes to put in place so we can stay on top of it and help prevent these things from happening in the future. … Our objective is to get brands out of unsafe places.”

"As we observe new behaviors attempting to get around the safeguards we have in place, we will continue to refine our tools to make Twitter a safe place for advertisers."
Twitter spokesperson

This incident comes as Twitter moves to balance free speech issues with the concerns of advertisers and everyday users. The platform recently followed Apple, YouTube and Facebook in censoring far-right conspiracy theorist and distributor of misinformation Alex Jones, but some activists have launched pressure campaigns targeting advertisers and pushing Twitter to ban Jones altogether. Earlier this week, CEO Jack Dorsey told The Washington Post that his company is considering additional steps to help users “make judgments for themselves.”

“All scaled platforms generally think macro but occasionally need to act micro,” said Marc Goldberg, CEO of ad tech company Trust Metrics, adding that the Jones controversy had “created a dialogue for platforms to think beyond just the algorithm. There will always be those who specifically work to reverse-engineer algorithms and find loopholes, so there’s an importance for human review and intervention.”

While the drug incident may not have been as widespread or offensive as ads running over ISIS recruitment videos on YouTube, a media agency employee and APB member told Adweek her company expected the same degree of responsibility from Twitter that clients demanded from Google.

“I would expect it to be within Twitter’s control to manage [this content] appropriately and to be able to determine how the promoted tweets are appearing in those spaces,” she said.

This media agency employee, who specializes in data analytics, said the platform will “probably not” be able to make this sort of challenge go away entirely, but added, “I like to know that they are trying in that way.”

The anonymous marketer, who reached out to Twitter on behalf of clients upon learning of the incident last week, claimed the platform’s team was “dismissive, not wanting to make it a big issue.” The person also noted that other major platforms, such as YouTube, had notified brands when similar matters arose.

“Like a lot of problems on Twitter, it could be fixed had [someone] been addressing the issues on the platform,” the marketer said. “I’m not convinced Twitter cares about this issue.”


Lindsay Rittenhouse (@kitten_mouse, lindsay.rittenhouse@adweek.com) is a staff writer at Adweek, where she specializes in covering the world of agencies and their clients.
Patrick Coffee (@PatrickCoffee, patrick.coffee@adweek.com) is a senior editor for Adweek.
Publish date: August 17, 2018 https://dev.adweek.com/agencies/brand-safety-concerns-hit-twitter-as-sponsored-tweets-appear-on-profiles-selling-illegal-drugs/