Twitter is turning to its users for help crafting its dehumanization policy, making a survey available via a blog post where people can share their feedback until Tuesday, Oct. 9, at 6 a.m. PT (9 a.m. ET).
The policy currently reads, “You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.”
Twitter legal, policy and trust and safety lead Vijaya Gadde and vice president of trust and safety Del Harvey defined dehumanization as “Language that treats others as less than human. Dehumanization can occur when others are denied of human qualities (animalistic dehumanization) or when others are denied of human nature (mechanistic dehumanization). Examples can include comparing groups to animals and viruses (animalistic), or reducing groups to their genitalia (mechanistic).”
And they defined identifiable group as follows: “Any group of people that can be distinguished by their shared characteristics such as their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location or social practices.”
The questions Twitter asks in its survey are:
- On a scale of one to five, how would you rate the clarity of the dehumanization policy provided, where one is “not at all clear” and five is “extremely clear”?
- How can the dehumanization policy be improved, if at all?
- Are there examples of speech that contributes to a healthy conversation, but may violate this policy? If so, please provide examples.
- Is there any additional feedback you’d like to provide about the policy?
- What is your age?
- What is your gender?
- In what country are you currently located?
- What is your Twitter username (optional)?
- If Twitter has additional questions about your response, would you be willing to be contacted via email?
Gadde and Harvey wrote the following introduction to the survey: “For the past three months, we have been developing a new policy to address dehumanizing language on Twitter. Language that makes someone less than human can have repercussions off the service, including normalizing serious violence. Some of this content falls within our hateful conduct policy (which prohibits the promotion of violence against or direct attacks or threats against other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or serious disease), but there are still Tweets many people consider to be abusive, even when they do not break our rules. Better addressing this gap is part of our work to serve a healthy conversation.”
They continued, “We want your feedback to ensure we consider global perspectives and how this policy may impact different communities and cultures. For languages not represented here, our policy team is working closely with local non-governmental organizations and policy makers to ensure their perspectives are captured.”