Twitter has announced upcoming policy changes ostensibly meant to protect users from content that could “threaten someone’s physical safety” or “lead to offline harm.”

Twitter Safety announced on its official account on Oct. 21 that the company is “working on a new policy to address synthetic and manipulated media on Twitter” but wants to hear from users before the changes are finalized.

Twitter Safety defined “synthetic and manipulated media” as “media that’s been significantly altered or created in a way that changes the original meaning/purpose, or makes it seem like certain events took place that didn’t actually happen.”

Twitter offered three reasons for the change:

“1. We need to consider how synthetic media is shared on Twitter in potentially damaging contexts.

“2. We want to listen and consider your perspectives in our policy development process.

“3. We want to be transparent about our approach and values.”

Twitter ended the announcement by stating that “[i]n the coming weeks,” it will open “a feedback period” so users can help refine the policy “before it goes live.”

Twitter has a complicated history of proposing ostensibly sound rules that slide into censorship or outright liberal hypocrisy.

In August 2018, Twitter’s Vice President of Trust and Safety Del Harvey wrote in a company blog post titled “The Twitter Rules: a living document” that “[w]hile we welcome everyone to express themselves on our service, we prohibit targeted behavior that harasses, threatens, or uses fear to silence the voices of others.”


Harvey also wrote a letter to employees describing how the company had been reforming itself to deal with offensive speech:

“Trust & Safety has been working with our partners in User Research, Twitter Services, and Product over recent months to evaluate how we can do more to help customers feel safe as it relates to hate speech, driven by a principle of minimizing real-world harm. Our initial proposal for implementing this, which we were slated to resent to staff at the end of this month, focuses on addressing dehumanizing speech.”

Her definition of “dehumanizing speech” was “speech that treats or describes others as less than human.”

One does not have to think hard to see the problems with that definition. Would comparing people in unappealing ways, for example, fit her description of “dehumanizing speech”?
