Twitter made a huge update to its COVID-19 guidance to address "Unverified claims that incite people to action,” including those that “could lead to widespread panic, social unrest, or large-scale disorder."
Twitter Safety announced the platform’s plans to crack down on “unverified claims that incite people to engage in harmful activity [or] could lead to the destruction or damage of critical 5G infrastructure” in an April 22 tweet. The Verge observed that this may have been spurred by the fact that “people have set British 5G towers on fire because of conspiracy theories that falsely link the spread of COVID-19 to the rollout of 5G.”
In a second tweet within that thread, Twitter Safety demonstrated that it’s not taking this commitment to tackling misinformation lightly:
Since introducing our updated policies on March 18, we’ve removed over 2,230 Tweets containing misleading and potentially harmful content. Our automated systems have challenged more than 3.4 million accounts targeting manipulative discussions around COVID-19.
The original post linked to a recent update of Twitter’s blog post “Coronavirus: Staying safe and informed on Twitter,” published April 3. The newly added “Broadening our guidance on unverified claims” section listed examples of dangerous, “unverified claims that incite people to action and cause widespread panic, social unrest or large-scale disorder, such as ‘The National Guard just announced that no more shipments of food will be arriving for two months — run to the grocery store ASAP and buy everything’ or ‘5G causes coronavirus — go destroy the cell towers in your neighborhood!’”
“We’re prioritizing the removal of COVID-19 content when it has a call to action that could potentially cause harm,” a Twitter spokesperson explained to TechCrunch. The spokesperson also noted Twitter’s recent track record:
As we’ve said previously, we will not take enforcement action on every Tweet that contains incomplete or disputed information about COVID-19. Since introducing these new policies on March 18, we’ve removed more than 2,200 Tweets. As we’ve doubled down on tech, our automated systems have challenged more than 3.4 million accounts which were targeting discussions around COVID-19 with spammy or manipulative behaviors.
Protocol’s editor at large David Pierce observed in the April 23 Source Code newsletter that, while “there's still no way to report a tweet and indicate that it's for pandemic-related reasons,” this will be “an interesting test case for social media content moderation” going forward.
“As it broadens its policy,” Pierce wrote in the newsletter, “Twitter's likely to have a harder time enforcing it — and its decisions are likely to be more controversial.”
Twitter had already announced plans to expand “our safety rules to include content that could place people at a higher risk of transmitting COVID-19” back on March 18. The policies “require[d] people to remove Tweets” containing content that “increases the chance that someone contracts or transmits the virus,” including:
- Denial of expert guidance
- Encouragement to use fake or ineffective treatments, preventions, and diagnostic techniques
- Misleading content purporting to be from experts or authorities