Twitter announced today a new policy that it claims will offer more transparency around which hateful tweets on its platform have been subject to enforcement action. Typically, when tweets violate Twitter’s policies, one of the actions the company can take is to limit those tweets’ reach, a practice it calls “visibility filtering.” In these scenarios, the tweets remain online but become less discoverable, as they’re excluded from areas like search results, trends, recommended notifications, the For You and Following timelines, and more.
Instead, if users want to see the tweet, they have to visit the author’s profile directly.
The tweet may also be downranked in replies when such enforcement takes place and ads won’t run against the content, Twitter’s guidelines state.
Historically, the wider public would not necessarily know if a tweet had been moderated in this way. Now, Twitter says that will change.
The company plans to “soon” begin adding visible labels to tweets that have been identified as potentially violating its policies and whose visibility has been reduced as a result. It did not say when exactly the system would be fully rolled out across its network.
In addition, not all tweets that have had their visibility reduced will be labeled, the company noted.
It’s starting only with tweets that violate its Hateful Conduct policy and says it will expand the feature to other policy areas in the “coming months.”
“This change is designed to result in enforcement actions that are more proportional and transparent for everyone on our platform,” a blog post authored by “Twitter Safety” stated. The post additionally touted Twitter’s enforcement philosophy, calling it “Freedom of Speech, not Freedom of Reach.”
If a tweet is labeled, the user themselves won’t be shadowbanned or removed from the network — the company notes the policy actions will occur at “a tweet level only and will not affect a user’s account.”
Twitter also explains that users whose tweets were labeled will be able to submit feedback if they think their tweet was incorrectly flagged, but says they may not get a response to that feedback nor will it guarantee the tweet’s reach will be restored.
This likely has to do with the vast cuts Twitter made to its Trust & Safety teams and the company as a whole. Twitter may be relying heavily on automation to make its labeling decisions, though it’s unclear to what extent the system will be automated. (Twitter no longer replies to press inquiries, so blog posts and tweets from the company or its new owner, Elon Musk, are the only official word on matters like this.) Automation, of course, could mean Twitter will get things wrong — something it admits in a Twitter thread about the changes. There, the company also says it plans to allow authors to appeal its decisions at some point “in the future.”
Again, no hard deadline or a ballpark timeframe was provided.
We will continue to remove illegal content and suspend bad actors from our platform. We’re committed to increasing transparency around our moderation actions, and we’ll continue to share updates on our progress. You can learn more about our various enforcement actions here:…
— Twitter Safety (@TwitterSafety) April 17, 2023
The launch of the new policy follows Twitter’s earlier decisions under Musk to allow controversial figures, including Trump and neo-Nazis, to rejoin the network. In one incident, Musk brought back the artist formerly known as Kanye West, who then tweeted a swastika and was resuspended.
The new policy announced today may be one that reflects Twitter’s attempt to balance two opposing forces. On the one hand, Musk is a free-speech proponent who railed against Twitter’s allegedly less-than-transparent moderation policies in the years before he took control of the company. He even went so far as to publicly share internal documents and communications, aka the Twitter Files, in an attempt to expose how Twitter’s moderation decisions had been made in the past.
The results weren’t as astounding as he hoped. What they largely revealed was a company having to make complex and nuanced decisions, often in real time, around borderline content and high-profile figures.
Visibility filtering was one of the topics the Twitter Files had covered, in fact.
Musk aimed to show that Twitter had been politically biased in its past filtering of tweets, but the reporting didn’t include any information about how many accounts or tweets were de-amplified, or the politics of those who were impacted, so no conclusions could be drawn.
On the other side of things, Twitter’s advertisers have been fleeing the network since Musk’s takeover, and its brand safety measures haven’t been able to restore their trust. The company may hope that labeling tweets that have been de-ranked will help marketers feel more comfortable that their ads aren’t running directly alongside hate speech. But advertisers have plenty of other reasons to be concerned about Twitter.
Since Musk’s acquisition, the network has been chaotic, with constantly changing policies and features, including a now pay-for-reach version of Twitter Blue and, over the past few days, changes to how news outlets are labeled, prompting generally reliable newsrooms like PBS, NPR, and the CBC to leave the platform entirely.