Twitter tests warning users when they write potentially offensive replies

Social networks have introduced many measures in recent years to combat online abuse and cyberbullying, as companies try to foster a healthier environment on their platforms. Twitter has taken some measures of its own in the past, giving users the ability to hide replies to their tweets, in addition to automatically demoting replies that are less likely to contribute to a positive conversation.

Today, the Twitter support team revealed that it's testing a new feature with a limited number of users on iOS, which aims to fight the use of harmful language on the platform. When a user writes a reply containing potentially harmful language, Twitter will show a prompt asking them to reconsider the reply before posting it.

When things get heated, you may say things you don’t mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.

— Twitter Support (@TwitterSupport) May 5, 2020

Twitter isn't the first social network to take this kind of approach; Facebook's Instagram began doing something very similar last year. According to Instagram, its approach has shown positive results, so it makes sense for other social networks to follow suit. On Instagram, the warning now appears on original posts as well as replies, but Twitter's implementation doesn't seem to go that far, at least for now.

Naturally, it remains to be seen whether the experiment is welcomed on Twitter, and if it is, how long it will take to become more widely available. Previous features in this vein have been fairly successful on the platform, though, so it's likely to expand over time.
