Over the past few years, Facebook has made a point of protecting its users from potentially offensive content. One way it has done that is by warning users on Instagram when they're about to post a comment that might be considered offensive, a feature that began rolling out this summer.
Now, that same warning is expanding to new feed posts, and it'll work much like before. When Instagram's AI detects that a post might contain signs of bullying, the app will let the user know that similar content has been reported for bullying, and give them the option to go back and reconsider their words before posting.
Instagram says that it's seen positive results from implementing the feature for comments, and that this change could not only reduce bullying but also help educate users about what's allowed on the platform. The feature is rolling out to users in select countries at first, but will become more broadly available over time.
In addition to protecting users from bullying, Instagram has also been making strides in protecting the safety and privacy of its younger audience by preventing users under the age of 13 from creating accounts. In the future, it will also recommend certain privacy settings based on the user's age.