Ever since the launch of ChatGPT, there has been a big shift towards content safety. While the self-censorship practised by ChatGPT and other chatbots has never really applied to search engines, Ofcom is now highlighting one of the dangers of that gap: easy access to self-harm content.
According to a new study conducted for Ofcom by the Network Contagion Research Institute (NCRI), the major search engines, including Google, Bing, DuckDuckGo, Yahoo!, and AOL, can act as gateways to content that glorifies or celebrates self-injury, including web pages, images, and videos.
As part of the study, the NCRI entered common self-injury queries as well as cryptic terms used by online communities to hide their real meaning. Analysing 37,000 search results, the researchers found that harmful self-injury content is prevalent, that image searches are particularly risky, and that cryptic search terms surface even more harmful content.
Commenting on the findings, Almudena Lara, Online Safety Policy Development Director at Ofcom, said:
“Search engines are often the starting point for people’s online experience, and we’re concerned they can act as one-click gateways to seriously harmful self-injury content.
Search services need to understand their potential risks and the effectiveness of their protection measures – particularly for keeping children safe online – ahead of our wide-ranging consultation due in Spring.”
One major caveat of this study is that the researchers did not use safe search or image-blurring settings. If parents have proper filters in place, features like safe search should be enabled by default without an option to turn them off, offering some protection for children.
Ofcom is the enforcer of the recently passed Online Safety Act in the UK. Under the Act, the regulator will require search engines to take steps to reduce the chances of children finding harmful content such as self-harm promotion, suicide, and eating disorder material. A consultation is due in the spring.
Source: Ofcom