UK's media regulator will force social media firms to remove harmful content

The UK’s media regulator, Ofcom, is to be granted new powers by the government which will allow it to force social media firms to act on harmful content. Until now, social media companies, such as Twitter and Facebook, have regulated themselves. Ofcom will police content depicting violence, cyber-bullying, terrorism, and child abuse. Ofcom’s favourite weapon for going after firms is financial penalties, so it’s unlikely websites will be blocked for slow compliance or non-compliance.

The expansion of Ofcom’s powers is the first response the government has issued since the Online Harms consultation was carried out last year. The new rules will apply to platforms hosting user-generated content such as Facebook, Instagram, Snapchat, Twitter, YouTube, and TikTok. While the government will set the direction of the policy, Ofcom will have the freedom to draw up and adapt the details.

The NSPCC, a children’s charity in the UK, welcomed the news. Its chief executive, Peter Wanless, said:

“Too many times social media companies have said: ‘We don’t like the idea of children being abused on our sites, we’ll do something, leave it to us.’ Thirteen self-regulatory attempts to keep children safe online have failed. Statutory regulation is essential.”

According to the BBC, the government will officially announce the new powers later today. It’s not clear yet when the new rules will come into effect nor exactly which firms will have to abide by these rules.

Source: BBC News