Explained: The Controversy Surrounding the UK’s Online Safety Bill and Rishi Sunak’s U-Turn

The UK’s Prime Minister, Rishi Sunak, has been forced to back down over the Online Safety Bill. The bill, which aims to keep websites and internet-based services free of illegal and harmful material while protecting freedom of expression, faced a major backbench rebellion, with some 50 MPs pressing for an amendment that would toughen punishments for social media bosses.

The amendment proposed that technology chiefs would be held criminally liable and could face jail time for failing to block minors from seeing damaging content. This proposed change was accepted by Culture Secretary Michelle Donelan after talks with the rebels over the weekend. The government had been facing defeat as Labour also supported the move.

As a result, the Prime Minister has now promised to introduce similar proposals. This marks the third time that Sunak has backed down in the face of rebellious backbenchers since taking power in October, following concessions on the issues of housing targets for councils and restrictions on onshore wind farms.

What is the UK’s Online Safety Bill?

The Online Safety Bill is proposed legislation that aims to make the internet a safer place for users in the United Kingdom. The bill is designed to protect internet users from fraudulent and other potentially harmful content and, in particular, to prevent children from accessing damaging material. It does this by imposing requirements on how social media platforms and other online services assess and remove illegal material and content they deem harmful. The bill applies to search engines and to internet services that host user-generated content, such as social media platforms, online forums, some online games, and sites that publish or display pornographic content.

The bill would impose a regulatory framework on these intermediary platforms, requiring them to take responsibility for user-generated content and to ensure their systems and processes offer “adequate protection of citizens from harm presented by content”. This means, for example, that companies must actively look for illegal content and remove it as soon as it appears, rather than waiting for someone to report it, and must be more transparent about their internal content-moderation policies.

The UK’s communications regulator, Ofcom, will be appointed to oversee the Online Safety regime and given a range of powers to gather the information it needs to support its oversight and enforcement activity. The bill will also require companies to provide a right of appeal when posts are deleted, and not to remove or restrict legal content, or suspend or ban a user, unless the circumstances for doing so are clearly set out in their terms of service.
