Will the draft Online Safety Bill deliver its promises?
The government says its draft Online Safety Bill (the Bill), which was published earlier this month, will make the UK ‘the safest place in the world to be online while defending free expression’. True? Or will the Bill fall short of the government’s claims?
There has been a longstanding and growing demand for greater child online safety, particularly around access to pornography. For example, although age-verification checks on commercial pornography sites were a feature of the Digital Economy Act 2017 (the Act) as passed by parliament, the government controversially decided not to implement that part of the Act. Our public law litigation team subsequently won the right to challenge that decision, and the controversy is ongoing. In the wake of this, the government published an Online Harms White Paper, which, with a bit of positive rebranding, has formed the basis of the draft Online Safety Bill.
At the same time, calls for regulation of internet use have multiplied. Pressure on the government has been mounting from those concerned about the availability of material relating to terrorism and child sexual abuse, the online proliferation of racist abuse, stalking, and romance and investment scams, continuing concerns about the safety of children, the spread of ‘disinformation’ and the manipulation of electoral processes.
In attempting to address these harms, the Bill is ambitious in its remit. However, critics doubt whether it will be workable in practice and fear it may have a stifling effect on free speech and democratic processes.
What it covers: a few key points
- The Bill introduces duties of care on ‘regulated services’, that is, user-to-user services and search services with links to the UK. These duties will be wide-reaching, affecting not just the big social media companies but any company whose service allows content generated or uploaded by one user to be encountered by another.
- The ‘duties of care’ primarily relate to online safety. These are duties such as taking proportionate steps to mitigate and effectively manage the risks of harm to individuals by monitoring and removing illegal and harmful content. The extension to ‘harmful’ as well as ‘illegal’ content is notable and seems likely to impose significantly more onerous responsibilities on providers of regulated services. Reconciling a cautious approach in determining what amounts to ‘harmful’ content with the need to avoid acting as a form of censor will be a difficult balance to strike.
- There are additional duties if services are likely to be accessed by children, and further duties for ‘Category 1 services’ to have regard to protecting rights to freedom of expression, privacy, journalistic content and content of democratic importance. Category 1 services will be set out on a register maintained by OFCOM and will be those services with the largest user bases and the widest functionality.
- OFCOM will take on a significant new role as regulator. It will have the power to impose fines of up to £18 million or 10% of annual turnover, as well as to enforce criminal sanctions against senior managers of companies found to be in breach of their duties under the Bill. It may be that in due course decisions by OFCOM are taken into account by other public bodies, such as the Information Commissioner’s Office and the Children’s Commissioner, in overlapping areas.
What it doesn’t cover
There are some notable exceptions to ‘regulated content’ within the Bill such as email, text messaging and comments and reviews on provider content.
Content on news publishers’ websites does not fall within the scope of the Bill, but user-to-user services will have a duty to consider the importance of journalism when undertaking content moderation and to operate a fast-track appeals process for journalists whose content is removed, and they will be held to account by OFCOM for the arbitrary removal of journalistic content.
Commercial pornography websites without user-generated content will not be subject to age-verification checks, to the frustration of child-safety campaign groups. Similarly, fraud perpetrated via advertising, emails or cloned websites will not fall within the scope of the Bill if the content is not user-generated.
Will it achieve what it sets out to?
Many of the new duties and legal tests in the Bill appear difficult to apply in practice. For example, content is harmful to children:
‘if the provider of the service has reasonable grounds to believe that the nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on a child of ordinary sensibilities.’
There is also concern that the free speech duties are not defined clearly enough, with regulated services merely having to ‘take into account’ or ‘have regard to’ these rights.
The Bill could place a heavy burden of content moderation on the smaller companies which fall within its scope, creating a risk of oppressive liability or prompting service providers to take an overly cautious approach. That, in turn, could allow the bigger Category 1 services to define what is, and what is not, acceptable content.
Overall, the reach of the Bill is ambitious, and it addresses areas which are controversial, emotive and polarising. Some of the provisions, though, are difficult and may prove to be unduly onerous or unworkable in practice. As the Bill travels through parliament, BDB Pitmans will be monitoring its progress, and it will be interesting to see what amendments are suggested to this proposed regulation of internet use.