Is the new Online Safety Bill built to fail?
The UK's proposed legal response to protect vulnerable internet users may not be enough to shield children and adults from illegal content
Whether child or adult, a potentially harrowing and damaging online journey can await the unwary internet user. Illegal content, from child sexual abuse images and revenge pornography to threats to kill, the sale of firearms and terrorist material, is just some of the bear traps that lie in wait online.
Like legislators around the world, the UK Government is under ever-increasing pressure to find a way to sanitise our online environment. Its response to the threat is the Draft Online Safety Bill – a piece of proposed legislation that has critics on both sides.
Technology companies say that criminal sanctions threaten investment into the British economy, whilst mental health charities, such as the Samaritans, claim that removing the duty of care on ‘legal but harmful’ content for adults means that lives will continue to be at risk. However, the fact that criminal liability for endangering child safety online is set to become law shows that MPs recognise the dangers faced by young internet users.
After last summer’s political turmoil, during which the Bill was paused and rumoured to be scrapped entirely, it has returned to Parliament with significant changes expected to follow its new passage through both Houses.
Controversial ‘legal but harmful’ provisions will be scrapped, but platforms and other intermediaries will be required to introduce a system allowing users more control to filter out harmful content they do not want to see.
The Bill is controversial in many respects, but one of its most contentious aspects lies in its treatment of freedom of expression and the power it could, potentially, give to intermediaries.
Clauses 12 and 23 set out a general duty applicable to user-to-user services (internet services that enable user-generated content, such as Facebook or Twitter) and to search services, such as Google. This duty calls on them to ‘have regard to the importance of’: (i) ‘protecting users’ right to freedom of expression’ and (ii) ‘protecting users from unwarranted infringements of privacy’. In addition, clause 13 provides ‘duties to protect content of democratic importance’ and clause 14 prescribes ‘duties to protect journalistic content’.
However, unlike clauses 12 and 23, the duties in clauses 13 and 14 apply only to ‘Category 1 services’, a currently undefined class of user-to-user services. The core free speech duties under clauses 12, 13 and 14 of the Bill require social media platforms and relevant tech companies only to ‘have regard to’ or, in the case of clauses 13 and 14, ‘take into account’, free speech rights or the protection of democratic or journalistic content. As a result, these companies may simply pay lip service to these ‘softer’ duties when a conflict arises with the legislation’s numerous, harder-edged ‘safety duties’.
This distinction between harder and softer duties gives online platforms, such as Facebook and Twitter, a statutory footing to produce boilerplate policies stating that they have ‘had regard’ to free speech or privacy, or ‘taken into account’ the protection of democratic or journalistic content. So long as they can point to a small number of decisions in which moderators have had regard to, or taken into account, these duties, they will be able to demonstrate their compliance with the legislation to Ofcom. It will be extremely difficult, or even impossible, to interrogate the process.
Furthermore, the requirement that clause 12 imposes on platforms to merely ‘have regard to the importance’ of ‘protecting users’ right to freedom of expression within the law’ does not go far enough to ensure the Bill complies with rulings on freedom of expression from the European Court of Human Rights. By making platforms responsible for the content on their sites, the Bill requires them to act as our online social conscience, thereby making them de facto gatekeepers to the online world.
Although ‘privatised censorship’ has taken place on platforms such as Facebook and Twitter since their creation, the Bill gives them, and others, a statutory basis for subjectively evaluating and censoring content. This, along with the potential conflict between the harder and softer duties, could lead platforms to adopt an over-cautious approach to monitoring content, removing anything that might be illegal and would therefore bring them within the scope of the duty and expose them to regulatory sanctions. Legitimate content could thus be removed because it is incorrectly thought to be illegal. Let’s not forget that, rather than human moderation, most platforms will be deploying algorithms and AI for this task. More cynically, it may also provide platforms with an opportunity to remove content that does not conform with their ideological values.
What the Bill will look like when it eventually comes into force – and it certainly seems as though it is now a question of when rather than if – remains to be seen. Much of the legalistic detail is uncertain and undefined and will be subject to secondary legislation post-enactment. There are not many certainties when it comes to this Bill, but its relationship with freedom of speech will certainly be a source of argument and debate rumbling on for some time to come. We watch this space with bated breath!