A person with their hands on a laptop keyboard is looking at something happening over their screen with a worried expression.
Yasmin Dwiputri & Data Hazards Project / Better Images of AI / AI across industries / CC-BY 4.0

The risks and harms arising from rapid developments in the field of artificial intelligence (AI) become more evident every day. Victims of algorithmic harms, civil society groups, researchers, and even the AI industry itself have called for urgent action from regulators to curb the harms resulting from AI.

Significant progress on the regulatory front has taken place in recent years, spearheaded by the European Union, whose institutions are currently negotiating an advanced draft of a pioneering general AI regulation (the “AI Act”). As the post-Brexit United Kingdom will not automatically be bound by European AI laws, the UK government has set out its own approach to AI regulation in a white paper entitled “A Pro-Innovation Approach to AI Regulation” (hereafter “the White Paper”).

Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics, and Informatics at the Birmingham Law School, and Emma Ahmed-Rengers, PhD candidate at the Birmingham Law School, argue in their article that the White Paper provides an “inadequate basis for sound policy, let alone the foundations of an effective and legitimate regulatory framework that will serve the public interest.”

The UK wants to become a “technology superpower” through the promotion of technological “innovation.” The White Paper states that through innovation, the AI industry can make the UK the best place in the world. But we know that AI is already harming people in ways which were not anticipated by our current legal frameworks, and the regulatory framework suggested in the White Paper does not create any new legal rights or obligations. We worry that the government’s preoccupation with “innovation” will mean that the level of protection for fundamental rights, the rule of law, and democracy in the UK will fall behind – especially compared to the EU, whose proposed AI Act represents a much firmer commitment to those values.

Emma Ahmed-Rengers - PhD Candidate in Law and Data Science.

The ministerial foreword to the White Paper sets out the Government’s ambition: to make the UK the “smartest, healthiest, safest and happiest place to live and work”. To achieve that ambition, the Government is relying on AI innovation.

Given that no new legal rights or obligations to address harms resulting from AI are anticipated in UK law, Yeung and Ahmed-Rengers doubt that the policy outlined in the White Paper will achieve its explicit goals of ensuring legal certainty and coherence. They are also critical of the way the White Paper discusses fundamental values, such as the rule of law, democracy, human rights, and public trust: “To frame public trust as something that must be achieved for the sake of AI adoption is to undersell the importance of the role of citizens in democratic oversight of AI.”

If the UK is truly committed to being a “leader” in the global conversation on AI regulation and being a “champion” of democratic values (and the “smartest, healthiest, safest and happiest place”), we need to abandon empty pro-innovation words and endeavour to ensure that regulatory reform makes a substantive positive contribution to human rights, the rule of law, equality, sustainability, and democracy – in short, to human flourishing – rather than pursuing these as values only because they support “economic priorities.”

Professor Karen Yeung - Interdisciplinary Professorial Fellow in Law, Ethics and Informatics.

The rights and interests of citizens should be at the centre of AI regulation. Regulatory efforts should both protect and empower citizens, particularly through meaningful and effective legal constraints, complemented by rights to information, participation, and remedies. Citizens are not just there to be convinced of the good of AI innovation – it is for them to decide what is in their interest, and they must be empowered with legal rights to protect themselves against the harms which AI “innovation” is already creating.