Protecting the public from ‘lawless AI’

The rapid adoption of AI by state actors is outpacing legal safeguards, raising urgent concerns about governance to protect against human rights violations.


Decision-making AI tools such as facial recognition technology often fail to comply with data protection laws. Image by Sebastian Mellen.

Article by Adam Green, freelance journalist

The rising use of AI by governments

Life-shaping decisions affecting citizens, from immigration arrests to welfare eligibility, are increasingly guided by algorithmic systems operating without effective legal safeguards. As a result, individual liberties, rights and freedoms are being trampled with alarming frequency.

Professor Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at the University of Birmingham, argues that government actors are routinely using powerful AI tools like facial recognition technology (FRT) without proper regard for laws that bind the exercise of state power. Some even claim compliance with legal safeguards while essentially ignoring them, a practice Yeung dubs ‘rule of law gaslighting’.


AI systems frequently scan faces without consent. Image by Richard Baker.

For instance, a privacy campaign group found that the police force in Essex, UK, made misleading claims about the rigour of its FRT equality impact assessments, relying on largely irrelevant test results: it invoked figures on ‘false positives’ (i.e. automated facial matches found to be mistaken) generated by a different technology system from the one it was actually using. “AI decision support tools are being used without adequate attention to mistaken outputs and their consequences, and without effective legal guardrails, including failure to adhere to basic legal and constitutional principles that serve as fundamental constraints on the abuse of power by governments, including power exercised through the use of AI,” notes Yeung.

This reckless use of AI by state actors is more salient now than ever. The Immigration and Customs Enforcement (ICE) agency is vastly expanding its activities across the US, including using FRT to scan faces without consent, sparking civil unrest. UK Immigration Enforcement is using live FRT at ports to generate alerts for people of interest on ‘wanted’ lists, building on the country’s near-decade track record of experiments ranging from live event monitoring to the creation of FRT watchlists. Since 2023, the technology has been used in thousands of arrests. Yet independent studies have repeatedly found that many FRT systems exhibit racial bias, raising serious concerns about their fairness and reliability.
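
To see why such findings matter, and why false positive figures from one system say little about another, here is a minimal, purely hypothetical sketch of the kind of check an equality impact assessment might involve: computing false positive rates separately for each demographic group rather than relying on a single aggregate figure. All names and numbers below are invented for illustration and do not describe any real deployment.

# Hypothetical audit sketch (illustrative only): per-group false positive rates
# for a face-matching system. Each record is
# (demographic_group, system_flagged_match, actually_same_person); all values are invented.
from collections import defaultdict

audit_records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", False, False),
    ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
    ("group_b", True, True),
]

def false_positive_rates(records):
    """False positive rate per group: share of genuine non-matches that were flagged."""
    flagged = defaultdict(int)      # mistaken 'match' alerts per group
    non_matches = defaultdict(int)  # genuine non-matches per group
    for group, flagged_match, same_person in records:
        if not same_person:
            non_matches[group] += 1
            if flagged_match:
                flagged[group] += 1
    return {g: flagged[g] / non_matches[g] for g in non_matches}

print(false_positive_rates(audit_records))
# {'group_a': 0.333..., 'group_b': 0.666...} - an aggregate error rate would hide the fact
# that one group bears twice the error burden, which is why tests must be run on the
# system actually deployed and reported per group.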

Case study: when live facial recognition crossed the line

In 2017, the streets of South Wales became an unwitting testbed for experimental policing technology. Over a two-year period, South Wales Police conducted more than 60 live facial recognition (LFR) trials, scanning an estimated 500,000 people’s faces as they walked through shopping centres, attended sporting events, or simply went about their day. No notices warned the public. No consent was sought.

For one local resident, the experience became more than a passing inconvenience. Flagged as a “potential match” while attending a public event, he was stopped by officers, questioned and briefly detained, only to be told later that the system had simply misidentified him. The error was never publicly recorded. He had no opportunity to challenge the match or understand how his biometric data had been processed.

The courts eventually weighed in. In a landmark ruling, the UK Court of Appeal found the deployment unlawful: the automated scanning of every passer-by constituted a violation of the right to privacy, and the police had failed to demonstrate that the system did not discriminate against women and minority ethnic groups. Their equality duty had not been met.

Meanwhile in London, the Metropolitan Police were running similar trials at major public events including the Notting Hill Carnival, Remembrance Day ceremonies, and busy shopping districts such as Stratford Westfield. Independent academic evaluations by the University of Essex and the University of Cambridge found systemic issues: weak transparency, inadequate privacy safeguards, and deployments that failed to meet even minimum ethical and legal standards. These analyses signalled possible violations of privacy, data protection, non-discrimination, and even access to information rights.

What unites these cases is not only their legal significance but the quiet normalisation of mass biometric surveillance. Ordinary people, going about ordinary activities, found themselves transformed into data points - scanned, logged, flagged - by systems later shown to be inaccurate, biased, and procedurally flawed.

Together, these cases reveal a core truth at the heart of emerging AI governance: once deployed, high-risk AI systems reshape public life long before democratic safeguards catch up.

Find out more about these and other cases in Professor Yeung’s recent research paper.

Turning ‘motherhood and apple pie’ principles into effective safeguards

The flurry of laws, bills and safety summits shows that governments and regulatory agencies are aware of how AI use can harm the public. But the challenge, according to Yeung, is turning vague ‘motherhood and apple pie’ principles like fairness and transparency into concrete, operational and legally enforceable safeguards.

Challenges arise when seeking to translate high-level principles into operational standards that companies adhere to. For instance, the European Union’s AI Act laid out essential safety requirements for so-called ‘high-risk’ AI applications, but the task of operationalising these was delegated to European standards bodies, which are independent of governments.

Once those bodies produce technical standards that the European Commission judges to give effect to the Act, those standards achieve ‘harmonised’ European standard status. Firms that voluntarily comply with them are then presumed to comply with the law. “But this does not mean there has been actual compliance with the law - it simply gives them the legal authority to place their product on the European market,” says Yeung.

How harmonised European AI standards gain legal force

Harmonised standards play a crucial but often misunderstood role in how the EU AI Act is applied in practice, shaping when and how high-risk AI systems are treated as legally compliant. The process typically unfolds as follows:

1) Technical drafting (JTC/Working Groups)

Expert committees (often industry-heavy) draft technical standards intended to operationalise the essential requirements for 'high-risk' AI applications.

2) Consensus and national review

Standards text is negotiated and reviewed by national standards bodies for alignment and feasibility.

3) Commission review and approval

Once the European Commission endorses the agreed standards, they are given Harmonised European Standard (HES) status.

4) Legal presumption

Under the EU AI Act, providers who comply with HES obtain a presumption of legal compliance with relevant requirements.

5) Market access (without proof of safety)

The presumption enables market placement of 'high-risk' systems; however, a presumption is not a guarantee that the system is in fact safe or rights-compliant.

The process used to forge these standards is unequal and flawed, according to Yeung, who participated (on behalf of European equality bodies) in the CEN-CENELEC JTC 21 technical committee, which is in the process of translating the AI Act’s essential requirements into technical standards. “Technical standard-setting is dominated by industrial interests, because participation in technical standards drafting [in Europe] is done through expert voluntary participation,” she observes. “And the main organisations that can afford to send their experts are the big tech companies.” Civil society organisations such as trade unions and small business associations can participate but often lack voting rights.


Police forces and state actors are frequently turning to facial recognition technology. Image by Tadas Petrokas.

Human rights by design

Regulatory instruments could be legally mandated to address many of the dangers that AI systems pose to democracy, human rights and the rule of law. One priority is ethically, legally and epistemically responsible real-world testing of high-stakes AI systems before they are deployed. This issue has been neglected even by human rights lawyers and civil society groups, who have tended to focus on paper-based impact assessments rather than demanding evidence that such systems are fit for purpose and subject to effective guardrails to prevent harm.

As Yeung argues in relation to facial recognition systems, “We need a set of practicable methods and guardrails around these things before we allow them to be deployed at scale on an unsuspecting public who don't know what is going on.”

In contrast to “the Trumpian deregulatory agenda” that celebrates ‘unleashing AI’ without regard to its adverse impacts on the lives of people, communities and democracy, such an approach requires specific regulations to ensure accountability, operationalised within a human rights framework. Consider Article 9 of the EU AI Act, which requires providers of ‘high risk’ AI systems to establish a risk management system ensuring that an ‘acceptable’ level of risk to health, safety and human rights is maintained throughout the AI lifecycle. Professor Yeung notes that even the best-known algorithmic impact assessment methods do not borrow directly from safety engineering best practice, suggesting they will fall short of Article 9’s requirements.

We need a set of practicable methods and guardrails around these things before we allow them to be deployed at scale on an unsuspecting public who don't know what is going on.

Professor Karen Yeung
Interdisciplinary Professorial Fellow in Law, Ethics and Informatics

Operationalising legal safeguards requires balanced interest-group participation in standard-setting to avoid industry capture and help secure democratic legitimacy. It also requires multi-disciplinary collaboration that bridges the gap between technical and human rights expertise.

These are some of the questions to which Professor Yeung has dedicated her academic career. Her appointments to multiple councils and advisory boards, including most recently as Special Advisor to the UK Parliament’s Joint Committee on Human Rights to assist its inquiry into human rights and the regulation of AI, reflect the urgent need for expertise to bridge the gap between technical systems and human rights. That knowledge will be critical to ensure AI serves the public good rather than undermining the foundations of rights-respecting democratic societies.