Professor Karen Yeung, Birmingham Law School

The University of Birmingham’s Professor Karen Yeung is a member and co-rapporteur of the EU Expert Group on Artificial Intelligence, which has just released its Ethics Guidelines for Trustworthy AI in Europe. These Ethics Guidelines will form the basis of the next steps the European Commission has announced for building trust in artificial intelligence (AI).

Karen Yeung, an Interdisciplinary Professorial Fellow in Law, Ethics and Informatics, is one of the 52 experts appointed in June 2018 to advise the Commission on ethical guidelines for the development of artificial intelligence (AI). The guidelines were announced at the 2019 Digital Day conference in Brussels.

AI has the potential to benefit a wide range of sectors, such as healthcare, energy consumption, car safety, farming, climate change and financial risk management. It can also help to detect fraud and cybersecurity threats, and could enable law enforcement authorities to fight crime more efficiently. Despite these benefits, the emergence of AI applications has been accompanied by rising public anxiety about the dangers and threats associated with these technologies, generating concerns about the future of work and raising challenging legal and ethical questions.

The Commission’s announcement is part of its wider AI strategy, which aims to increase public and private investment to at least €20 billion annually over the next decade, make more data available, foster talent and ensure trust.

Professor Karen Yeung said: “I strongly support the values and objectives of the European Commission’s initiative, and welcome the release of the Ethics Guidelines, particularly their explicit grounding in Europe’s foundational commitments to human rights, democracy and the rule of law. But it is important to recognise that these voluntary guidelines can only be a first step towards ensuring that the technological infrastructure and applications we are building and deploying show due respect for these fundamental values.

“While ethical reasoning and voluntary ethics codes have a valuable and important role to play, the role of law is vital. In particular, international human rights law offers us a broad and well-defined set of universally recognisable principles with the capacity to reach across all circumstances in which our dignity and integrity are threatened. Whereas the violation of an ethical principle is often written off as a matter of regret, because ethics lacks any institutionalised system of enforcement, international human rights law possesses well-developed standards and institutions, as well as a universal framework of safeguards, which command respect and adherence.

“Although the Ethics Guidelines introduce and define the concept of ‘Trustworthy AI’ as AI that complies with the law, respects ethical principles and is based on robust socio-technical systems, they do not examine ‘lawful AI’, the first of these three critical components. The Group’s second deliverable is expected to address this gap, but given the extent and strength of industry resistance to legal constraints, which is reflected in the composition of the Group, it remains to be seen whether the recommendations concerning ‘lawful AI’ that it produces will be capable of delivering on the promised vision.”

Vice-President for the Digital Single Market, Andrus Ansip said: “I welcome the work undertaken by our independent experts. The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”

The EU Commissioner for Digital Economy and Society, Mariya Gabriel, added: “Today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders, including businesses, academia and civil society. We will now put these requirements into practice and at the same time foster an international discussion on human-centric AI.”

The Guidelines rest on four ethical principles, grounded in human rights, which are presented as imperatives for those developing and deploying AI systems.  These are:

  1. Respect for human autonomy
  2. Prevention of harm
  3. Fairness
  4. Explicability

These principles can draw force from their explicit grounding in human rights. Asking high-level ethical questions based on these four principles can be a way for those designing and deploying AI systems to reflect upon the potential ethical and human rights implications of their work. In this way, ethics and human rights can reinforce one another.

The European Commission’s AI strategy consists of a three-step approach: setting-out the key requirements for trustworthy AI, launching a large scale pilot phase for feedback from stakeholders, and working on international consensus building for human-centric AI.    

Four ethical principles and seven essential requirements for achieving Trustworthy AI

On the basis of the four ethical principles, the Guidelines develop seven key requirements for achieving Trustworthy AI. The Guidelines also set out an assessment list intended to help verify the application of each of the seven requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and to foster sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

Large-scale pilot with partners

In summer 2019, the Commission will launch a pilot phase involving a wide range of stakeholders. Companies, public administrations and organisations can sign up to the European AI Alliance and receive a notification when the pilot starts. In addition, members of the High-Level Expert Group on AI will help present and explain the guidelines to relevant stakeholders in the Member States.

Building international consensus for human-centric AI

The Commission also wants to bring this approach to AI ethics to the global stage because technologies, data and algorithms know no borders. To this end, the Commission will strengthen cooperation with like-minded partners such as Japan, Canada and Singapore, and continue to play an active role in international discussions and initiatives, including the G7 and G20. The pilot phase will also involve companies from other countries and international organisations.

Karen Yeung has been actively involved in several technology policy and related initiatives in the UK and worldwide, including those concerned with the governance of AI, one of her key research interests. As well as being a member of the EU’s High-Level Expert Group on Artificial Intelligence (since June 2018), she is a member and rapporteur for the Council of Europe’s Expert Committee on the human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT).

In her research at the University of Birmingham and as rapporteur for the Council of Europe, Karen Yeung has identified a number of threats and other adverse consequences that may arise from AI technologies. In particular, she has found that artificial intelligence not only threatens various human rights, including the rights to freedom of expression and information and to privacy and data protection, but may also threaten the very moral and democratic foundations, or ‘infrastructure’, upon which human rights are anchored.

In her research, Karen Yeung has also highlighted the power imbalance between those who develop and deploy AI technologies and those who interact with and are subject to them. Her research finds that if society wants to take human rights seriously, we must ensure that responsibility for advanced digital technologies and systems rests with those who develop and implement them.

Karen Yeung took up the post of Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at the University of Birmingham in the Birmingham Law School and the School of Computer Science in January 2018. She has also been a Distinguished Visiting Fellow at Melbourne Law School since 2016.

For more information or interview requests, please contact: Hasan Salim Patel, Communications Manager (Arts, Law and Social Sciences) on +44 (0) 121 415 8134 or contact the press office out of hours on +44 (0) 7789 921 165.

Further information

  • Communication: “Building trust in human-centric artificial intelligence”
  • AI ethics guidelines
  • Factsheet: artificial intelligence
  • High-Level Expert Group on AI