Achieving Inclusive & Sustainable Artificial Intelligence

Solutions discussed:

Regulate to ensure the safe and ethical creation and usage of AI

Countries could use the recommendations and frameworks provided by UNESCO, such as the 'Recommendation on the Ethics of Artificial Intelligence' (2021), to develop their policies.

Presented by: Mariagrazia Squicciarini, Chief of Executive Office, Social and Human Sciences Sector, UNESCO, France

Ethics of Artificial Intelligence

Education of the public on the useful applications and potential dangers of AI through schools and public information campaigns

This includes proper education for the educators. Most contemporary teachers will not be as accustomed to using technology and navigating the digital space as their younger students may be. They must be appropriately trained to educate students on technology and AI.

Presented by: Professor John Shawe-Taylor, Director, International Research Centre on Artificial Intelligence

Ethics of Artificial Intelligence

Provide training on the ethical and safe use of AI for all students and employees involved in creating AI systems

AI systems must be developed with human rights and safety as a core principle from the very beginning. This principle should be taught to everyone involved in technology, from early STEM education to the employees working directly on AI systems.

Presented by: Mariagrazia Squicciarini, Chief of Executive Office, Social and Human Sciences Sector, UNESCO, France

Ethics of Artificial Intelligence

Companies should be held accountable for showing how they are making their AI systems human-centric and ethical by design

AI should be designed to improve human life. AI should act as an assistant to human activities, not as a human itself. For every AI system, the people using it should understand how it works and be aware of its potential biases. The decision-making processes of AI systems and algorithms should be transparent, so that inaccuracies in their reasoning can be identified. The conclusions and statements made by AI systems should not be assumed to be the absolute truth; they should be considered critically by humans within the wider cultural and social context.

Presented by: Professor John Shawe-Taylor, Director, International Research Centre on Artificial Intelligence

Humane AI

Creation of policies regulating AI must involve cross-party and all-party discussions

There needs to be scrutiny and deliberation of AI policies by representatives of all parties, to ensure that the best policies can be created. For example, the Institute of AI in the UK prioritises leading cross-party discussions.

Presented by: Emma Wright, Director, Institute of AI, UK

Forum on Emerging Technologies

International collaboration between countries should be used to share best practice on how to use AI for human and societal benefit

The IRCAI has recently released a report showcasing 100 successful AI projects from around the world, which can be used by governments and the private sector to create their own plans for using AI effectively.

Presented by: Professor John Shawe-Taylor, Director, International Research Centre on Artificial Intelligence

IRCAI GLOBAL TOP 100

Policies relating to AI must involve input from the public

For example, the Emerging Technology Charter for London outlines the need for Londoners to have the ability to ask questions about, and scrutinise, AI policies and laws that affect them and their personal data. This transparency between tech companies, local government and the public is crucial for limiting misuse of AI and building trust within the community.

Presented by: Theo Blackwell, Chief Digital Officer for London, Greater London Authority

Emerging Technology Charter 

Discussions and debates on the ethics of AI should be diverse in geographical region and language to reach underrepresented groups

Most discussions on the safe and ethical use of AI are happening in Western liberal democracies. This excludes people who do not speak English, less economically developed countries in the Global South, and countries that are not democracies, such as China. Excluding large parts of the global population who are digital users is counterproductive to creating effective international and national regulations relating to AI.

Presented by: Emma Wright, Director, Institute of AI, UK

All AI system developers must recognise and account for potential biases in the data used by the AI

AI systems have been known to be biased against women because they are trained on data that is biased against women. Companies must ensure that AI does not widen the existing gender gap. Part of the solution is having gender equality in the teams developing the technology, and in management and leadership positions.

Presented by: Mariagrazia Squicciarini, Chief of Executive Office, Social and Human Sciences Sector, UNESCO, France

UNESCO Recommendations

 

Create policies and systems to ensure that the individual has full ownership of their data, rather than tech companies

Individuals should have the right to know who has access to their personal data, and the ability to erase their data.

Presented by: Mariagrazia Squicciarini, Chief of Executive Office, Social and Human Sciences Sector, UNESCO, France

UNESCO Recommendations

AI policies should include measures to manage the carbon and ecological footprint of AI

One of the four measures outlined in the Emerging Technology Charter for London concerns the sustainability of AI systems. This includes energy-efficient design of the physical components of AI, and whether they can be repaired and upcycled within a circular economy.

Presented by: Theo Blackwell, Chief Digital Officer for London, Greater London Authority

Emerging Technology Charter 

UNESCO Recommendations