Seventy-four sets of ethical guidelines for AI have been developed since 2016, bringing the total number of ethics documents currently in circulation to 84. Sources split fairly evenly between government and private industry, each contributing about 22% of the total; the rest come from NGOs, academics and professional associations.
The analysis comes from the Health Ethics and Policy Lab at ETH Zurich; the paper appears in the September issue of Nature Machine Intelligence.
The authors identify 11 ethical themes, listed below in order of how frequently they appear across the documents, and find ‘a global convergence’ around the first five.
THE THEMES
- Transparency, e.g. ‘explainability, interpretability or other acts of communication and disclosure’
- Justice and fairness, e.g. ‘prevention, monitoring or mitigation of unwanted bias and discrimination’
- Non-maleficence, e.g. ‘safety and security or that AI should never cause foreseeable or unintentional harm’
- Responsibility and accountability, e.g. ‘acting with “integrity” and clarifying the attribution of responsibility and legal liability, if possible upfront’
- Privacy, e.g. ‘both as a value to uphold and as a right to be protected… frequently presented in relation to data protection and data security’
- Beneficence, e.g. ‘augmentation of human senses, the promotion of human well-being and flourishing, peace and happiness, the creation of socio-economic opportunities, and economic prosperity’
- Freedom and autonomy, e.g. ‘freedom of expression, or informational self-determination and “privacy-protecting user controls”; others generally promote freedom, empowerment or autonomy’
- Trust, e.g. ‘trustworthy AI research and technology, trustworthy AI developers and organizations, trustworthy “design principles”, or customers’ trust’
- Sustainability, e.g. ‘protecting the environment, improving the planet’s ecosystem and biodiversity, contributing to fairer and more equal societies, and promoting peace’
- Dignity, e.g. ‘intertwined with human rights or otherwise means avoiding harm, forced acceptance, automated classification and unknown human–AI interaction’
- Solidarity, e.g. ‘in relation to the implications of AI for the labour market’
OUR TAKE
- With so many guidelines in circulation, it’s easy for principles to blur together. The authors’ rigorous analysis consolidates the field with authority.
- The study usefully doubles as an index, since each source is linked where it is mentioned.
‘Whereas several sources, predominantly from the private sector, highlight the importance of fostering trust in AI through educational and awareness-raising activities, others contend that trust in AI risks diminishing scrutiny and may undermine certain societal obligations of AI producers’
ABSTRACT
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
AUTHORS
- Anna Jobin
- Marcello Ienca
- Effy Vayena
SEE FULL PAPER Publication (paywalled)