Seventy-four of the 84 AI ethics documents currently in circulation have been issued since 2016. The predominant sources split fairly evenly between private industry and governments, each contributing about 22% of the total, with the remainder coming from NGOs, academics, and professional associations.
The analysis comes in a study by the Health Ethics and Policy Lab at ETH Zurich. Their paper is published in the September issue of Nature Machine Intelligence.
The authors lay out the 11 ethical principles they identified in order of how frequently they appear across the documents, and suggest there is ‘a global convergence’ around the first five:
- Transparency, e.g. ‘explainability, interpretability or other acts of communication and disclosure’
- Justice and fairness, e.g. ‘prevention, monitoring or mitigation of unwanted bias and discrimination’
- Non-maleficence, e.g. ‘safety and security or that AI should never cause foreseeable or unintentional harm’
- Responsibility and accountability, e.g. ‘acting with “integrity” and clarifying the attribution of responsibility and legal liability, if possible upfront’
- Privacy, e.g. ‘both as a value to uphold and as a right to be protected… frequently presented in relation to data protection and data security.’
- Beneficence, e.g. ‘augmentation of human senses, the promotion of human well-being and flourishing, peace and happiness, the creation of socio-economic opportunities, and economic prosperity.’
- Freedom and autonomy, e.g. ‘freedom of expression, or informational self-determination and “privacy-protecting user controls”; others generally promote freedom, empowerment or autonomy.’
- Trust, e.g. ‘trustworthy AI research and technology, trustworthy AI developers and organizations, trustworthy “design principles”, or customers’ trust’
- Sustainability, e.g. ‘protecting the environment, improving the planet’s ecosystem and biodiversity, contributing to fairer and more equal societies, and promoting peace’
- Dignity, e.g. ‘intertwined with human rights or otherwise means avoiding harm, forced acceptance, automated classification and unknown human–AI interaction’
- Solidarity, e.g. ‘in relation to the implications of AI for the labour market.’
- This is a valuable paper. With so many guidelines in circulation it’s easy for principles cited in each to blur together. The rigorous analysis applied by the authors usefully consolidates the field.
- The study doubles as a table of contents for the field: each source document is linked where it is mentioned.
- The study makes clear that ethics for AI is being developed in silos, with each organization pursuing initiatives according to its own work and lens.
‘Whereas several sources, predominantly from the private sector, highlight the importance of fostering trust in AI through educational and awareness-raising activities, others contend that trust in AI risks diminishing scrutiny and may undermine certain societal obligations of AI producers.’
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
- Anna Jobin
- Marcello Ienca
- Effy Vayena
SEE FULL PAPER: Publication (paywalled)