‘Ethics is more a habit or muscle, than a code (in either sense). It’s a way of thinking and reasoning, not a rigid framework. It’s nurtured in real life contexts and built up more like case law than a constitution.’
AI ethics needs recalibrating to be more useful, says Geoff Mulgan, CEO of NESTA, the UK innovation foundation. He is concerned that current initiatives may get in the way of the conversations that need to happen.
He lays out five considerations.
- Context – Abstractions only work up to a point; ethical reasoning must eventually engage with actual situations.
- Events – Focusing on what is being done today rather than on hypotheticals such as ‘the trolley problem.’
- Politicization – Future choices unavoidably will mix ethics with politics, e.g. ‘how to handle very unequal access to tools for human enhancement; how to handle algorithmically supported truth and lies; how to handle huge asymmetries of power.’
- Being self-critical – Looking beyond homilies to things that can be acted upon.
- Being outcome-driven – Focusing less on inputs and more on beneficial uses of AI, for example in health, education, climate and the economy.
He discusses each point in the article.
AI ethics and the limits of code(s)
NESTA | September 16, 2019 | by Geoff Mulgan