‘Ethics is more a habit or muscle, than a code (in either sense). It’s a way of thinking and reasoning, not a rigid framework. It’s nurtured in real life contexts and built up more like case law than a constitution.’

AI ethics needs recalibrating to be more useful, says Geoff Mulgan, CEO of Nesta, the UK innovation foundation. He is concerned that current initiatives may get in the way of conversations that need to happen.

He lays out five considerations.

  1. Context – Abstractions only work up to a point; ethical reasoning must eventually engage with actual situations.
  2. Events – Focusing on what is being done today rather than hypotheticals like ‘the trolley problem.’
  3. Politicization – Future choices unavoidably will mix ethics with politics, e.g. ‘how to handle very unequal access to tools for human enhancement; how to handle algorithmically supported truth and lies; how to handle huge asymmetries of power.’
  4. Being self-critical – Looking beyond homilies to things that can be acted upon.
  5. Being outcome-driven – Focusing less on inputs in favour of beneficial uses of AI, for example in health, education, climate and the economy.

He discusses each point in the article.


AI ethics and the limits of code(s)
Nesta | September 16, 2019 | by Geoff Mulgan