AI ethics and the limits of code(s) | NESTA

‘Ethics is more a habit or muscle, than a code (in either sense). It’s a way of thinking and reasoning, not a rigid framework. It’s nurtured in real life contexts and built up more like case law than a constitution.’

AI ethics needs recalibrating in order to be more useful, says Geoff Mulgan, CEO of NESTA, the UK innovation foundation. He advocates greater ethical fluency and is concerned that current initiatives may get in the way of the conversations that need to happen.

Mulgan puts forward five areas for rethinking:

  1. Ethics involves context and interpretation, not just deduction from codes – abstractions only work up to a point; ethics must ultimately grapple with real situations.
  2. The field needs to attend to live dilemmas – focusing on things being done today rather than hypotheticals like ‘the trolley problem.’
  3. Ethics is often unavoidably political – future choices will likely need to combine ethics and politics, e.g. ‘how to handle very unequal access to tools for human enhancement; how to handle algorithmically supported truth and lies; how to handle huge asymmetries of power.’
  4. The field needs to be more self-critical – looking beyond homilies to things that can be acted upon.
  5. Ethics needs to connect to outcomes – focusing less on inputs and more on beneficial uses of AI, for example in health, education, climate and the economy.

Each point is developed in more detail in his article.


AI ethics and the limits of code(s)
NESTA | September 16, 2019 | by Geoff Mulgan