This is how AI bias really happens—and why it’s so hard to fix | MIT TECHNOLOGY REVIEW


‘Bias can creep in at many stages of the deep-learning process, and the standard practices in computer science aren’t designed to detect it.’

MIT TECHNOLOGY REVIEW provides a helpful primer on the ways bias enters AI systems.

POSSIBLE SOURCES OF BIAS

  • Framing – can be driven by business objectives more than fairness
  • Collecting – underlying data is incomplete or unrepresentative, or reflects existing prejudices
  • Preparing – attributes chosen for analysis can reflect inherent bias
  • Unknown unknowns – difficulties of identifying bias retroactively
  • Imperfect processes – original procedures did not include bias detection/awareness
  • Lack of social context – data collected in one place may not be relevant in another, different communities may have different interpretations of values
  • Definitions of fairness – no agreement on what constitutes ‘fairness’ or how it can be represented mathematically
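The last point is easy to see with a small numerical sketch. Using hypothetical predictions for two groups, the toy example below shows how two common mathematical definitions of fairness (demographic parity and equal opportunity, neither of which is named in the article) can disagree on the very same model outputs:

```python
# Toy illustration with hypothetical data: the same predictions can
# satisfy one fairness definition while violating another.

def positive_rate(preds):
    """Demographic parity compares this rate across groups."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Equal opportunity compares this rate across groups."""
    on_positives = [p for p, y in zip(preds, labels) if y == 1]
    return sum(on_positives) / len(on_positives)

# Model predictions and ground-truth labels for two groups.
preds_a, labels_a = [1, 1, 0, 0], [1, 1, 0, 0]
preds_b, labels_b = [1, 1, 0, 0], [1, 0, 1, 0]

# Demographic parity: both groups get positive predictions at rate 0.5,
# so this criterion is satisfied.
print(positive_rate(preds_a), positive_rate(preds_b))

# Equal opportunity: group A's true positives are all caught (1.0),
# group B's only half (0.5), so this criterion is violated.
print(true_positive_rate(preds_a, labels_a),
      true_positive_rate(preds_b, labels_b))
```

Since the two criteria generally cannot be satisfied simultaneously, choosing a fairness definition is itself a value judgment, not a purely technical step.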

SEE FULL STORY

This is how AI bias really happens—and why it’s so hard to fix
MIT TECHNOLOGY REVIEW | February 4, 2019 | by Karen Hao
