‘Bias can creep in at many stages of the deep-learning process, and the standard practices in computer science aren’t designed to detect it.’

MIT TECHNOLOGY REVIEW provides a helpful primer on how bias enters AI systems. The possible sources it cites:

  • Framing – the problem is framed by business objectives more than fairness
  • Collecting – underlying data is incomplete or unrepresentative, or reflects the outcome of past practice and prejudice
  • Preparing – attributes chosen for analysis can reflect an inherent bias
  • Unknown unknowns – the difficulty of identifying bias retroactively
  • Imperfect processes – original procedures did not include bias detection or awareness
  • Lack of social context – data collected in one place may not be relevant in another, and different communities may interpret values differently
  • Definitions of fairness – no agreement on what constitutes ‘fairness’ or how it can be represented mathematically
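The last point is concrete enough to illustrate. Two widely discussed mathematical criteria are demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates across groups), and a single set of predictions can satisfy one while violating the other. The sketch below uses invented toy data purely for illustration; the group labels, outcomes, and predictions are assumptions, not drawn from the article:

```python
# Hedged sketch: two common fairness criteria can disagree on the same
# predictions. Toy data (hypothetical): group membership, true outcome,
# and a model's prediction for eight individuals.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [1,   1,   0,   0,   1,   1,   0,   0]
y_pred = [1,   1,   0,   0,   0,   0,   1,   1]

def selection_rate(g):
    """P(prediction = 1) within group g -- demographic parity compares these."""
    preds = [p for grp, p in zip(groups, y_pred) if grp == g]
    return sum(preds) / len(preds)

def true_positive_rate(g):
    """P(prediction = 1 | outcome = 1) within group g -- equal opportunity
    compares these."""
    hits = [p for grp, t, p in zip(groups, y_true, y_pred) if grp == g and t == 1]
    return sum(hits) / len(hits)

# Demographic parity gap: difference in selection rates between groups.
dp_gap = abs(selection_rate("A") - selection_rate("B"))
# Equal opportunity gap: difference in true-positive rates between groups.
eo_gap = abs(true_positive_rate("A") - true_positive_rate("B"))

print(dp_gap, eo_gap)  # -> 0.0 1.0
```

Here both groups are selected at the same rate (demographic parity is perfectly satisfied), yet qualified members of group B are never selected (equal opportunity is maximally violated) -- which is why ‘no agreement on fairness’ is a genuine obstacle rather than a technicality.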


This is how AI bias really happens—and why it’s so hard to fix
MIT TECHNOLOGY REVIEW | February 4, 2019 | by Karen Hao