ABSTRACT
This paper introduces the idea of ‘algorithmic exclusion’ as a source of persistent inequality. Algorithmic exclusion refers to outcomes where people are excluded from algorithmic processing, meaning that the algorithm cannot make a prediction about them. This occurs because the conditions that produce societal inequality can also produce bad or missing data, rendering algorithms unable to make successful predictions. The paper argues that algorithmic exclusion is widespread and that its consequences are significant.
When data is the diet for algorithmic prediction, accuracy depends on more than the size of the training set. Several papers examine the difficulties of algorithmic bias, often tracing it to pre-existing bias in the source material. Catherine Tucker argues for attention in the opposite direction: not to the data in the models, but to the people whose data are left out, as the sketch below illustrates.
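To make the mechanism concrete, here is a minimal Python sketch of a credit-scoring pipeline. It is not drawn from the paper; the data, column names, and model choice are all hypothetical. The point is only that a routine workflow silently drops anyone with missing features, so the thinnest records never receive a prediction at all.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical loan applicants. Applicant 2 has no credit history,
    # so two features (and the outcome) are missing entirely.
    applicants = pd.DataFrame({
        "income":        [52_000, 48_000, 61_000, 39_000],
        "years_credit":  [12.0,   8.0,    None,   5.0],
        "past_defaults": [0.0,    1.0,    None,   0.0],
        "approved":      [1,      0,      None,   1],
    })

    features = ["income", "years_credit", "past_defaults"]

    # A routine preprocessing step: keep only fully observed rows.
    train = applicants.dropna()
    model = LogisticRegression().fit(train[features], train["approved"])

    # The same filter applies at prediction time, so the applicant
    # with missing data is excluded: the model never scores them.
    scorable = applicants.dropna(subset=features)
    excluded = applicants.index.difference(scorable.index)
    print("scored applicants:  ", list(scorable.index))
    print("excluded applicants:", list(excluded))   # -> [2]

The rows that survive the filter receive scores; the applicant with no credit history receives nothing, not even a wrong answer. Because thin or missing records tend to belong to people who are already disadvantaged, this is how exclusion can compound inequality.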
Catherine Tucker is the Sloan Distinguished Professor of Management Science at the MIT Sloan School of Management. The paper is published by the Brookings Center on Regulation and Markets.
“. . . we need to stop focusing just on outputs and process, and should also consider missing inputs and missing outputs in algorithmic policy.”
Catherine Tucker
Algorithmic Exclusion: The Fragility of Algorithms to Sparse and Missing Data
BROOKINGS CENTER ON REGULATION AND MARKETS | February 2, 2023 | by Catherine Tucker