Algorithms are playing an increasingly important role in many areas of public policy, from forecasting elections to predicting criminal recidivism. In most of these cases, the algorithms are viewed as additional tools for use by judges, analysts and policy-makers – a form of hybrid decision-making. But such hybridization relies on the trust that both policy-makers and the public place in these algorithms. This paper reports the results of a series of experiments on individual trust in algorithms for forecasting political events and criminal recidivism. We find that people are quite trusting of algorithms relative to other sources of advice, even with minimal information about the algorithm or when they are explicitly told that humans are just as good at the task. Using a conjoint experiment, we evaluate the factors that influence people’s preferences for these algorithms, finding that several factors of common concern to scholars are of little concern to the public.

There is an ‘urgent need’ for more public awareness of algorithms, this research team reports after conducting six studies of public attitudes. The studies examined algorithms for political forecasting as well as those predicting criminal recidivism.

Among their findings:

  • People tend to trust algorithms relative to other sources of advice.
  • In several areas, public concerns are less pronounced than scholarly concerns.

‘We find a stunning degree of trust in algorithms relative to other sources across all of the experiments.’


  • Ryan Kennedy – University of Houston – Department of Political Science
  • Philip Waggoner – University of Chicago
  • Matthew Ward – University of Houston

SEE FULL PAPER From repository (free download)

Kennedy, Ryan, Philip Waggoner, and Matthew Ward (2018), ‘Trust in Public Policy Algorithms’