ABSTRACT

Machines are increasingly aiding or replacing humans in journalistic work, primarily in news distribution. We examined whether news recommendation engines contribute to filter bubbles and fragmented news audiences by asking a diverse set of real-world participants (N = 168), using their personal Google accounts, to search Google News for news about Hillary Clinton and Donald Trump during the 2016 U.S. presidential campaign and report the first five stories they were recommended on each candidate. Users with different political leanings from different states were recommended very similar news, challenging the assumption that algorithms necessarily encourage echo chambers. Yet we also found a very high degree of homogeneity and concentration in the news recommendations. On average, the five most recommended news organizations comprised 69% of all recommendations. Five news organizations alone accounted for 49% of the total number of recommendations collected. Out of 14 organizations that dominated recommendations across the different searches, only three were born digital, indicating that the news agenda constructed on Google News replicates traditional industry structures more than disrupts them. We use these findings to explore the challenges of studying machine behavior in news from a normative perspective, given the lack of agreed-upon normative standards for humans as news gatekeepers. This article suggests that because there is no one agreed-upon standard for humans as news gatekeepers, assessing the performance of machines in that role is doubly complicated.

In this paper, Efrat Nechushtai and Seth Lewis test the filter bubble hypothesis by examining the choices made by news recommendation algorithms. They studied automated story selection by Google News during the 2016 U.S. presidential election.

They found liberals and conservatives were shown the same stories 99.9% of the time.

Their findings challenge the idea that algorithms skew results based on the inferred preferences or beliefs of the reader, creating so-called filter bubbles. Put differently, the study suggests that if media are reinforcing pre-existing ‘left’ or ‘right’ beliefs, they are doing so while showing the same accounts of events to both groups.

The study also examined the diversity of sources. Almost half (49%) of the stories shown to both groups came from just five news organizations.
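As an illustration of how such a concentration figure can be derived, the sketch below (in Python, using invented outlet names and counts rather than the study's data) tallies recommended stories by outlet and computes the share captured by the five most recommended organizations.

    from collections import Counter

    # Hypothetical example: each entry is the outlet behind one recommended story,
    # standing in for the recommendations participants reported in the study.
    recommendations = [
        "CNN", "New York Times", "CNN", "Washington Post", "Fox News",
        "CNN", "Politico", "New York Times", "Washington Post", "CNN",
        "Huffington Post", "Fox News", "New York Times", "CNN", "Politico",
    ]

    counts = Counter(recommendations)
    top_five = counts.most_common(5)

    # Share of all recommendations captured by the five most recommended outlets.
    top_five_share = sum(n for _, n in top_five) / len(recommendations)

    print(top_five)
    print(f"Top-five share: {top_five_share:.0%}")

The same tallying, applied to the roughly 1,600 recommendations collected in the study, yields the 49% and 69% concentration figures reported above.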

They observe that algorithmic selection choices are hard to assess because there are no agreed-upon objective standards for how human editors select stories either.

AUTHORS

  • Efrat Nechushtai is at Columbia University Graduate School of Journalism.
  • Seth Lewis is at the School of Journalism and Communication, University of Oregon.

SEE FULL PAPER

From publisher [free]

Nechushtai, E., & Lewis, S. C. (2019). ‘What kind of news gatekeepers do we want machines to be? Filter bubbles, fragmentation, and the normative dimensions of algorithmic recommendations.’ Computers in Human Behavior, 90, 298-307.
DOI: https://doi.org/10.1016/j.chb.2018.07.043