Seth Lewis and his co-authors examine who bears culpability when harms result from news stories generated by algorithms.
They conduct their analysis under US libel law, assessing how fault may be determined when algorithms choose what goes into a story. They also point out that the libel defences available to newsrooms may differ from those available to Google and others who use algorithms in news delivery.
- Seth Lewis, School of Journalism and Communication, University of Oregon.
- Amy Kristin Sanders, Northwestern University in Qatar, Doha.
- Casey Carmody, University of Minnesota, Minneapolis, USA.
‘The rise of automated journalism—the algorithmically driven conversion of structured data into news stories—presents a range of potentialities and pitfalls for news organizations. Chief among the potential legal hazards is one issue that has yet to be explored in journalism studies: the possibility that algorithms could produce libelous news content. Although the scenario may seem far-fetched, a review of legal cases involving algorithms and libel suggests that news organizations must seriously consider legal liability as they develop and deploy newswriting bots. Drawing on the American libel law framework, we outline two key issues to consider: (a) the complicated matter of determining fault in a case of algorithm-based libel, and (b) the inability of news organizations to adopt defenses similar to those used by Google and other providers of algorithmic content. These concerns are discussed in light of broader trends of automation and artificial intelligence in the media and information environment.’