Complex AI models such as neural networks are notorious for working as “black boxes”: it is hard to decipher how they reach their conclusions from vast datasets. That is why experts are now advocating for explainability — in other words, the ability to understand the logic behind an AI model’s decisions.

– Lakshmi Sivadas & Sabrina Argoub

Polis, the journalism think-tank at the London School of Economics, pulls together prominent themes from its ongoing examination of journalism and AI systems. Drawing on previous posts, it points to opportunities for smaller newsrooms, to augmenting human work rather than replacing it with machines, to accountability for machine-driven writing, and to the need for more diversity.


AI is in the news. 7 thoughts to reflect on what that means
POLIS | February 14, 2023 | by Lakshmi Sivadas & Sabrina Argoub