“The idea is not to teach the AI the right answer to every question — an impossible feat. The goal would be to teach it to identify patterns in the petabytes of internet content that are associated with accuracy and inaccuracy.” – Ben Smith
Could AI systems fact-check themselves in real time? That question goes to the heart of the uncertainty surrounding writing generated by current large language models (LLMs).
False outputs are often called “hallucinations” — fabricated content that cannot be trusted.
Microsoft’s upcoming version of Bing incorporates at least some features meant to signal the trustworthiness of the information it generates.
Semafor observes that NewsGuard ratings appear in some responses from the new Bing, which is powered by a generative AI model. NewsGuard is a journalism tool that rates news sources’ credibility against nine criteria.
Writer Ben Smith comments on the still-elusive goal of AI systems that can assess the truthfulness of their own output faster than humans can.