OpenAI says it’s withholding some of the science behind its text-generating AI model GPT-2. THE GUARDIAN reports that the company is concerned GPT-2’s believable output could be exploited for public deception, including fake reviews, fake news, and other false narratives.
‘The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.’
Other reports appear in:
- FINANCIAL TIMES – ‘Musk-backed AI group delays releasing research over “fake news” fears’ (subscription may be required)
- WIRED – ‘The AI text generator that’s too dangerous to make public’
- AXIOS – ‘Fake News by Robots? Axios Uses AI Program to Write “Not True” Story in Experiment’
- THE VERGE – ‘A step forward in AI text-generation that also spells trouble’