‘The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.’

OpenAI says it is withholding some of the research behind its text-generating AI model, GPT-2. The Guardian reports that the company is concerned the believability of GPT-2’s text could be used for public deception.

Possibilities include fake reviews, fake news, and other false narratives.

Elon Musk was an early investor in OpenAI and has previously warned about the societal consequences of AI development.


New AI fake text generator may be too dangerous to release, say creators
THE GUARDIAN | February 2019 | by Alex Hern


  • FINANCIAL TIMES – ‘Musk-backed AI group delays releasing research over “fake news” fears’ (subscription may be required)
  • WIRED – ‘The AI text generator that’s too dangerous to make public’
  • AXIOS – ‘Fake News by Robots? Axios Uses AI Program to Write “Not True” Story in Experiment’
  • THE VERGE – ‘A step forward in AI text-generation that also spells trouble’