New AI fake text generator may be too dangerous to release, say creators | THE GUARDIAN


OpenAI says it is withholding some of the research behind its text-generating AI model GPT-2. THE GUARDIAN reports that the company is concerned the believability of GPT-2’s output could make it a tool for public deception, including fake reviews, fake news, and other false narratives.

Industrialist Elon Musk was an early investor in OpenAI, and he has previously warned about the societal consequences of AI development.

‘The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.’

SEE FULL STORY

New AI fake text generator may be too dangerous to release, say creators
THE GUARDIAN | February 2019 | by Alex Hern

Other reports appear in:

  • FINANCIAL TIMES – ‘Musk-backed AI group delays releasing research over “fake news” fears’ (subscription may be required)
  • WIRED – ‘The AI text generator that’s too dangerous to make public’
  • AXIOS – ‘Fake News by Robots? Axios Uses AI Program to Write “Not True” Story in Experiment’
  • THE VERGE – ‘A step forward in AI text-generation that also spells trouble’
