AI text generator GPT-2 is now fully available. The complete code and associated data were released by OpenAI, the California AI lab that created the model. The algorithm generates text by extrapolating from a prompt phrase or sentence.
You can try out the full-strength version on an independent website, TalkToTransformer.com.
- OpenAI made headlines for GPT-2 in February when its initial release withheld much of the underlying data. The company said the model was ‘too dangerous’ to make available in its entirety. Subsequent releases made progressively larger versions of the model available.
- Concern centred on bad actors using GPT-2 to generate convincing deceitful narratives, such as faking news stories or flooding websites with adverse reviews.
- Early reviews of GPT-2 storytelling are mixed. Some commentators say the text is convincing. Others focus on how coherence deteriorates as more words are generated.
A study at Cornell University gives the full model a credibility score of 6.91 out of a possible 10 points. OpenAI says its decision to release the complete version was influenced by the fact that this score did not change significantly from the previous, smaller release.
- The company reports ‘no strong evidence of misuse so far,’ but notes that ‘humans find GPT-2 outputs convincing.’
- Still, it warns that ‘GPT-2 can be fine-tuned for misuse’ and that the text it creates can be ‘challenging’ to detect.
- OpenAI has created its own detection model to combat malicious uses of GPT-2. ZDNET and THE VERGE report the counter-model has reached about 95% effectiveness. OpenAI continues to encourage increased human vigilance regarding possible deceptions.
The original decision to phase the release of GPT-2 split opinion in the AI research community. Nine months elapsed between initial and full release.
- Some argued it was a publicity stunt, counter to conventional practices of releasing source materials to assist research efforts by others.
- Others believed it was responsible practice given the potential for abuse and its consequences.
- Either way, the interval gave OpenAI a head start in studying possible uses of the full model.
SEE RELATED STORIES
- OpenAI has published the text-generating AI it said was too dangerous to share
THE VERGE | November 7, 2019 | by James Vincent | ‘It’s tricky to convey exactly how good GPT-2’s output is, but the model frequently produces eerily cogent writing that can often give the appearance of intelligence (though that’s not to say what GPT-2 is doing involves anything we’d recognize as cognition).’
- This text-generation algorithm is supposedly so good it’s frightening. Judge for yourself.
NIEMAN LAB | November 7, 2019 | by Joshua Benton | ‘GPT-2 is not coming to take the jobs of journalists, as some have worried. Paid reporting jobs generally require a certain level of factuality that the algorithm can’t match.’
- Turns Out Elon Musk-Backed OpenAI’s Text Generator Is More Funny Than Dangerous, For Now
GIZMODO | November 7, 2019 | ‘While the GPT-2 model does generate comprehensible text in a reasonably correct tone and style, and it’s easy to imagine situations in which it could be misused… This is real “your mileage may vary” territory.’
- This news article about the full public release of OpenAI’s ‘dangerous’ GPT-2 model was part written by GPT-2
THE REGISTER | November 6, 2019 | by Katyanna Quach | ‘Occasionally, it spits out sentences that are surprisingly good, but as it keeps churning out text, it becomes incoherent.’
- OpenAI’s ‘dangerous’ AI text generator is out: People find GPT-2’s words ‘convincing’
ZDNET | November 6, 2019 | by Liam Tung | ‘According to OpenAI, humans find output from the 1.5-billion parameter GPT-2 model “convincing”, but only marginally more so than the 774-million model it released in August.’
- GPT-2: 1.5B Release
OPENAI BLOG | November 5, 2019 | ‘Our experience with GPT-2 over the past 9 months has given us valuable insight into the challenges and opportunities for creating responsible publication norms in AI.’