VOX tests the latest version of GPT-2, the controversial text-generating algorithm, and now you can too. A Toronto machine learning engineer has put together a simple website so anyone can try the GPT-2 ‘writing’ algorithm using its most recently updated model.
GPT-2 caused a stir when it was first unveiled in February.
- Its developer, OpenAI, said it was withholding the full model for fear it could be misused to spread false information.
- Reactions were mixed: some thought the decision suitably cautious, while others called it a publicity stunt.
- The latest ‘medium-sized’ model was released in early May.
The VOX piece shows a couple of passages of poetry generated by GPT-2.
How you can try GPT-2
To use GPT-2, you start with an incomplete sentence or phrase and the algorithm takes it from there. That’s literally how it works: unlike other text-generating systems, GPT-2 uses the first few words as a prompt, setting the subject matter and style. From there it predicts what word comes next, word after word, rapidly, to form a coherent text. Except none of it is real.
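To make the word-by-word idea concrete, here is a minimal sketch of prompt-conditioned generation. It is not the code behind the try-for-yourself site; it assumes the publicly released ‘gpt2-medium’ weights served through the Hugging Face `transformers` library, and the sampling settings are illustrative choices, not OpenAI’s.

```python
# Minimal sketch: generate a continuation from a short prompt, one token at a time.
# Assumes the public 'gpt2-medium' weights via Hugging Face `transformers`.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

prompt = "A Toronto machine learning engineer has put together a simple website"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts the next word given everything written so far.
output = model.generate(
    input_ids,
    max_length=100,      # stop after roughly a paragraph
    do_sample=True,      # sample from the predicted distribution, not just the top word
    top_k=40,            # illustrative sampling settings, not OpenAI's
    temperature=0.8,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```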
The try-for-yourself site is unaffiliated with OpenAI and was developed by machine learning engineer Adam King. Here’s the link to enter your own prompt.
Our trial
My first try produced something more prosaic (below). My prompt is in bold type, the GPT-2 text in italics…
journalismAi.com tracks advances in artificial intelligence and what they mean for journalism. It also discredits the notion that AI will lead to dereference, abuse, suppression, censorship, and bias in news coverage.
But to many of us, AI’s impact on news is the source of all our worries. AI has the potential to make our lives easier, but without some form of human oversight, the future of journalism will be clouded in uncertainty. We are living in an age when AI can, if we allow it to, make journalism more effective in ways we can’t even imagine today. AI technology has the potential to make news reporting a reality, but for us now, the threat won’t materialize on the same level that it did in the past, and the risks it poses are so remote we probably shouldn’t even give it much thought right now.
The biggest concern is the possibility that algorithms will become even less effective than human reporters over a long time.
When we ask a question, our language changes to respond to the context we are asked about.
When I tried again with the same prompt, GPT-2 produced a different result every time. None of the words came from this website, and that is the GPT-2 difference. The model is not scraping and remixing existing material, nor drawing data in its raw form from a structured source. It appears to be generating new words ‘on the fly.’
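That variability follows from how the text is produced: each next word is sampled from a probability distribution, so rerunning the same prompt yields a new passage. A short, self-contained sketch (same assumptions as above: public ‘gpt2-medium’ weights via Hugging Face `transformers`) shows the effect.

```python
# Self-contained sketch: the same prompt produces a different passage on every run,
# because each next word is sampled rather than retrieved from stored text.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

prompt_ids = tokenizer.encode(
    "journalismAi.com tracks advances in artificial intelligence", return_tensors="pt"
)

for run in range(3):
    out = model.generate(prompt_ids, max_length=60, do_sample=True, top_k=40)
    print(f"--- run {run + 1} ---")
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```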
Our take
- GPT-2 is impressive but unconvincing. Then again, ‘convincing’ isn’t the point; the point is to show what’s possible at this stage of development.
- The more impressive results come from two objective measures of progress. One scores ambiguity: computers have difficulty with sentences like ‘The trophy doesn’t fit in the brown suitcase because it’s too big,’ which humans almost always understand. OpenAI’s paper says the pre-GPT-2 state of the art interpreted such sentences correctly 63.7% of the time; GPT-2 scores 70.7%. The second test measures short-term recall of context within a passage, for example when a word like ‘it’ or ‘she’ refers back to something a few paragraphs earlier. Previous systems managed 56.25% accuracy; GPT-2 scores 63.24%. (A sketch of how this kind of ambiguity scoring works follows this list.)
- A glass-half-empty interpretation is there’s a long way to go to match human understanding. Half-fullers will see both results as meaningful advances.
- Either way, GPT-2 is worth tracking as an indicator of how close machines are to achieving creative writing.
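For readers curious how the ambiguity score mentioned above is obtained, here is a hedged sketch of one common approach (not OpenAI’s evaluation code): substitute each candidate noun for the ambiguous pronoun and ask which completed sentence the model finds more likely. It again assumes the public ‘gpt2-medium’ weights via Hugging Face `transformers` and PyTorch.

```python
# Sketch of Winograd-style ambiguity scoring: the model 'gets it right' if it
# assigns higher probability to the reading humans choose (the trophy is too big).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")
model.eval()

def avg_loss(text):
    """Average per-token cross-entropy; lower means the model finds the text more likely."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

trophy = "The trophy doesn't fit in the brown suitcase because the trophy is too big."
suitcase = "The trophy doesn't fit in the brown suitcase because the suitcase is too big."

answer = "trophy" if avg_loss(trophy) < avg_loss(suitcase) else "suitcase"
print("Model resolves 'it' to the", answer)
```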
A poetry-writing AI has just been unveiled. It’s … pretty good.
VOX | May 15, 2019 | by Kelsey Piper