While it is impossible to reliably detect all AI-written text, we believe good classifiers can inform mitigations for false claims that AI-generated text was written by a human: for example, running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human.

– OpenAI

ChatGPT has a new companion model that tells you when it is the author of the text you're reading. The model was developed by OpenAI, the company behind ChatGPT and the GPT line of large language models (LLMs).

OpenAI says the classifier also works on text from other AI generators.

“Our classifier is not fully reliable,” the company says prominently, listing nine known limitations. OpenAI cautions that the model should not be used as the final authority.

The classifier correctly flags AI-written text 26% of the time (true positives) and mislabels human-written text as AI-written 9% of the time (false positives). OpenAI says reliability improves on longer passages.
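To see what those two figures measure, here is a minimal Python sketch that computes a true positive rate and a false positive rate from a labeled evaluation set. The function and the sample data are hypothetical illustrations, not OpenAI's evaluation code or results.

# Illustrative only: how rates like the ones OpenAI reports could be
# computed from a labeled evaluation set. Labels and predictions below
# are hypothetical, not OpenAI's data.

def classification_rates(true_labels, predicted_labels):
    # Positive class means "AI-written".
    ai_total = sum(1 for t in true_labels if t == "ai")
    human_total = sum(1 for t in true_labels if t == "human")

    true_positives = sum(
        1 for t, p in zip(true_labels, predicted_labels)
        if t == "ai" and p == "ai"
    )
    false_positives = sum(
        1 for t, p in zip(true_labels, predicted_labels)
        if t == "human" and p == "ai"
    )

    return true_positives / ai_total, false_positives / human_total


# Hypothetical evaluation set: 4 AI-written and 4 human-written samples.
truth = ["ai", "ai", "ai", "ai", "human", "human", "human", "human"]
preds = ["ai", "human", "human", "human", "ai", "human", "human", "human"]

tpr, fpr = classification_rates(truth, preds)
print(f"true positive rate: {tpr:.0%}")   # 25% of AI text caught
print(f"false positive rate: {fpr:.0%}")  # 25% of human text misflagged

In these terms, OpenAI's reported numbers mean roughly one in four AI-written passages gets caught, while about one in eleven human-written passages is wrongly flagged.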

OpenAI says it is releasing the model, shortcomings and all, to find out whether imperfect help is better than none.

A link to try the new tool is embedded in the announcement.

Updated Feb 1 at 10:27 ET

SEE ANNOUNCEMENT

New AI classifier for indicating AI-written text
OpenAI | January 31, 2023
