OpenAI, the creator of ChatGPT, has unveiled a new tool designed to distinguish human-written text from AI-generated text. Although no detector can achieve 100% accuracy, OpenAI believes its AI Text Classifier can help counter false claims that AI-generated content was written by a human.

The tool is intended to curb malicious uses such as automated misinformation campaigns, academic fraud, and chatbots impersonating humans. In OpenAI's tests on English texts, the classifier correctly flagged AI-generated text only 26% of the time, while incorrectly labeling human-written text as AI-generated 9% of the time.
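A back-of-the-envelope calculation shows what those rates imply in practice. The sketch below is illustrative only (not from OpenAI) and assumes, purely for the sake of the example, that half of the texts being checked are AI-generated:

```python
# Illustrative math on the reported detection rates (assumption: the
# prior share of AI-generated texts is 0.5, chosen only for this example).
true_positive_rate = 0.26   # AI text correctly flagged (reported)
false_positive_rate = 0.09  # human text wrongly flagged (reported)
prior_ai = 0.5              # assumed share of AI-generated texts

# Probability a flagged text is actually AI-generated (Bayes' rule)
p_flagged = true_positive_rate * prior_ai + false_positive_rate * (1 - prior_ai)
p_ai_given_flag = true_positive_rate * prior_ai / p_flagged

# Share of AI-generated text the classifier misses entirely
miss_rate = 1 - true_positive_rate

print(f"P(AI | flagged) = {p_ai_given_flag:.2f}")  # ~0.74
print(f"AI text missed  = {miss_rate:.0%}")        # 74%
```

Under this assumed prior, roughly one in four flagged texts would still be human-written, and nearly three quarters of AI-generated text would slip through unflagged, which underlines why OpenAI presents the tool as a signal rather than proof.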

OpenAI notes that the tool performs better on longer text, so a minimum of 1,000 characters is required for a reliable result.
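Enforcing that length requirement before submitting text is straightforward. The following is a minimal sketch; the 1,000-character threshold comes from OpenAI's stated minimum, while the function name is hypothetical:

```python
MIN_CHARS = 1000  # OpenAI's stated minimum input length for a reliable test

def meets_length_requirement(text: str) -> bool:
    """Check whether a text is long enough for the classifier (hypothetical helper)."""
    return len(text) >= MIN_CHARS

print(meets_length_requirement("Too short."))   # False
print(meets_length_requirement("word " * 300))  # 1,500 characters -> True
```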

The AI Text Classifier is not without limitations. It can misclassify both AI-generated and human-written text, and AI-generated text can evade detection after only minor edits.

The performance of the tool may suffer with text written by children and with text in languages other than English, as it was primarily trained on adult English content. Despite these limitations, it’s worth examining how the classifier performs.

OpenAI cautions that the AI Text Classifier has not yet undergone extensive evaluation of its ability to distinguish AI-generated text from human-written text. While the tool may help identify likely AI-generated content, it should not be relied upon as the sole basis for a determination.