OpenAI takes offline tool that was supposed to recognize AI-generated text
OpenAI has quietly taken offline the tool that was meant to identify text written by AI. The ChatGPT maker says the tool's accuracy was insufficient.
OpenAI still states on the AI Classifier page that it is "processing feedback" and "researching more effective text checking methods." It also stresses that it has "made a commitment to develop and release methods that enable users to tell whether audio or visual content has been generated by AI."
Already at the time of its release, the tool frequently proved to be wrong in practice. On a challenge set of texts the model had not been trained on, it labeled 30 percent of texts written by humans as 'maybe' or 'probably' AI-generated. It was even less accurate on languages other than English and on texts written by children.
Ars Technica's AI editor notes that research also shows that such tools do not work reliably. The website has published a background story on the subject as well.