OpenAI shuts down a tool for recognizing text written by AI due to low accuracy
OpenAI’s AI text classifier has been unavailable since July 20, the company reported in a blog post.
“We are studying feedback and researching more effective methods of text classification,” OpenAI said.
OpenAI admitted that the classifier had a low accuracy rate from the start and often gave false positives, flagging text written by a human as if it had been generated by AI. The company believes that more training data could fix this problem in the future.
At the same time, the ChatGPT developer will focus on mechanisms for determining whether audio or visual content was created by AI or by humans.
OpenAI’s viral chatbot debuted in the fall of 2022, energizing the AI industry and bringing the technology to the masses. Several sectors raised alarms about AI-generated text and images, with teachers worried that students would stop studying and simply let ChatGPT write their homework for them. New York City schools went so far as to ban the chatbot.
Another problem is the tendency of such chatbots to present false information as fact (Google’s Bard chatbot, for example, did exactly that in a promotional video). Governments have not yet figured out how to regulate generative AI and have so far imposed only limited restrictions on its use, while the companies developing it seem equally unsure how to tame the technology and make it behave reliably.