OpenAI has taken down its AI classifier just six months after releasing it, because the tool couldn't reliably determine whether a chunk of text was automatically generated by a large language model or written by a human.

“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy,” the biz said in a short statement added to its January announcement of the online tool. 

“We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated,” the machine-learning lab added.

The classifier was free to use: netizens could copy and paste text into it to check whether the material was likely produced by a computer or a person, which would be useful for judging whether an email, blog post, or essay was actually crafted by a human. Under the hood, it was powered by a large language model that rated how likely the text was to be machine-generated, on a scale running from "very unlikely" through "unclear" to "likely" AI-generated.
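OpenAI never published the cutoffs behind those labels, but the basic mechanics are easy to picture: the model emits a probability that the text is machine-written, and that score is bucketed into a verdict. Here is a minimal Python sketch with entirely hypothetical thresholds, using the five labels the tool displayed:

```python
# Hypothetical sketch: bucket a detector's probability score into the
# verdicts OpenAI's classifier displayed. The thresholds below are
# invented for illustration; OpenAI never published its actual cutoffs.

def bucket_ai_likelihood(p_ai: float) -> str:
    """Map a model-reported probability that text is AI-generated
    to one of the classifier's human-readable labels."""
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    if p_ai < 0.45:
        return "unlikely AI-generated"
    if p_ai < 0.90:
        return "unclear if it is AI-generated"
    if p_ai < 0.98:
        return "possibly AI-generated"
    return "likely AI-generated"

print(bucket_ai_likelihood(0.95))  # -> "possibly AI-generated"
```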

OpenAI warned at the time its AI classifier was “not fully reliable” and admitted it was prone to incorrectly flagging human-written text as machine-written. That’s the same OpenAI that says things like its ChatGPT bot should not be relied upon, but champions its use anyway.

The classifier didn’t work very well on writing that had been AI-generated and edited by humans, and it struggled with prose it hadn’t seen in its training dataset. It was also overconfident in its predictions: “the classifier is sometimes extremely confident in a wrong prediction,” OpenAI said. 

OpenAI launched the AI classifier amid growing fears that students were using machine-generated text to write essays and complete homework. At launch, OpenAI urged educators not to take the model's predictions as gospel, but to use them as a guide complementing "other methods of determining the source of a piece of text."

Trying to accurately classify AI text is proving difficult. Similar tools built by other developers and companies are also unreliable, and have had real repercussions for students' education. An instructor at Texas A&M University-Commerce in the United States made headlines when he withheld some students' grades after ChatGPT claimed their essays were AI-generated. The university has since reinstated the students' scores.

Meanwhile, AI-detection software built by Turnitin has been rolled out at schools and universities to catch machine-written coursework with a claimed "98 percent confidence," though it's not clear how accurate it truly is. A study by computer scientists at the University of Maryland suggested the best available classifiers spot machine-written text with odds not much better than a coin toss.

OpenAI is still working to solve this tricky problem. Last week, the Microsoft-bankrolled lab pledged to develop digital watermarks for AI-generated content as part of its promise to the Biden-Harris administration to help make next-gen machine-learning technology safe to use.
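OpenAI hasn't said how those watermarks will work. As a general illustration, one well-known scheme from the research literature has the generator favor a pseudorandomly chosen "green list" of tokens, so a detector holding a shared key can test whether a text uses green tokens far more often than chance. Below is a toy sketch of the detection side, with the key, hashing choice, and threshold all hypothetical, and no claim that this matches OpenAI's eventual design:

```python
import hashlib

# Toy "green list" watermark detector, in the style of published research
# on statistical text watermarks -- not OpenAI's (unannounced) design.
SECRET_KEY = b"shared-secret"  # hypothetical key known to the detector

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign roughly half of all tokens to the green
    list, seeded by the preceding token plus the secret key."""
    digest = hashlib.sha256(
        SECRET_KEY + prev_token.encode() + token.encode()
    ).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of adjacent token pairs landing on the green list."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked text should score near 0.5; a watermarked generator that
# consistently favored green tokens should push the score well above it.
sample = "the quick brown fox jumps over the lazy dog".split()
print(green_fraction(sample) > 0.65)  # likely False for ordinary prose
```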

The Register asked OpenAI for further comment and any predicted release date for a new build of the classifier. ®


