The use of large language models (LLMs) is skyrocketing, and with good reason: they're really good. Over the last two weeks, ChatGPT has become my favorite tool. At work, I asked it how to build an obscure piece of Linux software against a modern kernel, and it told me how. It even generated code blocks with the bash commands needed to complete the task. I also asked it to do all sorts of silly things. For instance, it generated a fictional resume for Hulk Hogan where he has no previous IT experience but wants to transition into a role as an Azure Cloud Engineer. It did that, too, and it was hilarious. In fact, it's so good that it can generate articulate and convincing papers for your college coursework. Because of this, there is now a need for systems that detect machine-generated text.
Recently, a team of researchers at Stanford proposed a new method called DetectGPT, which aims to be among the first tools to combat generated text in higher education. The method is based on the idea that text generated by LLMs tends to occupy regions of negative curvature of the model's log probability function. Roughly speaking, a generated passage sits near a local maximum, so minor rewrites of it almost always score lower under the model than the original. Based on this insight, the team developed a new barometer for judging whether text is machine-generated that doesn't rely on training an AI or collecting large datasets to compare the text against. We can only guess this means human-written text occupies regions of positive curvature, but the source is not clear on this.
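To make that a little more concrete, here is a minimal sketch of the perturbation test as I understand it. This is not the authors' implementation: it uses GPT-2 as a stand-in scoring model and a crude random word swap in place of the mask-and-refill perturbations the paper uses, and the sample passage is just a placeholder.

```python
# Minimal sketch of the perturbation-discrepancy idea (not the authors' code).
# Assumptions: GPT-2 stands in for the scoring model, and a crude random word
# swap stands in for the mask-and-refill perturbations used in the paper.
import random
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_prob(text: str) -> float:
    """Average per-token log probability of `text` under the scoring model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels set, the model returns the mean negative log-likelihood.
        loss = model(ids, labels=ids).loss
    return -loss.item()

def perturb(text: str) -> str:
    """Crude stand-in for mask-and-refill: swap two random words."""
    words = text.split()
    i, j = random.sample(range(len(words)), 2)
    words[i], words[j] = words[j], words[i]
    return " ".join(words)

def perturbation_discrepancy(text: str, n_perturbations: int = 20) -> float:
    """log p(original) minus the mean log p of its perturbations. Text sitting
    at a local maximum (negative curvature) scores higher than its neighbors."""
    original = avg_log_prob(text)
    perturbed = [avg_log_prob(perturb(text)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)

# A large positive discrepancy suggests the passage is machine-generated.
suspect_passage = "Large language models are transforming how software is written. " * 4
print(perturbation_discrepancy(suspect_passage))
```

The real method is more careful about how passages are perturbed and how the discrepancy is thresholded, but this is the general shape of the test.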
This method is "zero-shot" in the sense that DetectGPT can flag machine-written text without any training examples of generated text; it works straight from a model's own log probabilities. It operates in stark contrast to other methods, which require training "classifiers" on datasets of real and generated passages.
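For contrast, the classifier-style alternative looks roughly like the sketch below: a supervised model that can only be as good as the labeled dataset of real and generated passages you feed it. Everything here is a placeholder to show the dependency, not anything from the paper.

```python
# Sketch of the classifier-style approach mentioned above, which needs a labeled
# dataset of human and machine passages. All passages here are placeholders,
# included only to show the data requirement that the zero-shot approach avoids.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_passages = ["A passage written by a person ...", "Another human-written passage ..."]
machine_passages = ["A passage sampled from an LLM ...", "Another generated passage ..."]

texts = human_passages + machine_passages
labels = [0] * len(human_passages) + [1] * len(machine_passages)  # 1 = machine-generated

# The detector is only as good as this labeled dataset, which is exactly the
# requirement the zero-shot approach does away with.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict_proba(["Some new passage to score ..."])[:, 1])
```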
The team tested DetectGPT on a dataset of fake news articles (presumably anything that came out of CNET over the last year) and it outperformed other zero-shot methods for detecting machine-generated text. Specifically, for fake news articles generated by the 20B-parameter GPT-NeoX, detection improved from 0.81 AUROC for the strongest existing zero-shot baseline to 0.95 AUROC for DetectGPT. Honestly, this is all French to me, but it points to a substantial improvement in detection performance and suggests that DetectGPT may be a promising way to scrutinize machine-generated text moving forward.
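If, like me, AUROC isn't part of your daily vocabulary: it's the area under the ROC curve, a measure of how well a detector's scores separate the two classes across every possible threshold, where 1.0 is perfect and 0.5 is a coin flip. Here is a toy example with made-up numbers (not data from the paper):

```python
# Toy illustration of AUROC with made-up scores (not data from the paper).
from sklearn.metrics import roc_auc_score

labels = [0, 0, 0, 1, 1, 1]                          # 1 = machine-generated, 0 = human-written
detector_scores = [0.10, 0.40, 0.75, 0.80, 0.70, 0.90]

# Closer to 1.0 means the scores separate the classes more cleanly.
print(roc_auc_score(labels, detector_scores))        # about 0.89 for these scores
```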
In summary, DetectGPT is a new method for detecting machine-generated text that leverages the unique characteristics of text generated by LLMs. It is a zero-shot method that does not require any additional data or training, making it an efficient and effective tool for identifying machine-generated text. As the use of LLMs continues to grow, corresponding systems for detecting machine-generated text will become increasingly critical. DetectGPT is a promising approach that could have a significant impact across many fields, and its further development will be worth following.
Source: DetectGPT (ericmitchell.ai)