According to The Wall Street Journal, OpenAI, the company behind ChatGPT, has a system that can detect text generated by its chatbot. The tool has reportedly been ready for about a year, but OpenAI hasn't released it. Many people keep asking why.
Key points
The tool relies on a technique called "watermarking." When ChatGPT generates text, it embeds small, invisible patterns in the output. These patterns act like a coded signal that only OpenAI's detector can read, and the detector is reportedly very good at its job: it can identify ChatGPT-written text with 99.9% accuracy.
So why hide such a tool? There are several reasons. Some people inside OpenAI believe releasing it might discourage people from using ChatGPT. One recent survey found that roughly 30% of ChatGPT users would use the chatbot less if they knew its output carried a detectable watermark.
OpenAI is also worried about false accusations. Some fear the system could wrongly flag human-written text as AI-generated (a false positive), or miss AI-generated text entirely (a false negative).

OpenAI is also weighing how different groups would be affected. The tool could create problems for non-native English speakers who rely on AI language models for writing assignments and other composition work. And there is the concern that determined users who understand how the system works could strip out or mask the watermark.
Conversely, there are solid grounds for releasing the tool. Many teachers and schools worry that students use ChatGPT to cheat on assignments and exams, and a reliable detector could help catch those attempts.
The tool could also help in the fight against disinformation and fake news generated by AI systems. OpenAI has said it advocates transparency around its AI technology, and keeping the detector hidden contradicts that stated goal. In a recent OpenAI survey, four times as many respondents favored releasing the tool as opposed it.
Meanwhile, as OpenAI debates what to do, other firms are developing their own AI-detection tools. According to the report, none of them matches the accuracy of OpenAI's unreleased tool.
The decision is not a simple one. OpenAI has to balance honesty and helpfulness against protecting its business, and it must weigh how the detector could be used or misused. For now, the ChatGPT detector remains under wraps. Will OpenAI eventually release it? Only time will tell.