ChatGPT academic cheating may not escape detection: OpenAI's anti-cheating tool could launch soon

The debate within OpenAI over whether to release a tool that detects text generated by ChatGPT has continued for more than two years. The company's leadership has struggled to reach a consensus, weighing transparency and responsible use against the risk of alienating users, which has drawn out the decision. The standoff highlights the ethical trade-offs that run through AI development.

OpenAI has developed a tool to detect AI-generated text that is reportedly 99.9% accurate. Work on it dates back to November 2022, around ChatGPT's launch, but the tool has not been made public.
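OpenAI has not disclosed how its detector works; reports describe it as a statistical watermark applied while the model samples tokens. For intuition only, here is a minimal Python sketch of the "green list" watermarking scheme from the academic literature (Kirchenbauer et al., 2023). This illustrates the general technique, not OpenAI's confirmed design, and the names GAMMA, is_green, and watermark_z_score are illustrative assumptions.

```python
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary placed on the "green list"

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by the
    preceding token -- a toy stand-in for hashing the previous token ID,
    as in the green-list scheme of Kirchenbauer et al. (2023)."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < GAMMA * 256

def watermark_z_score(tokens: list[str]) -> float:
    """z-score of the observed green-token count against the null
    hypothesis of unwatermarked text (green tokens appear at rate GAMMA)."""
    t = len(tokens) - 1  # number of scored (previous, current) pairs
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens - GAMMA * t) / math.sqrt(t * GAMMA * (1 - GAMMA))

# Unwatermarked text should score near 0; a generator that favors green
# tokens would push z well above a detection threshold such as 4.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {watermark_z_score(sample):.2f}")
```

The key property is that detection is purely statistical: nothing is stored per document, only the text and the secret hashing scheme are needed.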

There are two main reasons for the delay:

  1. Technical concerns:
  • The detector's early success rate was only 26%; it has since improved to 99.9%
  • Internal concerns that watermarking could degrade ChatGPT's writing quality
  • Risk of circumvention, such as erasing the watermark through translation (see the sketch after this list)
  2. User preferences:
  • Surveys show only about a quarter of people worldwide support expanded detection tools
  • Nearly 30% of ChatGPT users say they would use it less if watermarking were deployed
  • The watermark could disproportionately affect non-native English speakers
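The translation loophole follows directly from how such a detector works: the statistic depends on the exact token sequence, so translating or paraphrasing the text resamples every token and the green-token rate falls back to chance. A toy demonstration, reusing the hypothetical is_green and watermark_z_score from the sketch above:

```python
import random

def toy_watermarked_text(seed: str, vocab: list[str], length: int) -> list[str]:
    """Greedily pick green-list words so the toy detector fires.
    Purely illustrative: a real system would bias an LLM's sampling
    rather than choosing words at random."""
    tokens = [seed]
    for _ in range(length):
        greens = [w for w in vocab if is_green(tokens[-1], w)]
        tokens.append(random.choice(greens or vocab))
    return tokens

vocab = "alpha beta gamma delta epsilon zeta eta theta".split()
original = toy_watermarked_text("start", vocab, 200)

# Simulate translation or paraphrase: every surface token changes, the
# pseudorandom green/red pattern is destroyed, and z collapses toward 0.
paraphrased = [random.choice(vocab) for _ in original]

print(f"watermarked:  z = {watermark_z_score(original):.1f}")    # large, roughly 14
print(f"'translated': z = {watermark_z_score(paraphrased):.1f}")  # near 0
```

Running this, the watermarked sequence scores a z around 14 while the reworded one hovers near 0, which is why a round trip through a translator could defeat the watermark.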

The tool remains a point of contention inside OpenAI. Supporters argue that releasing it would benefit the broader ecosystem, while opponents worry it would drive users away. For now, the tool remains unreleased.

Besides OpenAI, companies such as Google and Apple are developing similar tools, some of which have entered internal testing but have not officially launched.

Surveys indicate that the most common uses of ChatGPT are writing (21%) and completing homework (18%), which may help explain why users oppose detection technology.

OpenAI plans to settle on a strategy by this fall for shaping public perception of AI transparency, but it has not yet announced specific measures.