
Don't even think of using ChatGPT to churn out papers! OpenAI's anti-cheating tool exposed

2024-08-05


A tool that checks whether content was written with ChatGPT, with an accuracy rate of up to 99.9%.

The tool comes from OpenAI itself.

It is designed specifically to detect whether ChatGPT was used to write a paper or assignment. The idea was proposed as early as November 2022 (the same month ChatGPT was released).

But!

Useful as it is, the tool has been kept internal for two years and has still not been made public.

Why?

OpenAI surveyed loyal users and found that nearly one-third said they would abandon ChatGPT if the anti-cheating tool were deployed. The tool might also disproportionately affect non-native English speakers.

But some within the company argued that deploying the anti-cheating tool would benefit the OpenAI ecosystem.

With the two sides at an impasse, the watermark detection tool has never been released.

In addition to OpenAI, companies such as Google and Apple have also prepared similar tools. Some have already started internal testing, but none have been officially launched.

Detection was discussed before ChatGPT was even released

After ChatGPT became popular, many high school and college students used it to do their homework, so identifying AI-generated content became a hot topic in the field.

Judging from the latest information revealed, OpenAI had considered this issue long before the release of ChatGPT.

The technology was developed by Scott Aaronson, who works on safety at OpenAI and is a professor of computer science at the University of Texas.

In early 2023, OpenAI co-founder John Schulman outlined the tool's pros and cons in a Google document.

Company executives then decided they would seek advice from a range of people before taking further action.

In April 2023, a survey commissioned by OpenAI showed that only a quarter of respondents supported adding detection tools.

That same month, OpenAI surveyed its own ChatGPT users.

The results showed that nearly 30% of users said they would use ChatGPT less if watermarks were deployed.

Since then, there has been constant controversy surrounding the tool's technical maturity and user preferences.

In early June of this year, OpenAI convened senior employees and researchers to discuss the project again.

Reportedly, attendees ultimately agreed that although the technology is mature, the results of last year's ChatGPT user survey could not be ignored.

Internal documents show that OpenAI believes it needs to develop, before this fall, a plan to influence public opinion on AI transparency.

As of the time the news broke, however, OpenAI had not disclosed any such plan.

Why not make it public?

OpenAI's reluctance to release the technology comes down to two things: the technology itself, and user preference.

Let's talk about the technology first. As early as January 2023, OpenAI had developed a technique for identifying text from multiple AI models (including ChatGPT).

The technology uses a method similar to "watermarking" to embed invisible marks into text.

That way, when someone analyzes the text with a detection tool, the detector can provide a score indicating how likely it is that the text was generated by ChatGPT.
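OpenAI has not published the details of its scheme, but the general idea of statistical text watermarking can be sketched as follows. This is a minimal illustration, not OpenAI's method: the toy vocabulary, the hash-seeded "green list" of allowed next tokens, and the hard bias toward green tokens are all assumptions made for the example.

```python
import hashlib
import random

# Toy vocabulary standing in for a real tokenizer's vocabulary.
VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(prev_token: str, frac: float = 0.5) -> set:
    # Seed an RNG with a hash of the previous token, so the same
    # green/red vocabulary split can be recomputed at detection time.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * frac)))

def generate_watermarked(length: int, seed: int = 0) -> list:
    # Toy "model": samples uniformly, but always from the green list
    # of the previous token (an exaggerated, hard watermark bias).
    rng = random.Random(seed)
    tokens = [rng.choice(VOCAB)]
    for _ in range(length - 1):
        tokens.append(rng.choice(sorted(green_list(tokens[-1]))))
    return tokens

def detection_score(tokens: list) -> float:
    # Fraction of tokens that fall in their predecessor's green list:
    # roughly 0.5 for ordinary text, close to 1.0 for watermarked text.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / (len(tokens) - 1)
```

A detector only needs the hashing scheme, not the model: watermarked sequences score near 1.0 while unwatermarked text hovers around 0.5, and the longer the text, the more statistically confident the verdict. This is why the marks are invisible to readers but measurable in aggregate.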

However, the success rate at the time was only 26%, and OpenAI withdrew the tool after just seven months.

Later, OpenAI gradually raised the success rate of the technology to 99.9%, and technically speaking, the project could have been released about a year ago.

However, another controversy surrounding the technology is that some employees believe it may degrade ChatGPT's writing quality.

At the same time, employees raised concerns about the risk that people could circumvent the watermarks.

For example, any college student knows how to run text through Google Translate into another language and back again, which might erase the watermark.

And, as the saying goes, every policy breeds a countermeasure: once the watermark tool saw wide public use, netizens would surely produce a cracked version in no time.

Beyond the technology, the other major obstacle is users. Multiple surveys conducted by OpenAI show that users are not receptive to the technology.

That raises the question: what are users actually doing with ChatGPT?

For this, we can refer to a survey by The Washington Post, which examined nearly 200,000 English chat records from the WildChat dataset, generated by humans interacting with two bots built on ChatGPT.

The data shows that people mainly use ChatGPT for writing (21%) and homework help (18%).

In this light, it seems understandable that people oppose this testing technology.

So, do you agree to add a watermark when using tools like ChatGPT?