OpenAI has been facing an internal dilemma over whether to release its system for watermarking ChatGPT-generated text. The company has had the technology ready for about a year, but opinions remain split on whether to make it public.
OpenAI’s watermarking works by subtly adjusting how the model chooses the words and phrases that follow previous ones, embedding a statistically detectable pattern in the output. The approach is reportedly “99.9% effective” at making AI text detectable when enough of it is available, which could help teachers identify students who use AI to complete writing assignments.
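OpenAI has not disclosed its actual mechanism, but a well-known published scheme in this family is the “green list” watermark of Kirchenbauer et al. (2023): a pseudorandom subset of the vocabulary, seeded by the preceding token, gets a small probability boost at each step, and a detector later checks whether “green” tokens appear more often than chance allows. The sketch below is illustrative only; every name and parameter is an assumption, not OpenAI’s design.

```python
# Minimal sketch of a "green list" statistical watermark (Kirchenbauer et
# al., 2023 style). Purely illustrative; not OpenAI's actual scheme.
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary
GREEN_FRACTION = 0.5                      # share of vocab marked "green" each step
GREEN_BOOST = 4.0                         # logit bonus applied to green tokens

def green_list(prev_token: str) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def sample_next(prev_token: str, logits: dict[str, float]) -> str:
    """Sample the next token after nudging probabilities toward the green list."""
    greens = green_list(prev_token)
    boosted = {t: l + (GREEN_BOOST if t in greens else 0.0) for t, l in logits.items()}
    total = sum(math.exp(l) for l in boosted.values())
    r, acc = random.random() * total, 0.0
    for tok, logit in boosted.items():
        acc += math.exp(logit)
        if acc >= r:
            return tok
    return next(reversed(boosted))  # floating-point fallback

def detect(tokens: list[str]) -> float:
    """Z-score of how far the green-token count exceeds what chance predicts."""
    hits = sum(cur in green_list(prev) for prev, cur in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std

# Generate 200 tokens from a flat toy distribution (a real model would
# supply the logits) and test for the watermark.
text = ["tok0"]
for _ in range(200):
    text.append(sample_next(text[-1], {t: 0.0 for t in VOCAB}))
print(f"z-score: {detect(text):.1f}")  # large positive z => pattern present
```

Because the bias only shifts which of several plausible words gets chosen, text quality is largely preserved, and the detection z-score grows with length, consistent with the report that the method needs “enough” text to work.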
A survey commissioned by OpenAI found that people worldwide supported the idea of an AI detection tool by a margin of four to one. However, nearly 30 percent of ChatGPT users said they would use the software less if watermarking were implemented. Some employees also raised concerns that the technology could be easily circumvented with tactics such as round-tripping the text through another language via machine translation, or adding and then deleting emojis.
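The toy scheme above also suggests why small edits can work as an attack: each green list is seeded by the preceding token, so insertions shift the contexts the detector checks against and dilute the statistical signal. The snippet below, which continues the earlier sketch (reusing its `text`, `detect`, and `random` import), is an experiment on that toy model, not on OpenAI’s actual system.

```python
# Mimic an "add emojis" style edit against the toy scheme: insert filler
# tokens at random positions, then re-run detection.
perturbed = list(text)
for _ in range(60):
    perturbed.insert(random.randrange(len(perturbed) + 1), "tok999")
print(f"z-score after edits: {detect(perturbed):.1f}")  # noticeably degraded
```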
Despite these challenges, employees at OpenAI still believe the watermarking approach is effective. In response to user feedback, some have suggested exploring alternative methods that might be less controversial among users, even if they remain unproven.
OpenAI thus finds itself at a critical juncture. The technology has shown promise in detecting AI-generated content, but concerns about user backlash and the ease of circumvention have complicated the decision. Before releasing the system, the company will have to weigh the value of reliable AI-text detection against the risk of driving away users and the limits of the technique itself.