OpenAI Delays Release of AI Watermarks Over Fear of Losing Business

Rosslyn Elliott | Published Aug 06, 2024 | Updated Aug 06, 2024

OpenAI has developed a highly effective method to watermark AI-generated text but has not released it due to concerns about losing users. The Wall Street Journal broke the story on August 4 in an exposé of OpenAI’s long-running internal conflict over the AI watermarking tool.

What Are AI Watermarks?

AI watermarks are subtle patterns embedded into AI-generated content that can be detected by specialized software. For text, watermarking often involves slightly altering how the AI model selects words and phrases, creating a word-based “fingerprint” that indicates the text was created by AI.
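
OpenAI has not published the details of its method, but a widely cited academic approach, the “green list” scheme, gives the general flavor. In the Python sketch below, a hash of the previous word deterministically selects a “green” subset of the vocabulary; a generator quietly favors green words, and a detector simply counts how often they appear. The parameters and function names here are illustrative assumptions, not OpenAI’s implementation.

```python
import hashlib
import random

# Illustrative parameter, not OpenAI's value: the share of the
# vocabulary marked "green" at each generation step.
GREEN_FRACTION = 0.5

def green_list(prev_word: str, vocab: list[str]) -> set[str]:
    """Derive a reproducible 'green' subset of the vocabulary from the
    previous word, so a detector can rebuild the same subset later."""
    seed = int.from_bytes(hashlib.sha256(prev_word.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * GREEN_FRACTION)))

def green_rate(words: list[str], vocab: list[str]) -> float:
    """Detection: the fraction of words drawn from their green list.
    Ordinary text lands near GREEN_FRACTION; a generator biased toward
    green words scores well above it."""
    hits = sum(words[i] in green_list(words[i - 1], vocab)
               for i in range(1, len(words)))
    return hits / max(1, len(words) - 1)
```

Because the bias only nudges word choice, individual sentences still read naturally, but over a few hundred words the statistical skew becomes hard to miss, which is how a detector can reach very high accuracy.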

OpenAI’s watermarking technology is 99.9% accurate at identifying text created by its popular ChatGPT tool. However, the company has been debating internally for nearly two years about whether to make this tool available.

Arguments For and Against Releasing Watermarking Tools

Reasons to Release

Proponents argue that releasing AI watermarking tools would have the following positive results:

  • Help educators detect AI-generated essays and assignments
  • Allow researchers to better study the spread of AI content online
  • Promote transparency around AI use in content creation
  • Potentially reduce harmful applications of AI text generation

As one anonymous OpenAI employee told The Wall Street Journal, “It’s just a matter of pressing a button” to release the technology. Many feel the potential benefits outweigh the downsides.


Reasons for Hesitation

The Verge reports that nearly 30% of surveyed ChatGPT users said they would use it less if watermarking were implemented.

But is this a reason to hesitate? Why would watermarking make users more reluctant to use AI? Unfortunately, the likely answer is that users afraid of being caught using AI are the very people who are not supposed to be using it. College students, for example, are supposed to do their own writing in order to develop their thinking skills.

So, is it ethical for OpenAI to withhold an AI watermark because the company wants to remain popular with cheaters?

Experts at the Brookings Institution have raised additional, more legitimate concerns:

  • Some fear a mandatory watermark could stifle innovation in AI applications
  • Bad actors may find ways to remove or bypass watermarks

Public Sentiment Favors Watermarking

Despite OpenAI’s hesitation, public opinion seems to support increased transparency around AI-generated content.

The Wall Street Journal cites an OpenAI-commissioned survey that found people worldwide supported AI detection tools by a 4-to-1 margin.


OpenAI’s Response to Criticism

In a blog post update, OpenAI addressed the criticism surrounding its watermarking efforts.

The company acknowledged that while its text watermarking method holds up well against minor tampering, it remains vulnerable to determined attackers.

OpenAI also expressed concern about the impact watermarks could have on non-native English speakers. As an alternative, the company is exploring metadata-based methods, which could offer benefits such as zero false positives thanks to cryptographic signing.
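
OpenAI has not described the metadata approach in detail. As a rough illustration of why signing eliminates false positives, the Python sketch below uses the standard library’s hmac module as a stand-in for the asymmetric signatures a real provider would more likely use; the key and function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical provider-held key. A production scheme would more likely use
# an asymmetric key pair so outsiders could verify tags without forging them.
SIGNING_KEY = b"provider-secret-key"

def tag_output(text: str) -> str:
    """Compute a cryptographic tag to ship with generated text as metadata."""
    return hmac.new(SIGNING_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_output(text: str, tag: str) -> bool:
    """A tag either verifies or it does not, so human-written text can
    never be flagged by accident: zero false positives."""
    return hmac.compare_digest(tag_output(text), tag)
```

The trade-off is fragility: edit a single character or strip the metadata and the tag no longer verifies, so the signal proves provenance only when it survives intact.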

OpenAI was one of seven companies that pledged to watermark AI content at a White House meeting in July 2023. Its apparent stalling on that commitment has not increased public confidence in the company.

Reddit Responds with Skepticism

On Reddit, users expressed skepticism about OpenAI’s motivations.

User tyrenanig commented with sarcasm: “Oh wow lol I never thought they would be this blatant. ‘Oh it’s the right thing to do but our users will go away because they would be labeled and we will get less investment.’”

Another user, Sudden-Degree9839, saw potential benefits for content creators. “Great news for authors and publishers. All manuscripts should be scanned for the watermark & be thrown out if detected.”

These comments reflect a broader public sentiment that favors transparency in AI-generated content.

Government Policy for AI Transparency Rules

Policymakers are taking notice of the need for AI content labeling.

Still, creating effective policy remains challenging, as AI is moving faster than the sluggish wheels of government can turn.

More and more, polls seem to indicate a strong grassroots sentiment against generative AI. CEOs and venture capitalists continue to serve as the technology’s chief cheerleaders, while the general public is not enthusiastic.

The Center for AI Policy conducted polls revealing that 78% of Americans think AI should be more regulated, and 62% support regulating quickly to prevent potential harm.


The Broader Debate on AI Ethics

The watermarking issue ties into larger conversations about the ethics of generative AI.

Critics argue that large language models like ChatGPT are essentially plagiarizing copyrighted material used in their training data.

There are also concerns about AI’s potential to flood the internet with synthetic content, making it harder to distinguish fact from fiction.

Effective watermarking could help address some of these issues.

Moving Forward: Safe AI?

As AI capabilities continue to advance, finding the right balance between innovation and responsible development will be crucial.

Generative AI companies repeatedly state their intention to develop “safe AI,” but then seem to shy away from actually putting guardrails in place.

Government regulation may be the only way to prevent some of generative AI’s potential harms to both the economy and public trust.

With government agencies becoming more assertive in the regulation of big tech in the past few years, change may be on the way.
