OpenAI Leadership Split Over In-House AI Watermarking Technology

Security

OpenAI has a tool to automatically watermark AI-generated content, but company leadership is split on whether to release it to the public.

According to The Wall Street Journal, the company behind ChatGPT began developing a tool to label content generated by its large language models (LLMs) two years ago.

People familiar with the matter told the US news outlet that the tool works by subtly changing how tokens are selected, similar to how Google's SynthID for Text works. Those changes would leave a detectable pattern, known as a watermark.
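Neither OpenAI nor the Journal has detailed the exact mechanism, but published token-selection watermarks bias the model toward a pseudorandom subset of its vocabulary at each step. The minimal Python sketch below illustrates one such approach, a "greenlist" scheme in the style of academic work by Kirchenbauer et al.; the vocabulary size, bias value, and function names are illustrative assumptions, not OpenAI's implementation.

```python
# Illustrative sketch only: OpenAI has not published its watermarking method.
# This shows the general "greenlist" idea of biasing token selection.
import hashlib
import random

VOCAB = list(range(50_000))  # toy token IDs; real vocabularies are model-specific
GREEN_FRACTION = 0.5         # share of the vocabulary marked "green" each step
BIAS = 2.0                   # logit boost given to green tokens (assumed value)

def greenlist(prev_token: int) -> set[int]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def watermarked_choice(logits: dict[int, float], prev_token: int) -> int:
    """Nudge green-token logits upward before choosing the next token."""
    green = greenlist(prev_token)
    adjusted = {t: l + (BIAS if t in green else 0.0) for t, l in logits.items()}
    # Greedy argmax keeps the sketch short; real decoders sample from a softmax.
    return max(adjusted, key=adjusted.get)
```

Because the nudged distribution stays close to the original, the text still reads naturally, yet green tokens appear more often than chance alone would predict.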

According to internal documents, the watermarks created by OpenAI's in-house tool are 99.9% effective at identifying ChatGPT output when the chatbot generates enough new text.
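That accuracy figure is consistent with how statistical watermarks behave: each watermarked token is weak evidence on its own, but the evidence compounds with length. Continuing the hypothetical greenlist sketch above (reusing its assumed greenlist() and GREEN_FRACTION), a detector simply counts green tokens and computes a z-score that grows with the square root of the text length, which is why "enough new text" matters.

```python
from math import sqrt

def detection_z_score(tokens: list[int]) -> float:
    """How far the observed green-token count sits above pure chance.

    Reuses greenlist() and GREEN_FRACTION from the sketch above (assumptions).
    """
    hits = sum(tok in greenlist(prev) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = GREEN_FRACTION * n
    variance = GREEN_FRACTION * (1 - GREEN_FRACTION) * n
    return (hits - expected) / sqrt(variance)

# A z-score above ~3 already implies a false-positive rate well under 0.1%,
# and the score keeps rising as more watermarked text is observed.
```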

The tool is reportedly ready for release, but the project has been mired in internal debate for two years.

One primary concern is that the tool could drive users away from ChatGPT.

An April 2023 survey the company conducted with loyal ChatGPT users found that nearly a third would be turned off by the technology, mainly because it could be used to detect cheating and plagiarism.


Read more: OpenAI Announces Plans to Combat Misinformation Amid 2024 Elections