OpenAI Announces Plans to Combat Misinformation Amid 2024 Elections

With elections expected in more than 50 countries in 2024, the threat of misinformation will be top of mind.

OpenAI, the developer of the AI chatbot ChatGPT and the image generator DALL-E, has announced new measures to prevent abuse and misinformation ahead of big elections this year.

In a January 15 post, the firm announced that it was collaborating with the National Association of Secretaries of State (NASS), the oldest non-partisan professional organization for public officials in the US, to prevent the use of ChatGPT for misinformation ahead of the US Presidential Election in November.

For instance, when asked logistical questions about the election, such as where to vote, OpenAI’s chatbot will direct users to CanIVote.org, the NASS-run authoritative website for US voting information.

“Lessons from this work will inform our approach in other countries and regions,” the firm added.

Fighting Deepfakes with Cryptographic Watermarking

To help counter deepfakes, OpenAI also said it will implement the Coalition for Content Provenance and Authenticity (C2PA) digital credentials for images generated by DALL-E 3, the latest version of its AI-powered image generator.

C2PA, a project of the Washington-based non-profit Joint Development Foundation, aims to tackle misinformation and manipulation in the digital age through cryptographic content provenance standards.

It unifies the efforts of two earlier initiatives, the Content Authenticity Initiative (CAI) and Project Origin.

Several major companies, including Adobe, X and The New York Times – which recently sued OpenAI and Microsoft for copyright infringement – are members of the coalition and actively support the development of the standard.
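In practice, a C2PA credential is a cryptographically signed manifest embedded in the image file, which anyone can inspect with the coalition's open-source tooling. As a rough illustration only (not OpenAI's implementation), the Python sketch below shells out to the open-source c2patool CLI, whose default invocation prints an asset's manifest store as JSON; the helper name and error handling are assumptions for illustration:

import json
import subprocess
import sys
from typing import Optional

def read_content_credentials(image_path: str) -> Optional[dict]:
    """Return the C2PA manifest store embedded in image_path, or None if absent."""
    # c2patool's default invocation prints the manifest store as JSON
    result = subprocess.run(
        ["c2patool", image_path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None  # no manifest found, or the file could not be parsed
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    manifest = read_content_credentials(sys.argv[1])
    if manifest is None:
        print("No C2PA Content Credentials found.")
    else:
        # A signed manifest typically names the generator (e.g. DALL-E 3)
        # and records subsequent edits to the asset
        print(json.dumps(manifest, indent=2))

Because the manifest is signed, stripping or tampering with it invalidates the credential rather than forging a clean provenance record, although removing the metadata entirely remains trivial.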

Finally, OpenAI said it was experimenting with a provenance classifier, a new tool for detecting images generated by DALL-E.

“Our internal testing has shown promising early results, even where images have been subject to common types of modifications. We plan to soon make it available to our first group of testers – including journalists, platforms, and researchers – for feedback.”

Google DeepMind has developed a similar tool, SynthID, for digitally watermarking AI-generated images and audio. Meta is also experimenting with a watermarking tool for its image generator, although Mark Zuckerberg’s company has shared little information about it.

A Move in the Right Direction

Speaking to Infosecurity, Alon Yamin, co-founder and CEO of AI-based text analysis platform Copyleaks, welcomed OpenAI’s commitment to fighting misinformation but warned it could be challenging to implement.

“Going into this election year, considered one of the biggest in recent history, and not just in America but worldwide, there is a lot of concern about how AI will be misused for political campaigns, etc., and that concern is fully justified. So, to see OpenAI taking initial steps to remove potential AI abuse is encouraging. But as we’ve witnessed with social media over the years, these actions can be difficult to implement due to the vast size of a user base,” he said.

In the UK, where the next general election is expected to be held between mid-2024 and January 2025, the Information Commissioner’s Office (ICO) launched a consultation series on generative AI on January 15.

The first chapter of the consultation is open for responses until March 1.
