Italy’s Privacy Watchdog Blocks ChatGPT Amid Privacy Concerns

The Italian Data Protection Authority (Garante per la protezione dei dati personali) has temporarily suspended the use of the artificial intelligence (AI) service ChatGPT in the country.

The privacy watchdog opened a probe into OpenAI’s chatbot and blocked the use of the service over allegations that it failed to comply with data collection rules. The Garante also maintained that OpenAI did not put sufficient measures in place to prevent children under the age of 13 from using ChatGPT.

“We noticed a lack of clear notice to users and all interested parties whose data are collected by OpenAI, but above all, the absence of a legal basis that justifies the collection and massive storage of personal data to ‘train’ the algorithms upon which the platform is based,” reads an announcement (in Italian), published earlier today.

According to Timothy Morris, chief security advisor at Tanium, the heart of the issue in Italy seems to be the anonymity aspect of ChatGPT.

“It comes down to a cost/benefit analysis. In most cases, the benefit of new technology outweighs the bad, but ChatGPT is somewhat of a different animal,” Morris said. “Its ability to process extraordinary amounts of data and create intelligible content that closely mimics human behavior is an undeniable game changer. There could potentially be more regulations to provide industry oversight.”

Further, the Garante criticized ChatGPT’s handling of personal data, noting that the service’s limited ability to process information accurately can result in incorrect data about individuals.

“It’s easy to forget that ChatGPT has only been widely used for a matter of weeks, and most users won’t have stopped to consider the privacy implications of their data being used to train the algorithms that underpin the product,” commented Edward Machin, a senior lawyer with Ropes & Gray LLP.

“Although they may be willing to accept that trade, the allegation here is that users aren’t being given the information to allow them to make an informed decision. More problematically […] there may not be a lawful basis to process their data.”

In its announcement, the Italian privacy watchdog also mentioned the data breach that affected ChatGPT earlier this month.

Read more on the ChatGPT breach here: ChatGPT Vulnerability May Have Exposed Users’ Payment Information

“AI and Large Language Models like ChatGPT have tremendous potential to be used for good in cybersecurity, as well as for evil. But for now, the misuse of ChatGPT for phishing and smishing attacks will likely be focused on improving capabilities of existing cybercriminals more than activating new legions of attackers,” said Hoxhunt CEO, Mika Aalto.

“Cybercrime is a multibillion dollar organized criminal industry, and ChatGPT is going to be used to help smart criminals get smarter and dumb criminals get more effective with their phishing attacks.”

OpenAI has until April 19 to respond to the Data Protection Authority. If it does not, it could face a fine of up to €20m or 4% of its annual global turnover, whichever is higher. The company had not replied to Infosecurity’s request for comment at the time of writing.
