LinkedIn Pauses GenAI Training Following ICO Concerns


UK data protection regulator the Information Commissioner’s Office (ICO) has welcomed a decision by LinkedIn to stop training its generative AI (GenAI) models on UK users’ information.

The ICO's executive director for regulatory risk, Stephen Almond, argued that for organizations to extract maximum value from GenAI, the public must be able to trust that their privacy rights are respected.

“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users. We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO,” he added in a statement.

“We will continue to monitor major developers of generative AI, including Microsoft and LinkedIn, to review the safeguards they have put in place and ensure the information rights of UK users are protected.”


The Microsoft-owned company confirmed last week that it had added the UK to a list of countries where members' data would not currently be used to train AI models.

“When it comes to using members’ data for generative AI training, we offer an opt-out setting,” wrote the firm’s SVP and general counsel, Blake Lawit. “At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice.”

The news comes after Meta last week confirmed it was resuming its own GenAI training program with UK users’ information, following consultation with the ICO.

The practice is effectively banned in the EU at the moment after the Irish Data Protection Commission (DPC) requested Meta pause its project, in a move the social media giant branded as “a step backwards for European innovation.”

Privacy advocates are concerned that GenAI models are being fed huge volumes of user data without those users first providing their fully informed consent. Such tools have also raised concerns over corporate data leaks.

One in five UK businesses has had potentially sensitive company data exposed via employee use of GenAI, a RiverSafe report revealed in April.
