Google Launches Framework to Secure Generative AI

Generative AI is advancing rapidly, but so are the creative ways people find to misuse it. Many governments are trying to accelerate their regulatory plans to mitigate the risk of AI misuse.

Meanwhile, some generative AI developers are looking into how they could help secure their models and services. Google, owner of the generative AI chatbot Bard and parent company of AI research lab DeepMind, introduced its Secure AI Framework (SAIF) on June 8, 2023.

SAIF is intended to be “a bold and responsible, […] conceptual framework to help collaboratively secure AI technology,” Royal Hansen, Google’s VP of engineering for privacy, safety and security, and Phil Venables, CISO of Google Cloud, wrote in the launch announcement.

The effort builds on Google’s experience developing cybersecurity models, such as the collaborative Supply-chain Levels for Software Artifacts (SLSA) framework and BeyondCorp, its zero trust architecture used by many organizations.

Specifically, SAIF is “a first step” designed to help mitigate risks specific to AI systems, such as model theft, poisoning of training data, malicious inputs delivered through prompt injection, and extraction of confidential information from training data.

SAIF is built around six core principles:

  1. Expand strong security foundations to the AI ecosystem, including leveraging secure-by-default infrastructure protections (e.g. SQL injection mitigation techniques; see the parameterized-query sketch after this list)
  2. Extend detection and response to bring AI into an organization’s threat universe: monitoring inputs and outputs of generative AI systems to detect anomalies and using threat intelligence to anticipate attacks
  3. Automate defenses to keep pace with existing and new threats
  4. Harmonize platform-level controls to ensure consistent security across the organization, starting with Google-owned Vertex AI and Security AI Workbench, as well as Perspective API, a free and open source API developed by Google’s Jigsaw and Counter Abuse Technology teams that uses machine learning to identify ‘toxic’ comments online (see the Perspective API sketch after this list)
  5. Adapt controls to adjust mitigations and create faster feedback loops for AI deployment, including techniques such as reinforcement learning based on incidents and user feedback, updating training data sets, fine-tuning models to respond strategically to attacks, and red team exercises
  6. Contextualize AI system risks in surrounding business processes by conducting end-to-end risk assessments related to how organizations will deploy AI
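
The first principle cites SQL injection mitigation as the kind of secure-by-default protection Google wants extended to the AI ecosystem. SAIF itself does not prescribe code, but a minimal sketch of the standard mitigation, a parameterized query using Python’s built-in sqlite3 module, looks like this (the table and values are purely illustrative):

```python
import sqlite3

# Illustrative sketch of the SQL injection mitigation cited in principle 1:
# user-supplied values are bound as parameters instead of concatenated into
# the SQL string, so they can never rewrite the query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a typical injection attempt

# Vulnerable pattern (do not use): string formatting lets the input alter the query.
# query = f"SELECT email FROM users WHERE name = '{user_input}'"

# Safe pattern: the ? placeholder binds the input strictly as data.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] - the injection attempt matches no rows
```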
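Principle 4 names Perspective API as one of the platform-level controls. As a rough illustration of how such a toxicity check is typically called, here is a hedged sketch: the endpoint, payload shape and TOXICITY attribute follow Perspective’s public documentation, but should be verified against current docs, and the API key is a placeholder you would obtain from Google Cloud.

```python
import requests

# Hedged sketch of a Perspective API toxicity check (principle 4).
# Endpoint and attribute names follow the publicly documented
# commentanalyzer API; verify against current documentation before use.
PERSPECTIVE_API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
    f"?key={PERSPECTIVE_API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Return Perspective's summary TOXICITY score (0.0-1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You are a wonderful person."))
```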

“We will soon publish several open-source tools to help put SAIF elements into practice for AI security,” Hansen and Venables said.

They also vowed to expand Google’s bug hunter programs to reward and incentivize research around AI safety and security.

Read more: Ethical Hackers Could Earn up to $20,000 Uncovering ChatGPT Vulnerabilities

Finally, they said that Google was committed to helping develop international standards on AI security, such as the US National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and Cybersecurity Framework, as well as ISO/IEC 42001 AI Management System and ISO/IEC 27001 Security Management System standards.
