Despite Bans, AI Code Tools Widespread in Organizations

Organizations are concerned about security threats stemming from developers using AI, according to a new Checkmarx report.

The cloud-native application security provider found that while 15% of organizations explicitly prohibit the use of AI tools for code generation, 99% say such tools are being used regardless.

Meanwhile, just 29% of organizations have established any form of governance for the use of generative AI.

These findings are part of the firm’s Seven Steps to Safely Use Generative AI in Application Security report, published on July 25, 2024.

The report included findings from 900 CISOs and application security professionals in companies in North America, Europe and Asia-Pacific with annual revenue of $750m or more.

CISOs Grapple with Generative AI Strategies

The report found that 70% of security professionals say there is no centralized strategy for generative AI, with purchasing decisions made on an ad-hoc basis by individual departments.

The company noted that CISOs are looking to build the right types of governance in order to permit their application development teams to use AI coding tools.

According to Checkmarx, 47% of respondents indicated interest in allowing AI to make unsupervised changes to code.

However, generative AI currently cannot follow secure coding practices or reliably produce secure code, which is prompting some security teams to consider AI-driven security tools to manage the growing volume of AI-generated code from development teams.

Many respondents are worried about generative AI risks such as AI hallucinations, and 80% are concerned about security threats stemming from developers using AI.

“Enterprise CISOs are grappling with the need to understand and manage new risks around generative AI without stifling innovation and becoming roadblocks within their organizations,” said Sandeep Johri, CEO at Checkmarx. “GenAI can help time-pressured development teams scale to produce more code more quickly, but emerging problems such as AI hallucinations usher in a new era of risk that can be hard to quantify.”
