#RSAC: AI Dominates RSA as Excitement and Questions Surround its Potential in Cybersecurity


Artificial intelligence (AI) tooling was the hot topic at this year’s RSA Conference, held in San Francisco. The potential of generative AI in security products has sparked excitement among cybersecurity professionals. However, questions have been raised about the practical use of AI in cybersecurity and the reliability of the data used to build AI models.

“We are at the top of the first innings of the AI impact. We have no idea of the expansiveness and what we will eventually see in terms of how AI impacts the cybersecurity industry,” M.K. Palmore, cybersecurity strategic advisor and board member at Google Cloud and Cyversity, told Infosecurity.

“I think we are all hopefully, and certainly at the company I work for, moving in a direction that shows that we see value and use in terms of how AI can have a positive impact on the industry,” he added.

Like many at the conference, Palmore acknowledged that there is more to come in AI’s development.

“I do not believe we have seen everything that is going to be changed and impacted, and as usual, as those things evolve, we’ll all have to pivot to accommodate this new paradigm of having these large language models (LLMs) and AI available to us,” he said.

Dan Lohrmann, Field CISO at Presidio, concurred with the sentiment that we are in the early days of AI in cybersecurity.

“I think we’re at the beginning of the game but I think it’s going to be transformative,” he said. Speaking about tools on the exhibition floor at RSA, Lohrmann said AI is going to transform a large percentage of the products to follow.

“I think it’s going to change attack and defense, how we do red teaming and blue teaming, for example,” he said.

However, he noted that in terms of streamlining the tools that security teams use, there is still some way to go. “I don’t think we’re ever going to get to a single pane of glass, but this is as close as I’ve seen,” he said, commenting on some of the tools with AI integrated.

Adding AI to Security Tools

During RSA 2023, many companies highlighted how they are using generative AI in security tools. Google, for example, launched its generative AI tooling and security LLM, Sec-PaLM.

Sec-PaLM is built on Mandiant’s frontline intelligence on vulnerabilities, malware, threat indicators, and behavioral threat actor profiles.

Read more: Google Cloud Introduces Generative AI to Security Tools as LLMs Reach Critical Mass

Steph Hay, director of user experience at Google Cloud, said that LLMs have finally hit a critical mass where they can contextualize information in a way they could not before. “We now have truly generative AI,” she said.

Meanwhile, Mark Ryland, director, Office of the CISO at Amazon Web Services, highlighted how threat detection can be improved with generative AI.

“We’re very focused on meaningful data and minimizing false positives. And the only way to do that effectively is with machine learning, so that’s been a core part of our security services,” he noted.

The company recently announced Amazon Bedrock, a new service for building generative AI applications on AWS that makes foundation models (FMs) from AI21 Labs, Anthropic, Stability AI and Amazon accessible via an API.
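For illustration, here is a minimal sketch of what calling a foundation model through Bedrock’s API can look like using the AWS boto3 SDK. The model ID, prompt wording and region are assumptions for the example, not details from the announcement.

```python
import json

import boto3  # AWS SDK for Python

# Assumes AWS credentials are configured and the account has been
# granted access to the chosen foundation model in this region.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative request body: Anthropic's Claude models on Bedrock
# expect a prompt wrapped in Human/Assistant turns.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the key risks of prompt injection.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # assumed model ID for the example
    contentType="application/json",
    accept="application/json",
    body=body,
)

# The response body is a stream; read and decode the JSON payload.
print(json.loads(response["body"].read())["completion"])
```

The appeal of this design is that swapping providers is a matter of changing the model ID and request body, rather than integrating a different vendor SDK.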

In addition, Tenable launched generative AI security tools specifically designed for the research community.

The announcement was accompanied by a report titled How Generative AI is Changing Security Research, which explores ways in which LLMs can reduce complexity and achieve efficiencies in areas of research including reverse engineering, debugging code, improving web app security and gaining visibility into cloud-based tools.

The report noted that LLM tools, like ChatGPT, are evolving at “breakneck speed.”

Regarding AI tools in cybersecurity platforms, Bob Huber, CSO at Tenable, told Infosecurity: “I think what those tools allow you to do is have a database for yourself. For example, if you’re looking to penetration test something and the target is X, what vulnerabilities might there be? Normally that’s a manual process and you have to go in and search, but [AI] helps you get to those things faster.”

He added that he has seen some companies hooking into open-source LLMs, but he noted that there need to be guardrails on this because the data the LLM is built on cannot always be verified as accurate. LLMs built on an organization’s own data, he said, are much more trustworthy.

There are concerns around how hooking into an open-source LLM, like GPT, could impact security. Security practitioners need to know the risks, but Huber noted that generative AI has not been around long enough for people to fully understand those risks.

These tools all aim to make the job of the defender easier, but Ismael Valenzuela, vice president of threat research & intelligence at BlackBerry, noted generative AI’s limitations.

“Like any other tool, it’s something we should use as defenders and attackers are going to use as well. But the best way to describe these generative AI tools is that they’re good as an assistant. It’s obvious that it can speed up things for both sides, but do I expect it to revolutionize everything? Probably not,” he said.

Additional reporting by James Coker 
