A new white paper from ESET uncovers the risks and opportunities of artificial intelligence for cyber-defenders
28 May 2024
Artificial intelligence (AI) is the topic du jour, with the latest and greatest in the technology drawing breathless news coverage. And few industries stand to gain as much, or to be hit as hard, as cybersecurity. Contrary to popular belief, some in the field have been using AI in some form for over two decades. But cloud computing power and advanced algorithms are now combining to enhance digital defenses further and to create a new generation of AI-based applications, which could transform how organizations protect against, detect and respond to attacks.
On the other hand, as these capabilities become cheaper and more accessible, threat actors will also utilize the technology in social engineering, disinformation, scams and more. A new white paper from ESET sets out to uncover the risks and opportunities for cyber-defenders.
A brief history of AI in cybersecurity
Large language models (LLMs) may be the reason boardrooms across the globe are abuzz with talk of AI, but the technology has been put to good use in other ways for years. ESET, for example, first deployed AI over a quarter of a century ago via neural networks in a bid to improve detection of macro viruses. Since then, it has used AI in various forms to deliver:
- Differentiation between malicious and clean code samples
- Rapid triage, sorting and labelling of malware samples en masse
- A cloud reputation system, leveraging a model of continuous learning via training data
- Endpoint protection with high detection and low false-positive rates, thanks to a combination of neural networks, decision trees and other algorithms
- A powerful cloud sandbox tool powered by multilayered machine learning detection, unpacking and scanning, experimental detection, and deep behavior analysis
- New cloud and endpoint protection powered by transformer AI models
- XDR that helps prioritize threats by correlating, triaging and grouping large volumes of events
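The idea of combining different algorithm families, as in the endpoint protection example above, can be illustrated with a deliberately simplified sketch. The two "models" below (a hand-written rule standing in for a decision tree, and a weighted feature sum standing in for a trained network) and all feature names and thresholds are invented for illustration; they are not ESET's actual detection logic.

```python
# Toy ensemble: two stand-in "models" vote on whether a sample looks
# malicious, mimicking how combining algorithm families can balance
# detection rate against false positives. All fields are hypothetical.

def rule_based_score(sample: dict) -> float:
    """Decision-tree-like rule: packed binary making many network calls."""
    return 1.0 if sample["is_packed"] and sample["net_api_calls"] > 5 else 0.0

def weighted_score(sample: dict) -> float:
    """Crude stand-in for a learned model: weighted sum of features."""
    score = (0.4 * sample["is_packed"]
             + 0.1 * min(sample["net_api_calls"], 10) / 10
             + 0.5 * sample["entropy"] / 8.0)   # entropy capped at 8 bits/byte
    return min(score, 1.0)

def ensemble_verdict(sample: dict, threshold: float = 0.5) -> str:
    """Average the two scores and apply a single decision threshold."""
    avg = (rule_based_score(sample) + weighted_score(sample)) / 2
    return "malicious" if avg >= threshold else "clean"

sample = {"is_packed": True, "net_api_calls": 12, "entropy": 7.6}
print(ensemble_verdict(sample))  # → malicious
```

In practice, of course, real products use far richer features and learned (not hand-picked) weights; the point is only that an averaged verdict lets one model's caution temper another's eagerness.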
Why is AI used by security teams?
Today, security teams need effective AI-based tools more than ever, thanks to three main drivers:
1. Skills shortages continue to hit hard
At the last count, there was a shortfall of around four million cybersecurity professionals globally, including 348,000 in Europe and 522,000 in North America. Organizations need tools to enhance the productivity of the staff they do have, and provide guidance on threat analysis and remediation in the absence of senior colleagues. Unlike human teams, AI can run 24/7/365 and spot patterns that security professionals might miss.
2. Threat actors are agile, determined and well resourced
As cybersecurity teams struggle to recruit, their adversaries are going from strength to strength. By one estimate, the cybercrime economy could cost the world as much as $10.5 trillion annually by 2025. Budding threat actors can find everything they need to launch attacks, bundled into ready-made “as-a-service” offerings and toolkits. Third-party brokers offer up access to pre-breached organizations. And even nation-state actors are getting involved in financially motivated attacks – most notably North Korea, but also China and other nations. In states like Russia, the government is suspected of actively nurturing anti-West hacktivism.
3. The stakes have never been higher
As digital investment has grown over the years, so has reliance on IT systems to power sustainable growth and competitive advantage. Network defenders know that if they fail to prevent or rapidly detect and contain cyberthreats, their organization could suffer major financial and reputational damage. A data breach costs on average $4.45m today. But a serious ransomware breach involving service disruption and data theft could hit many times that. One estimate claims financial institutions alone have lost $32bn in downtime due to service disruption since 2018.
How is AI used by security teams?
It’s therefore no surprise that organizations are looking to harness the power of AI to help them prevent, detect and respond to cyberthreats more effectively. But exactly how are they doing so? By correlating indicators in large volumes of data to identify attacks. By flagging malicious code through suspicious activity that stands out from the norm. And by helping threat analysts interpret complex information and prioritize alerts.
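The first of these, correlating indicators across large volumes of data, can be sketched in a few lines. The event records and field names below are invented for illustration: the idea is simply that no single event is damning, but several distinct indicator types converging on one host is worth an analyst's attention.

```python
from collections import defaultdict

# Hypothetical event stream; host names and indicator labels are
# invented for this example, not taken from any real product.
events = [
    {"host": "srv-01", "indicator": "failed_login"},
    {"host": "srv-01", "indicator": "new_admin_account"},
    {"host": "srv-01", "indicator": "outbound_c2_beacon"},
    {"host": "ws-17",  "indicator": "failed_login"},
]

def correlate(events, min_distinct=3):
    """Flag hosts where several distinct indicator types co-occur."""
    by_host = defaultdict(set)
    for e in events:
        by_host[e["host"]].add(e["indicator"])
    return [h for h, inds in by_host.items() if len(inds) >= min_distinct]

print(correlate(events))  # → ['srv-01']: three distinct indicators converge
```

Real systems add time windows, weighting and learned baselines on top of this, but the core correlation step is exactly this kind of grouping.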
Here are a few examples of current and near-future uses of AI for good:
- Threat intelligence: LLM-powered GenAI assistants can make the complex simple, analyzing dense technical reports to summarize the key points and actionable takeaways in plain English for analysts.
- AI assistants: Embedding AI “copilots” in IT systems may help to eliminate dangerous misconfigurations which would otherwise expose organizations to attack. This could work as well for general IT systems like cloud platforms as security tools like firewalls, which may require complex settings to be updated.
- Supercharging SOC productivity: Today’s Security Operations Center (SOC) analysts are under tremendous pressure to rapidly detect, respond to and contain incoming threats. But the sheer size of the attack surface and the number of tools generating alerts can often be overwhelming. It means legitimate threats fly under the radar while analysts waste their time on false positives. AI can ease the burden by contextualizing and prioritizing such alerts – and possibly even resolving minor alerts.
- New detections: Threat actors are constantly evolving their tactics, techniques and procedures (TTPs). But by combining indicators of compromise (IoCs) with publicly available information and threat feeds, AI tools could scan for the latest threats.
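The SOC triage idea in the list above can be made concrete with a minimal scoring pass. The alert fields, weights and "crown jewel" flag below are all assumptions invented for the example; AI-assisted tools would learn such weightings rather than hard-code them.

```python
# Toy severity scoring: rank incoming alerts so analysts see the most
# consequential ones first. All fields and weights are hypothetical.

SEVERITY_WEIGHT = {"critical": 3.0, "high": 2.0, "medium": 1.0, "low": 0.5}

def triage_score(alert: dict) -> float:
    score = SEVERITY_WEIGHT[alert["severity"]]
    if alert.get("asset_is_crown_jewel"):
        score *= 2          # business-critical systems jump the queue
    if alert.get("matches_known_ioc"):
        score += 1.5        # corroborated by threat intelligence
    return score

alerts = [
    {"id": 1, "severity": "low"},
    {"id": 2, "severity": "high", "matches_known_ioc": True},
    {"id": 3, "severity": "medium", "asset_is_crown_jewel": True},
]
ranked = sorted(alerts, key=triage_score, reverse=True)
print([a["id"] for a in ranked])  # → [2, 3, 1]
```

Even this crude ranking shows the payoff: the low-severity alert drops to the bottom, while the IoC-corroborated alert rises to the top of the analyst's queue.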
How is AI being used in cyberattacks?
Unfortunately, the bad guys have also got their sights on AI. According to the UK’s National Cyber Security Centre (NCSC), the technology will “heighten the global ransomware threat” and “almost certainly increase the volume and impact of cyber-attacks in the next two years.” How are threat actors currently using AI? Consider the following:
- Social engineering: One of the most obvious uses of GenAI is to help threat actors craft highly convincing and near-grammatically perfect phishing campaigns at scale.
- BEC and other scams: Once again, GenAI technology can be deployed to mimic the writing style of a specific individual or corporate persona, to trick a victim into wiring money or handing over sensitive data or logins. Deepfake audio and video could also be deployed for the same purpose. The FBI has issued multiple warnings about this in the past.
- Disinformation: GenAI can also take the heavy lifting out of content creation for influence operations. A recent report warned that Russia is already using such tactics – which could be replicated widely if found successful.
The limits of AI
For good or bad, AI has its limitations at present. It may return high false-positive rates and, without high-quality training sets, its impact will be limited. Human oversight is also often required to check that output is correct, and to train the models themselves. It all points to the fact that AI is a silver bullet for neither attackers nor defenders.
In time, their tools could square off against each other – one seeking to pick holes in defenses and trick employees, while the other looks for signs of malicious AI activity. Welcome to the start of a new arms race in cybersecurity.
To find out more about AI use in cybersecurity, check out ESET’s new report.