The Evolving Role of Generative AI as a Cybersecurity Force Multiplier
Nine months ago, conversations about generative AI were limited. Now it’s all anyone wants to talk about. AI is among the most disruptive technologies of our time, offering unmatched speed and efficiency, and for that very reason its use is a double-edged sword. Adversaries are increasingly employing AI-based, automated tools to create and launch attacks. At the same time, cybersecurity vendors are building AI capabilities into their products as the market demands them.
The market for generative AI-based cybersecurity platforms and solutions is predicted to reach $11.2 billion by 2032, up from $1.6 billion in 2022. Canalys estimates that more than 70% of businesses will have their cybersecurity operations supported by generative AI tools within the next five years.
Leading companies are responding quickly to the changing tides. On July 11, 2023, KPMG announced a $2 billion investment in AI and cloud services through an expanded partnership with Microsoft. Betting on the new technology, the firm expects to generate an additional $12 billion in revenue from cybersecurity, cloud migrations, and applications that use generative AI.
AI as a Security Force Multiplier
We’ve all seen the coverage of “AI replacing our jobs,” but in cybersecurity we’re nowhere near that. For now, we’ve barely scratched the surface of what AI can do. One thing is certain: AI lacks the human capacity for subjective judgment, which is critical for making high-level decisions.
If AI follows the path of past innovative technologies, it is more likely to spur growth than to eliminate cybersecurity jobs. To harness the full value of AI as a force multiplier, companies will need to hire data analysts and security professionals who can interpret and validate its outputs. On the other side of the coin, with an already gaping talent shortage, generative AI could also help junior analysts augment their capabilities.
At a minimum, AI can alleviate repetitive and mundane tasks, such as:
- Writing scripts for security analysts, or making existing ones more efficient.
- Sifting through vast amounts of data to identify threats.
- Detecting patterns within massive quantities of data that human analysts cannot see. For example, AI could detect that adversaries were using the name Mirai as code for a botnet (a minimal sketch of this kind of detection follows this list).
- Identifying and triaging false positives more accurately as it learns over time.
- Monitoring sensitive data 24/7 without tiring, so threats are identified in real time rather than left until the start of the next working day.
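To make the pattern-detection point concrete, here is a minimal sketch of anomaly detection over login events using scikit-learn's IsolationForest. The file name, column names, and contamination threshold are illustrative assumptions, not part of any vendor's product; in practice the features would come from your SIEM or log pipeline.

```python
# A minimal sketch, assuming a hypothetical CSV export of login events
# (auth_events.csv) with numeric features already extracted from raw logs.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical export: one row per login event.
events = pd.read_csv("auth_events.csv")  # columns: user, hour_of_day, bytes_sent, failed_attempts

features = events[["hour_of_day", "bytes_sent", "failed_attempts"]]

# Treat roughly the most unusual 1% of events as candidates for review.
model = IsolationForest(contamination=0.01, random_state=42)
events["flag"] = model.fit_predict(features)  # -1 = outlier, 1 = normal

suspicious = events[events["flag"] == -1]
print(suspicious[["user", "hour_of_day", "failed_attempts"]])
```

The model only narrows the haystack; an analyst still reviews the flagged rows, which is exactly the human-in-the-loop role described above.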
How Our Partners Are Using AI
At Cadre, we pride ourselves on being your cybersecurity partner. That means not only creating an ecosystem of technology partners that we feel are the best fit for protecting your environment but also staying in the know to answer your questions.
So how will AI impact technologies you are using now or considering using in the future? Below are examples of how some of our partners are using or planning to use AI:
Palo Alto Networks: CEO Nikesh Arora remarked on the company’s latest earnings call that Palo Alto Networks sees “significant opportunity as we begin to embed generative AI into our products and workflows,” adding that the company intends to deploy a proprietary Palo Alto Networks security large language model (LLM) in the coming year.
SecurityScorecard: The first and only security ratings platform to integrate with OpenAI’s GPT-4 system. Using natural language queries, CISOs and practitioners can ask questions about their cyber exposure and understand where their security gaps are.
SentinelOne: In April 2023, SentinelOne announced a threat-hunting platform that integrates multiple layers of AI technology. Aimed at defeating malicious attacks, the platform offers real-time, autonomous responses to threats across the enterprise.
Zscaler: At the Zenith Live 2023 event, the company announced three generative AI projects: Security AutoPilot with Breach Prediction, Zscaler Navigator, and Multi-Modal DLP. Zscaler also noted that it relies on customized LLMs to predict breaches and ensure policies are set and executed with greater precision.
Interested in learning more about these partners and others? Visit our Technology Partners page.