Ever since ChatGPT burst onto the scene in late 2022, there’s been massive buzz around generative AI.
There’s talk about malicious use cases and how they will overwhelm cybersecurity teams. There’s talk about generative AI automating a significant number of people’s jobs away.
But if you set aside the more fearful speculation, you can see plenty of positives. Technological advancements such as generative AI and large language models (LLMs) are proving to be invaluable resources to many types of work and industries.
So, how can they aid cybersecurity? That’s the question I’ll explore in this analysis.
Threat Intelligence and Detection
One of generative AI’s most promising applications is in threat intelligence and detection. Threat intelligence and threat detection are big data problems. Analysts must efficiently search feeds from numerous sources and correlate them with tactics, techniques, and procedures (TTPs) and indicators of compromise (IOCs). Communicating all this data across teams is then a significant analysis and synthesis problem.
In an ideal world, when data comes into one team, it can immediately benefit another team. But data silos can occur even within teams that are theoretically tight-knit. LLMs can help. By processing large volumes of data, they can identify patterns and anomalies that may signify a threat.
In siloed team environments, these correlations may go unnoticed because each team reviews only a narrow slice of the data. LLMs can analyze data from various sources, such as system logs, network traffic, and user activities, and detect abnormalities that might indicate a breach or an ongoing attack.
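To make this concrete, here is a minimal sketch of what an LLM-assisted log triage step could look like. It assumes OpenAI’s Python client with an API key in the environment; the model name, log lines, and prompt wording are my own illustrations, not a prescription.

```python
# Minimal sketch: ask an LLM to flag anomalous lines in a batch of logs.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

log_lines = [
    "2023-06-01 02:14:07 sshd[9182]: Failed password for root from 203.0.113.7",
    "2023-06-01 02:14:09 sshd[9182]: Failed password for root from 203.0.113.7",
    "2023-06-01 09:30:12 CRON[1123]: (www-data) CMD (php /var/www/cron.php)",
]

prompt = (
    "You are a security analyst. Review the log lines below and list any "
    "that could indicate compromise, with a one-line reason for each.\n\n"
    + "\n".join(log_lines)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0,  # keep triage output deterministic
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In practice, a production pipeline would batch far more data than this and validate the model’s output before raising alerts, which points to the engineering effort involved.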
It’s likely not feasible for in-house security teams to build and train such models themselves without significant engineering resources. This is a terrific opportunity for major vendors to build reusable models trained on security data. We’re seeing this trend play out with vendors including Palo Alto Networks, which is on the Acceleration Economy Cybersecurity Top 10 Shortlist, and Microsoft.
Phishing Detection and Prevention
AI models can also play a crucial role in phishing detection and prevention. Cybercriminals often employ social engineering tactics, using convincingly written emails to trick users into revealing sensitive information. LLMs can analyze these emails, detect phishing attempts based on the patterns they have learned, and thus help prevent potential breaches.
These models have been employed in other industries to detect sentiment and plagiarism and to suggest improvements. Detecting possible deception is a natural extension of these same linguistic analysis patterns.
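As a rough illustration, here is what a phishing triage call could look like with OpenAI’s Python client. The JSON-label contract, model name, and sample email are my own assumptions for the sketch, not an established API feature, and production code would validate the model’s output before acting on it.

```python
# Minimal sketch of LLM-based phishing triage. The JSON-label contract is
# our own convention here, not a library feature; model name is illustrative.
import json
from openai import OpenAI

client = OpenAI()

email_body = """Dear user, your mailbox quota is full.
Click http://example.com/reset within 24 hours or lose access."""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    temperature=0,
    messages=[
        {
            "role": "system",
            "content": (
                "Classify the email as PHISHING or LEGITIMATE. Respond only "
                'with JSON: {"label": "...", "signals": ["..."]}'
            ),
        },
        {"role": "user", "content": email_body},
    ],
)

# A real deployment would repair/validate the JSON before trusting it.
verdict = json.loads(response.choices[0].message.content)
if verdict["label"] == "PHISHING":
    print("Quarantine and notify the user:", verdict["signals"])
```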
Automated Response Generation
With LLMs, cybersecurity teams can automate the generation of responses to security incidents. The incident response process often entails reviewing many alerts, merging the context across all of them, and synthesizing everything into a coherent narrative for senior leaders or other external stakeholders. This effectively comes down to analyzing data and generating summaries from that data.
Employing LLMs like GPT-3.5 in this process to build draft summaries (as is happening in many fields) and communicate more clearly what happened during an incident helps drive shared understanding across teams. It falls to the security engineer working with the LLM to prompt it well, specifying which data to summarize and in what voice. This mirrors what we’re seeing in use cases across digital marketing, data analytics, and beyond.
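Here is a sketch of that prompting pattern, again using OpenAI’s Python client. The alerts, audience instructions, and model name are all illustrative stand-ins for what a real team would supply.

```python
# Sketch: draft an executive incident summary from raw alerts.
# The alerts, model name, and "voice" instructions are illustrative.
from openai import OpenAI

client = OpenAI()

alerts = [
    "14:02 EDR: mimikatz.exe blocked on host FIN-LAPTOP-12",
    "14:05 IDS: outbound connection from FIN-LAPTOP-12 to a known C2 address",
    "14:11 IAM: password reset requested for user j.doe",
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You draft incident summaries for senior leadership: plain "
                "language, no jargon, three short paragraphs, and clearly "
                "flag what is confirmed versus suspected."
            ),
        },
        {"role": "user", "content": "Summarize these alerts:\n" + "\n".join(alerts)},
    ],
)
# Output is a draft for analyst review, not a final report.
print(response.choices[0].message.content)
```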
Doing all of this faster and more efficiently matters enormously during an incident, when stress is already high and the demands on everyone’s time are huge.
Enhanced Security Training
Cybersecurity training is another area where generative AI can make a significant impact, specifically in the training and preparation that goes into incident readiness. Generative AI applications can help teams generate realistic cybersecurity scenarios, laying the groundwork to train employees on how to respond to different types of cyber threats and to stress-test their documented incident response plans and policies. These scenarios can be continuously updated and customized by security engineering teams or the vendors they work with, based on the latest threat intelligence, giving employees up-to-date and relevant training opportunities.
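A sketch of scenario generation might look like the following. The MITRE ATT&CK techniques here stand in for whatever a team’s current threat intelligence feed surfaces, and the model name and prompt are my own illustrations.

```python
# Sketch: generate a tabletop exercise from current threat intelligence.
# The TTP list is a stand-in for a real intel feed; model name is illustrative.
from openai import OpenAI

client = OpenAI()

recent_ttps = [
    "T1566 Phishing",
    "T1078 Valid Accounts",
    "T1486 Data Encrypted for Impact",
]

prompt = (
    "Write a realistic tabletop exercise for a mid-size company's incident "
    "response team. Build the scenario around these MITRE ATT&CK techniques: "
    + ", ".join(recent_ttps)
    + ". Include a timeline of injects and three discussion questions per inject."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Refreshing the technique list from a live feed is what keeps the exercises current rather than recycled.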
Chatbots and Customer Support
Security teams have a reputation for falling short on approachability and customer support. Other industries have turned to generative AI to build chatbots and auto-responders trained on internal data, FAQs, or piles of email correspondence. Cybersecurity can learn from their example.
Investing in capabilities like this, and deploying them where stakeholders already engage (or try to engage) the cybersecurity team, can reduce response times, lighten the security team’s load, and improve the overall user experience.
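Here is a deliberately simple sketch of an FAQ-grounded security chatbot. The keyword-based retrieval and FAQ entries are placeholders I’ve invented to keep the example self-contained; a real deployment would use embedding search over a proper knowledge base.

```python
# Sketch of an FAQ-grounded security chatbot. Real deployments would use
# embedding search over a knowledge base; the naive keyword match and FAQ
# entries here are placeholders to keep the example self-contained.
from openai import OpenAI

client = OpenAI()

faq = {
    "vpn": "Request VPN access via the IT portal; approval takes one business day.",
    "phishing": "Forward suspicious emails to phishing@example.com, then delete them.",
    "password": "Passwords rotate every 90 days; use the self-service reset page.",
}

def answer(question: str) -> str:
    # Naive retrieval: pull FAQ entries whose keyword appears in the question.
    context = "\n".join(v for k, v in faq.items() if k in question.lower())
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "Answer only from this context; if the question is "
                           "not covered, direct the user to the security team.\n"
                           + context,
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How do I report a phishing email?"))
```

Grounding answers in the team’s own documentation, rather than letting the model improvise, is what keeps a bot like this trustworthy enough to front a security team.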
Concluding Thoughts
Generative AI and LLMs can do much more than generate images and write blog posts: These technologies can augment the important work cybersecurity leaders do on a daily basis, especially considering the massive amounts of data and the time pressures they face. They offer a proactive approach to threat detection, automate routine tasks, enhance training, and improve customer support.
My analysis has only scratched the surface of possible use cases. I believe that as cyber threats continue to evolve, these technologies will play a crucial role in enabling cybersecurity teams and vendors to stay one step ahead, as we’re already seeing with innovative tools such as Microsoft Security Copilot.