Most practitioners in modern enterprise environments recognize that cybersecurity is increasingly a data problem: organizations are struggling under a constant barrage of alerts. CrowdStrike’s new AI-powered Indicators of Attack (IOAs) could change that. Using cloud technology and machine learning (ML), IOAs can spot threats faster and more accurately than ever before. In this analysis, I’ll look at how CrowdStrike, along with Google Cloud’s AI Cyber Defense Initiative, is ushering in a new era of smarter cybersecurity that’s ready to face the challenges ahead.
Addressing Data Overload
Organizations are drowning in countless alerts, notifications, indicators, and telemetry as they try to make sense of their data. Humans simply cannot analyze data points at the scale and pace that machines produce them. CrowdStrike’s recently announced IOAs can analyze trillions of data points to help predict and stop threats at an unprecedented pace. The AI-powered IOAs use real-time intelligence to analyze events at runtime, dynamically generating and issuing alerts to sensors across the network and enterprise, detecting and preventing malicious activity at scale.
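To make the idea of behavior-based, runtime detection more concrete, here is a minimal, hypothetical sketch in Python. It is not CrowdStrike’s implementation: the event fields, suspicious patterns, and threshold are illustrative stand-ins for what a cloud-trained ML model would actually learn.

```python
# Hypothetical sketch of behavior-based detection in the spirit of an AI-powered IOA:
# score runtime events against behavioral patterns rather than static signatures.
# All names, patterns, and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    parent: str   # parent process name
    child: str    # spawned process name
    cmdline: str  # command line observed at runtime

# Illustrative behavioral indicators: suspicious parent/child pairs and command-line traits
SUSPICIOUS_CHAINS = {("winword.exe", "powershell.exe"), ("outlook.exe", "cmd.exe")}
SUSPICIOUS_TOKENS = ("-enc", "invoke-webrequest", "rundll32")

def ioa_score(event: ProcessEvent) -> float:
    """Return a 0..1 risk score for a single runtime event (toy heuristic
    standing in for a trained model)."""
    score = 0.0
    if (event.parent.lower(), event.child.lower()) in SUSPICIOUS_CHAINS:
        score += 0.6
    score += 0.2 * sum(tok in event.cmdline.lower() for tok in SUSPICIOUS_TOKENS)
    return min(score, 1.0)

event = ProcessEvent("WINWORD.EXE", "powershell.exe", "powershell.exe -enc SQBFAFgA")
if ioa_score(event) >= 0.7:
    print("IOA triggered: block process and alert sensor fleet")
```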
The IOAs help address long-standing challenges such as false positives, which drain limited practitioner time; they also facilitate automated prevention of malicious activity and can even detect emerging classes of threats that don’t yet have a formal designation or identifier.
CrowdStrike also aims to help organizations get “left of boom,” in other words, to stay ahead of an actual compromise of IT systems and data. There are often indicators of attack before a system or organization is successfully compromised. If organizations can act on those indicators, they can halt an attack before it becomes a full-on compromise of sensitive data or systems.
Defender’s Dilemma
In addition to CrowdStrike, another industry leader is showing what AI can do to mitigate cybersecurity threats: Google Cloud, whose AI Cyber Defense Initiative I recently covered in a Cybersecurity Minute.
The AI Cyber Defense Initiative argues that AI can be leveraged to address the “defender’s dilemma,” the inability of defenders to keep pace with threats. Google is working with strategic partners such as the University of Chicago and Carnegie Mellon University to develop research and capabilities that apply AI to cyber defense.
Additionally, Google is working with 17 startups across the U.S., U.K., and EU to cultivate AI-powered cyber defense capabilities. As depicted in the image below from Google, attackers vastly outnumber defenders, and attackers only need to be right once, while defenders must be right every time.
In its publication “How AI Can Reverse the Defender’s Dilemma,” Google lays out various use cases where AI can provide value to defenders. These include summarizing complex and voluminous data such as vulnerability reports, suspicious behavior, and incident investigations. They also include classifying critical insights, such as identifying malware or vulnerabilities in code, so findings can be categorized and prioritized accordingly. AI can also facilitate attack path simulations and monitor the performance of security controls, issuing notifications when controls fail.
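As a hedged illustration of the classification and prioritization use case, the following Python sketch ranks a handful of synthetic findings with a toy scoring function. The fields, weights, and findings are invented for illustration; in practice the score would come from a trained model operating over far richer context.

```python
# Hypothetical sketch of AI-assisted triage: classify findings and rank them so
# analysts see the highest-risk items first. Fields and weights are illustrative.
findings = [
    {"type": "vulnerability", "cvss": 9.8, "asset_exposed": True,  "summary": "RCE in public web app"},
    {"type": "malware",       "cvss": 0.0, "asset_exposed": False, "summary": "Commodity trojan on lab VM"},
    {"type": "vulnerability", "cvss": 5.3, "asset_exposed": False, "summary": "Info disclosure, internal tool"},
]

def priority(finding: dict) -> float:
    """Toy prioritization: base severity plus boosts for exposure and active malware."""
    score = finding["cvss"] / 10.0
    if finding["asset_exposed"]:
        score += 0.5
    if finding["type"] == "malware":
        score += 0.3
    return score

for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):.2f}  {f['summary']}")
```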
Lastly, Google proposes using AI to create useful artifacts: detection rules, security orchestration and response playbooks, and identity and access management (IAM) rules and policies that help implement least-privileged access control.
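To illustrate that last idea, here is a minimal Python sketch that derives a least-privilege policy from the actions a service account was actually observed using. The audit-log format, principal name, and policy schema are hypothetical and do not correspond to any specific cloud provider’s API.

```python
# Hypothetical sketch of the "create" use case: grant only the actions a principal
# was observed using, instead of the broad role it was originally assigned.
import json

observed_audit_log = [
    {"principal": "svc-reports", "action": "storage.objects.get"},
    {"principal": "svc-reports", "action": "storage.objects.get"},
    {"principal": "svc-reports", "action": "bigquery.jobs.create"},
]

def least_privilege_policy(principal: str, log: list[dict]) -> dict:
    """Build a policy allowing only the actions this principal actually used."""
    used = sorted({entry["action"] for entry in log if entry["principal"] == principal})
    return {"principal": principal, "allow": used}

print(json.dumps(least_privilege_policy("svc-reports", observed_audit_log), indent=2))
```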
Conclusion
There’s a longstanding workforce challenge in cybersecurity: organizations simply can’t attract and retain enough cybersecurity talent, and they often end up trying to mitigate threats while understaffed and outgunned. Leveraging AI can help flip this paradigm, letting organizations use technology to close workforce gaps while keeping pace with the global, dynamic threats they face.
While there are undoubtedly valid concerns about the secure use of AI, defenders also need to view AI as a tool that can make them more effective, which is exactly how their adversaries are using it. By leveraging AI-powered tooling and capabilities, cybersecurity leaders and practitioners can address the defender’s dilemma and longstanding issues such as the workforce gap while dealing with the sheer scale and complexity of modern cloud-native environments.