The threat landscape has taken on a new dimension with the recent proliferation of generative artificial intelligence (AI), a powerful technology that can create and mimic human-like content. A 2019 incident helps illustrate what this new threat looks like in action: Hackers used AI-generated voice technology to target a UK energy company. They mimicked the CEO’s voice and tricked his subordinate into transferring approximately $243,000 to a fraudulent account.
This incident is just the tip of the iceberg, both in terms of what AI-generated deepfake audio makes possible and in malicious actors' expanding use of other forms of AI.
In this analysis, I’ll explore three of the most significant risks associated with the use of generative AI by attackers and provide insights into how chief information security officers (CISOs) can effectively safeguard their organizations against these emerging threats.
Amplifying Social Engineering Attacks
Generative AI can mimic human-like behaviors and create realistic content, which can significantly enhance social engineering attacks. Malicious actors can utilize AI-powered chatbots to craft convincing and tailored messages to deceive individuals into disclosing sensitive information or clicking on malicious links.
In the past, one of the first lines of defense against this type of attack was easy-to-pick-out spelling and grammatical errors. Malicious emails emanating from outside the United States would often be written by non-native English speakers using Google Translate. A popular example, meant to mimic an email from Microsoft, started with, “We detected something to use an application to sign in to your Windows Computer.” With AI, such messages will become more sophisticated, and it will become increasingly challenging to distinguish between AI-generated content and genuine human interactions.
To combat this risk, CISOs should focus on employee education and awareness programs. Since the landscape is changing, staff need other ways to pick out potentially malicious emails. For instance, by tagging the subject line of every email that originates outside the organization with “EXTERNAL,” CISOs can make it easier to distinguish a spoofed message from a genuine internal one.
Shortly after one of my previous employers put this into practice, our chief financial officer (CFO) received a phishing email pretending to be from our CEO, asking her to approve a seven-figure acquisition. Because the email was clearly flagged as coming from outside the organization, she picked up the phone to verify the request, and the organization avoided a significant loss.
With older methods of spotting malicious emails rendered useless by generative AI, external tagging gives staff another arrow in their quiver. Additionally, implementing multi-factor authentication and stringent access controls can add an extra layer of security, making it more difficult for hackers to exploit AI-generated messages.
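To make the tagging idea concrete, here is a minimal sketch in Python of the check a mail gateway hook might apply. The domain name, tag text, and message handling are illustrative assumptions, not a reference to any specific mail platform.

```python
# Minimal sketch of external-sender tagging, assuming inbound mail can be
# intercepted as email.message.EmailMessage objects at a gateway or hook.
# "example.com" and the tag text are illustrative assumptions.
from email.message import EmailMessage
from email.utils import parseaddr

INTERNAL_DOMAIN = "example.com"   # assumption: replace with your own domain
TAG = "[EXTERNAL] "

def tag_if_external(msg: EmailMessage) -> EmailMessage:
    """Prepend a visible tag to the subject of any message sent from outside the org."""
    _, sender = parseaddr(msg.get("From", ""))
    domain = sender.rsplit("@", 1)[-1].lower() if "@" in sender else ""
    subject = msg.get("Subject", "")
    if domain != INTERNAL_DOMAIN and not subject.startswith(TAG.strip()):
        del msg["Subject"]            # email.message requires delete-then-replace
        msg["Subject"] = TAG + subject
    return msg

if __name__ == "__main__":
    m = EmailMessage()
    m["From"] = "ceo-lookalike@attacker.test"
    m["Subject"] = "Urgent: approve this wire transfer"
    print(tag_if_external(m)["Subject"])   # [EXTERNAL] Urgent: approve this wire transfer
```

In practice, most organizations implement this as a built-in mail-flow or transport rule in their email platform rather than custom code, but the underlying check is the same.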
Evading Traditional Security Defenses
Generative AI algorithms can be trained to detect and exploit vulnerabilities in security systems, evading traditional defenses such as signature-based detection and rule-based filters. AI streamlines the process of discovering and exploiting vulnerabilities, allowing malicious actors to operate at scale with minimal manual effort. By automating the process, attackers can rapidly target numerous systems or software instances, increasing their chances of successfully compromising a target. This puts organizations at risk of data breaches, unauthorized access, and other security incidents.
To help mitigate this risk, CISOs should adopt a proactive approach by implementing advanced threat detection and response systems. Fighting fire with fire, security teams can use AI-powered cybersecurity tools to help stay one step ahead of hackers. AI-powered threat intelligence platforms such as Anomali or Recorded Future employ machine learning algorithms to analyze large datasets, detect anomalies, and identify potential threats. By embracing AI as a defensive tool, organizations can effectively enhance their ability to detect and respond to emerging threats.
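As a rough illustration of the idea, and not a depiction of how any particular vendor's product works, here is a minimal sketch of machine-learning anomaly detection on network telemetry using scikit-learn's Isolation Forest. The feature set and sample values are assumptions chosen for clarity.

```python
# Minimal sketch: flag anomalous network-flow records with an Isolation Forest.
# Feature names and sample data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline traffic: [bytes_sent, bytes_received, duration_seconds, distinct_ports]
normal = rng.normal(loc=[5_000, 20_000, 30, 3],
                    scale=[1_000, 5_000, 10, 1],
                    size=(500, 4))

# A few flows that look like data exfiltration: huge outbound volume, many ports.
suspicious = np.array([[900_000, 1_000, 600, 40],
                       [750_000, 2_000, 550, 35]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

for flow in suspicious:
    label = model.predict(flow.reshape(1, -1))[0]        # -1 = anomaly, 1 = normal
    score = model.score_samples(flow.reshape(1, -1))[0]  # lower = more anomalous
    print(f"flow={flow.tolist()} label={label} score={score:.3f}")
```

The point is not the specific model; it is that a learned baseline can surface an exfiltration-sized outlier that a static signature would never match.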
“There’s definitely opportunity when it comes to performing defensive activities using AI and LLMs. But they are not perfect. There are obviously challenges still there, but it’s great to see vendors in the ecosystem starting to apply some of this to their activities,” says Acceleration Economy cybersecurity practitioner analyst and CISO Chris Hughes.
Creating AI-Generated Malware and Adaptive Threats
Attackers can train AI algorithms to generate sophisticated malware and automate the exploitation of vulnerabilities. Hackers can use AI-powered tools to generate polymorphic malware (malware that dynamically changes its code structure and appearance while retaining its core functionality). Traditionally, malware authors had to manually modify their code to create new variants or use simple encryption techniques to obfuscate their malware. This new capability poses a significant challenge for cybersecurity professionals as they grapple with increasingly intelligent and adaptive threats.
To address this, CISOs should explore innovative approaches to cybersecurity defense. One such approach is adopting behavioral-based alerting systems instead of relying solely on legacy signature-based detection methods. By implementing advanced AI-driven technologies, security teams can leverage behavioral analytics to identify abnormal patterns and behaviors that may indicate malicious activity. Palo Alto Networks and Trend Micro, two companies on the Acceleration Economy Cybersecurity Top 10 Shortlist, provide top-notch behavior-based malware detection tools. These systems analyze user behavior, network traffic, and system activities to detect deviations from normal patterns, enabling early detection and response to potential threats.
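Here is a simplified sketch of the behavioral-analytics principle, not how Palo Alto Networks or Trend Micro implement it; the metric (files accessed per hour) and the three-sigma threshold are assumptions made for the example.

```python
# Minimal sketch of behavioral alerting: baseline a per-user metric and flag
# large deviations. The metric and threshold are illustrative assumptions.
from statistics import mean, stdev

def is_deviant(history: list[int], observed: int, sigmas: float = 3.0) -> bool:
    """Flag `observed` if it sits more than `sigmas` standard deviations above baseline."""
    baseline, spread = mean(history), stdev(history)
    return observed > baseline + sigmas * max(spread, 1e-9)

# Example: a user normally touches 20-40 files an hour, then suddenly touches 400.
hourly_file_access = [25, 31, 28, 35, 22, 30, 27, 33, 29, 26]
print(is_deviant(hourly_file_access, 400))  # True  -> raise an alert for review
print(is_deviant(hourly_file_access, 38))   # False -> within normal behavior
```

Commercial tools model far richer behavior than a single metric, but the principle is the same: alert on deviation from a learned baseline rather than on a known signature.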
Adapt to Counter Emerging Threats
As the cybersecurity landscape evolves, organizations must adapt their strategies and defenses to counter emerging threats. Generative AI has the potential to revolutionize numerous industries in a positive way, but it also presents new challenges in the hands of malicious actors. It is my strong belief that amid these challenges lies an opportunity for cybersecurity to innovate and adapt. By staying informed, embracing advanced technologies, and fostering a culture of proactive defense, CISOs can effectively mitigate the risks associated with generative AI.