
I regularly report on the measures companies must take to ensure cyber resilience after integrating AI technologies into their business processes. One area long discussed as a significant emerging threat to businesses, however, is AI’s use as a vector in cyberattacks.
Recently, Google Cloud updated its findings on what these threats are, what they look like, and how companies can mitigate the fallout. The study is one of the most timely resources available for business leaders looking to understand this growing threat.
The research goes beyond theoretical discussions to demonstrate how attackers are leveraging AI and why it is imperative for cybersecurity defenses to evolve alongside these new tactics.
The Emerging Threat Landscape
In the “GTIG AI Threat Tracker: Distillation, Experimentation, and (Continued) Integration of AI for Adversarial Use” report, the Google Threat Intelligence Group (GTIG) reveals findings showing that cybercriminals are increasingly integrating AI into their attacks.
“By identifying these early indicators and offensive proofs of concept, GTIG aims to arm defenders with the intelligence necessary to anticipate the next phase of AI-enabled threats, proactively thwart malicious activity, and continually strengthen both our classifiers and model,” reads the report.
The report focuses on five attack methods: Model Extraction Attacks, AI-Augmented Operations, Agentic AI, AI-Integrated Malware, and the Underground “Jailbreak” Ecosystem.

- Model Extraction Attacks: A model extraction attack (MEA) occurs when an attacker gains legitimate access to a large language model with the intention of extracting data to train another model, typically through a method called knowledge distillation. These attacks are on the rise. Although they don’t present a direct threat to the average consumer, they do impact the companies developing models, which invest time and massive funds in the process. To counter the threat, the report says: “Organizations that provide AI models as a service should monitor API access for extraction or distillation patterns.”
- AI-Augmented Operations: The report found that, in the case of Google’s Gemini at least, government-backed adversaries were using the model for coding and scripting, target reconnaissance, vulnerability research, and post-compromise actions, often with the goal of enhancing phishing attacks. One example cited in the report is the North Korean government-backed cyber attacker UNC2970, which used Gemini to gather open-source intelligence and “profile high-value targets to support campaign planning and reconnaissance.”
- Agentic AI: In the previous iteration of the report, GTIG found that cybercriminals were using AI to develop new capabilities in malware, and this trend continues to grow. Threat actors have now started to explore agentic AI capabilities to create more complex malware and attack tools. While the report states that these capabilities have yet to be observed “in the wild,” Google anticipates that malicious tools and services claiming to use agentic AI will increasingly emerge on the black market.
- AI-Integrated Malware: The report identified new malware families, such as HONESTCUE, which incorporate AI. “We expect threat actors will continue to incorporate AI throughout the attack lifecycle including: supporting malware creation, improving pre-existing malware, researching vulnerabilities, conducting reconnaissance, and/or generating lure content,” reads the report.
- Underground “Jailbreak” Ecosystem: The report found that many of these AI-driven attack toolkits are available on underground marketplaces. Despite claims that they were developed independently, they are actually supported by “jailbroken” publicly available APIs and open-source MCP servers. For example, the Xanthorox toolkit is marketed as a “bespoke, privacy-preserving self-hosted AI” designed for malware generation, ransomware, and phishing. In reality, it relies on third-party and commercial AI products, including Gemini.
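The report’s recommendation that model providers monitor API access for extraction or distillation patterns can be sketched as a simple heuristic: extraction campaigns tend to issue a very high volume of mostly unique prompts, whereas normal clients repeat themselves. Below is a minimal, illustrative sketch of that idea; the class name, thresholds, and detection logic are hypothetical assumptions, not anything prescribed by Google’s report:

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class ClientStats:
    total_queries: int = 0
    unique_prompts: set = field(default_factory=set)


class ExtractionMonitor:
    """Flag API clients whose query patterns resemble model distillation:
    a high volume of queries where nearly every prompt is distinct."""

    def __init__(self, volume_threshold: int = 10_000,
                 diversity_threshold: float = 0.9):
        # Hypothetical thresholds; real values would be tuned per service.
        self.volume_threshold = volume_threshold
        self.diversity_threshold = diversity_threshold
        self.stats: dict[str, ClientStats] = defaultdict(ClientStats)

    def record(self, client_id: str, prompt: str) -> bool:
        """Record one API call; return True if the client now looks
        like it is running an extraction/distillation campaign."""
        s = self.stats[client_id]
        s.total_queries += 1
        s.unique_prompts.add(prompt)
        diversity = len(s.unique_prompts) / s.total_queries
        return (s.total_queries >= self.volume_threshold
                and diversity >= self.diversity_threshold)
```

In practice a provider would combine a signal like this with rate limiting, request-content analysis, and billing anomalies; this sketch only shows the shape of the pattern the report describes.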
Final Thoughts
Throughout the report, Google Cloud provides examples of how it is combating the threats it has uncovered. However, it’s important to view these findings in a broader context.
AI-driven cyber threats have evolved from the ideation phase to the implementation phase. The next step is agentic-driven cyber attacks, which will no doubt send the capabilities of attackers into the stratosphere.
It’s essential to stay updated on these findings because, while this report in particular clearly reflects the work Google is doing, its findings have significant ramifications for all AI ecosystems.
The key is constant vigilance and a dedication to continually enhance cyber resilience capabilities — not just annually or monthly, but consistently.