Cloud Wars

Cybercriminals Are Operationalizing AI: New Findings from Google Threat Intelligence Group Reveal Escalating Risks

By Kieron Allen | February 25, 2026 | 4 Mins Read

I regularly report on the measures companies must take to ensure cyber resilience after integrating AI technologies into their business processes. However, one area that has long been discussed as a significant emerging threat to businesses is AI’s use as a vector in cyberattacks.

Recently, Google Cloud updated its findings on what these threats are, what they look like, and how companies can help prevent the fallout. This study is one of the most timely resources available for business leaders looking to learn more about this growing threat.

The research goes beyond theoretical discussions to demonstrate how attackers are leveraging AI and why it is imperative for cybersecurity defenses to evolve alongside these new tactics.

The Emerging Threat Landscape

In the “GTIG AI Threat Tracker: Distillation, Experimentation, and (Continued) Integration of AI for Adversarial Use” report, the Google Threat Intelligence Group (GTIG) reveals that cybercriminals are increasingly integrating AI into their attacks.

“By identifying these early indicators and offensive proofs of concept, GTIG aims to arm defenders with the intelligence necessary to anticipate the next phase of AI-enabled threats, proactively thwart malicious activity, and continually strengthen both our classifiers and model,” reads the report.

The report focuses on five attack methods: Model Extraction Attacks, AI-Augmented Operations, Agentic AI, AI-Integrated Malware, and the Underground “Jailbreak” Ecosystem.


  • Model Extraction Attacks: A model extraction attack (MEA) occurs when an attacker gains legitimate access to a large language model with the intention of extracting data to train another model, typically via a technique known as knowledge distillation. These attacks are on the rise, and although they don’t pose a direct threat to the average consumer, they do harm the companies that invest significant time and money in developing models. To counter the threat, the report advises: “Organizations that provide AI models as a service should monitor API access for extraction or distillation patterns.”
  • AI-Augmented Operations: The report found that, in the case of Google’s Gemini at least, government-backed adversaries were using the model for coding and scripting, target reconnaissance, vulnerability research, and post-compromise actions, often to enhance the capabilities of phishing attacks. One example cited in the report is the North Korean government-backed cyber attacker UNC2970, which used Gemini to gather open-source intelligence and “profile high-value targets to support campaign planning and reconnaissance.”
  • Agentic AI: In the previous iteration of the report, GTIG found that cybercriminals were using AI to develop new capabilities in malware, a trend that continues to grow. Threat actors have now started to explore the use of agentic AI capabilities to create more complex malware and attack tools. While the report states that these capabilities have yet to be observed “in the wild,” Google anticipates that malicious tools and services claiming to utilize agentic AI capabilities will increasingly emerge on the black market.
  • AI-Integrated Malware: The report identified new malware families, such as HONESTCUE, which incorporate AI. “We expect threat actors will continue to incorporate AI throughout the attack lifecycle including: supporting malware creation, improving pre-existing malware, researching vulnerabilities, conducting reconnaissance, and/or generating lure content,” reads the report.
  • Underground “Jailbreak” Ecosystem: The report found that many of these AI-driven attack toolkits are available on underground marketplaces. Despite claims that they were developed independently, they are actually supported by “jailbroken” publicly available APIs and open-source MCP servers. For example, the Xanthorox toolkit is marketed as a “bespoke, privacy-preserving self-hosted AI” designed for malware generation, ransomware, and phishing. In reality, it relies on third-party and commercial AI products, including Gemini.
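The report’s advice to monitor API access for extraction or distillation patterns can be sketched, very roughly, as a volume-and-harvesting heuristic: distillation requires many systematic queries that harvest long model outputs, so clients whose call count and average output size both spike are worth reviewing. Everything below — the `ApiCall` record, `flag_extraction_suspects`, and the thresholds — is a hypothetical illustration, not anything specified in the GTIG report or any Google API.

```python
# Hypothetical sketch: flag API clients whose usage pattern resembles
# model extraction (high call volume, consistently long harvested outputs).
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class ApiCall:
    """One logged model-API request (illustrative fields only)."""
    client_id: str
    prompt_tokens: int
    output_tokens: int


def flag_extraction_suspects(calls, min_calls=1000, min_avg_output=256):
    """Return sorted client IDs exceeding both thresholds.

    Thresholds are placeholders; a real service would tune them against
    its own traffic baseline and combine this with other signals.
    """
    stats = defaultdict(lambda: [0, 0])  # client_id -> [call_count, total_output_tokens]
    for call in calls:
        entry = stats[call.client_id]
        entry[0] += 1
        entry[1] += call.output_tokens
    return sorted(
        cid
        for cid, (count, total_out) in stats.items()
        if count >= min_calls and total_out / count >= min_avg_output
    )


# Usage: a bulk harvester trips both thresholds; a normal client does not.
calls = [ApiCall("bulk-client", 20, 300) for _ in range(1500)]
calls += [ApiCall("normal-client", 20, 120) for _ in range(50)]
print(flag_extraction_suspects(calls))  # → ['bulk-client']
```

A production detector would look at far richer signals (prompt diversity, timing, account linkage), but the shape of the check — aggregate per client, compare against a baseline — is the same.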

Final Thoughts

Throughout the report, Google Cloud provides examples of how it is combating the threats it has uncovered. However, it’s important to view these findings in a broader context.

AI-driven cyber threats have evolved from the ideation phase to the implementation phase. The next step is agentic-driven cyber attacks, which will no doubt send the capabilities of attackers into the stratosphere.

It’s essential to stay updated on these findings because, while this report in particular clearly reflects the work Google is doing, its findings have significant ramifications for all AI ecosystems.

The key is constant vigilance and a dedication to continually enhance cyber resilience capabilities — not just annually or monthly, but consistently.


Kieron Allen


Kieron Allen is a Cloud Wars Analyst examining innovations in, and the future impact of, the latest AI, cloud, cybersecurity, and data technology developments. In his ongoing analyses and video reports, Allen focuses on the platforms, applications, people, and ideas that will mold our digital future. After serving as the Online Editor for BBC Sky at Night Magazine and as the Editorial Assistant for BBC Focus Magazine, Kieron became a freelance journalist in 2015, when the business technology market became a key focus of his work. Kieron partners with technology start-ups and organizations that share his interests in science, social affairs, non-profit work, fashion, and the arts.
