
In part one of this special report on AI security, I detailed findings from Anthropic research summarizing the many ways attackers are exploiting AI to create, launch, and scale attacks more easily, while also using AI to accelerate stolen-data analysis, false identity creation, and more.
In this second installment, I’m sharing details on one operation’s use of Anthropic Claude to develop and market malware under a ransomware-as-a-service (RaaS) model. In researching and disclosing the details, Anthropic security researchers emphasized that the operation has capitalized on AI to eliminate the traditional technical barriers to malware development, along with the skills it once required.
That’s a scary prospect for the countless companies that are potential ransomware targets, so it’s important to review Anthropic’s details closely; in this case, the company can serve as a proxy for any software developer that finds its products being misused for financial or other gain.
Details on this and other attacks were recently published by Anthropic with the goal of helping the AI “supply chain” harden its defenses against these attackers.
Malware Built Without Coding Expertise
Anthropic’s analysis of the malware/ransomware effort opens with a stark acknowledgment: “The most striking factor is the actor’s seemingly complete dependency on AI to develop functional malware” since the operator “does not appear capable of implementing encryption algorithms, anti-analysis techniques, or Windows internal manipulation without Claude’s assistance.”
Technical ineptitude notwithstanding, the group is marketing ransomware packages that include:
- Core encryption capabilities including a file encryption system, key management, and target selection that details fixed drives and network shares with prioritization of user directories
- Anti-analysis and evasion techniques including bypass of API hooking (used to intercept or modify API behavior), obfuscation of suspicious API names, and anti-debugging techniques designed to detect and evade analysis
- Performance and reliability features including multi-threading, dynamic resource management, and error handling
- Delivery and persistence features including the ability to load malware into legitimate processes and a modular architecture that lets components function independently or on an integrated basis
- Anti-recovery features including shadow copy deletion and the targeting of mapped network resources as well as local drives
- Infrastructure including a decryption utility for ransom payment verification and RSA key generation
Above and beyond the “democratization” of cybercriminal commercial work due to lowered barriers to entry, Anthropic noted that detection and attribution of malware become more challenging because the code reflects AI-generated patterns and outputs rather than human ones.
Anthropic also raises alarms that the RaaS model increases the potential for significant financial and operational impacts across industries and could portend an “unprecedented expansion of ransomware operations.”

Commercialization and Anthropic’s Response
Anthropic notes that the malware “developer” operates through a .onion domain (providing anonymity for operators and users) with an encrypted ProtonMail address. The operator actively markets across multiple forums with video demonstrations, claiming its products are for education and research while simultaneously advertising on criminal forums. The ransomware packages are priced from $400 to $1,200.
Researchers said Anthropic has responded to the ransomware operation by banning the associated account and implementing new methods to detect malware upload, modification, and generation on Claude.
At least as important, the company’s publication of this and several other misuses of its AI assistant furthers the AI industry’s knowledge and understanding of the attack methods and tools being exploited. In so doing, Anthropic has added an important resource for vendors, partners, and customers to understand the evolving threat landscape in the AI era, and has taken a step toward helping them fortify their defenses proactively.