
Microsoft is ramping up efforts to strengthen both security and security operations by tapping the power of AI agents. The company has detailed two new agentic AI initiatives to combat phishing and malware.
The first is a now-public preview of a Phishing Triage Agent that was previously disclosed. The second is a prototype that taps AI for malware detection and classification. In both cases, the work focuses on the Microsoft Defender platform.
Phishing Detection at Scale
The Phishing Triage Agent for Defender applies AI to a highly repetitive task that challenges security ops teams: handling user-submitted phishing reports. The agent triages thousands of alerts daily, typically within 15 minutes of detection.
Microsoft detailed the scale of the problem when it comes to managing phishing: Defender for Office 365 detected more than 775 million emails with malware in a 12-month period. In most organizations, more than 90% of reported emails turn out to be false positives. The company said attackers increasingly use AI to write phishing messages that appear personalized, thereby making them harder to detect.
The Phishing Triage Agent leverages large language models (LLMs) to perform semantic evaluation of email content, inspect URLs and attached files, and detect intent in order to determine whether a submission is phishing or a false alarm. Unlike past tools based on pre-coded logic, the agent dynamically interprets the context of each email to reach a conclusion.
The agent evolves as analysts reclassify incidents and provide natural language feedback explaining why a particular verdict was correct or not. In response, the agent refines its reasoning and adapts to the organization’s specific needs, patterns, and nuances.
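Microsoft has not published the agent's internals, but the pattern it describes, LLM-driven semantic evaluation combined with a feedback loop from analyst reclassifications, can be sketched in a few lines of Python. Everything below (TriageAgent, the prompt format, the llm_complete callable) is a hypothetical illustration under those assumptions, not Microsoft's implementation.

import json
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Verdict:
    label: str       # "phishing" or "false_alarm"
    rationale: str   # model's explanation, retained for analyst review

@dataclass
class TriageAgent:
    llm_complete: Callable[[str], str]   # stand-in for any LLM endpoint: prompt in, JSON string out
    analyst_feedback: List[str] = field(default_factory=list)

    def triage(self, subject: str, body: str, urls: List[str]) -> Verdict:
        # Semantic evaluation: the model judges content and intent, conditioned on
        # prior analyst feedback so verdicts adapt to the organization over time.
        prompt = (
            "You review user-reported emails and decide whether each one is phishing.\n"
            'Respond with JSON: {"label": "phishing" | "false_alarm", "rationale": "..."}\n'
            "Analyst feedback to respect:\n"
            + ("\n".join(self.analyst_feedback) or "none") + "\n\n"
            + f"Subject: {subject}\nBody: {body}\nURLs: {urls}\n"
        )
        data = json.loads(self.llm_complete(prompt))
        return Verdict(label=data["label"], rationale=data["rationale"])

    def add_feedback(self, note: str) -> None:
        # Natural-language feedback from a reclassified incident is folded into future prompts.
        self.analyst_feedback.append(note)

The point of the sketch is only that the triage loop is prompt-driven rather than rule-driven: each analyst correction changes the context the model sees, not a hard-coded rule set.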
“This AI-powered agent autonomously triages user-reported phishing emails, acting as a force multiplier to security teams, helping them scale their response and reduce repetitive investigation work,” said Microsoft Corporate Vice President Dorothy Li in a LinkedIn post about the agent.
Organizations that meet the prerequisites can join the Phishing Triage Agent Public Preview, available through a trial directly in the Microsoft Defender portal.

AI-Powered Malware Detection
Microsoft also detailed a prototype AI agent that autonomously analyzes and classifies software as it seeks out malware. The prototype automates the demanding process of reverse engineering a software file to determine whether it is malicious. It was developed through a collaboration between Microsoft Research, Microsoft Defender Research, and Microsoft Discovery & Quantum.
The Microsoft Defender platform scans more than one billion active devices monthly, and its detections routinely require manual review by experts. Unlike other AI applications in security, Microsoft said, the system must make judgment calls without definitive validation beyond expert review, and many software behaviors don’t clearly indicate whether a sample is malicious.
The resulting ambiguity requires analysts to investigate each sample while building evidence to determine whether the software is malicious or benign. This creates major automation and scalability challenges.
The new prototype, dubbed Project Ire, uses specialized reverse-engineering tools to conduct both low-level binary analysis and high-level interpretation of code. Evaluation begins with a triage process that identifies the file’s type and structure. The LLM then calls the tools to identify and summarize key functions, building an auditable trail that supports secondary review by security teams.
A validator tool cross-checks claims in the report against the chain of evidence that’s been created. A final report classifies the sample as malicious or benign.
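Project Ire’s code has not been released; the following is a heavily simplified, hypothetical Python sketch of the workflow just described (triage, tool-driven analysis feeding an auditable evidence chain, then a validation pass before the verdict). The class and function names are invented for illustration and are not Microsoft’s.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Evidence:
    tool: str    # which analysis step produced this entry
    claim: str   # the recorded claim, kept for later cross-checking

@dataclass
class BinaryAnalyzer:
    tools: Dict[str, Callable[[bytes], str]]   # e.g. {"decompiler": ..., "sandbox": ...}
    evidence: List[Evidence] = field(default_factory=list)

    def triage(self, sample: bytes) -> str:
        # Step 1: identify file type and structure (a real system would fully parse PE/ELF headers).
        kind = "PE" if sample[:2] == b"MZ" else "unknown"
        self.evidence.append(Evidence("triage", f"file type identified as {kind}"))
        return kind

    def analyze(self, sample: bytes) -> None:
        # Step 2: each reverse-engineering tool summarizes behavior; every claim is
        # appended to the evidence chain so the reasoning stays auditable.
        for name, tool in self.tools.items():
            self.evidence.append(Evidence(name, tool(sample)))

    def report(self, supported: Callable[[Evidence], bool]) -> str:
        # Step 3: a simplified validation pass keeps only claims the caller can
        # substantiate, then bases the malicious/benign verdict on what survives.
        upheld = [e for e in self.evidence if supported(e)]
        verdict = "malicious" if any("malicious" in e.claim for e in upheld) else "benign"
        return f"verdict={verdict}; claims_recorded={len(self.evidence)}; claims_upheld={len(upheld)}"

In the actual prototype the validator cross-checks the LLM’s report against the chain of evidence; here that role is compressed into a single caller-supplied check to keep the sketch short.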
In one early evaluation, the classifier correctly identified 90% of all files and flagged just 2% of benign files as threats, a low false-positive rate that demonstrates strong potential for deployment in security operations.
Based on this and other evaluations, the Project Ire prototype will be leveraged inside Microsoft’s Defender organization for threat detection and software classification, the company said.
These latest advances clearly demonstrate the power of AI to help fend off attackers, especially for high-volume tasks where security teams struggle to achieve the scale that effective defense requires. They are another indication that the security vendor community is racing at least as fast as the attackers to deploy AI, in this case to protect corporate assets.