
Tapping the experience of cybersecurity experts in more than two dozen companies, startup Pillar Security has codified an AI security framework that represents another solid step forward in the industry’s efforts to provide strategy, governance, and tools that ensure safe operations for AI and agents.
Those participating are a who’s who of Fortune 500 and leading AI and cloud software firms: AT&T, Corning, Philip Morris, Microsoft, Google Cloud, SAP, and ServiceNow.
The Secure AI Lifecycle Framework (SAIL) comes on the heels of other critical AI security developments and insights that aim to make data and applications secure as usage of the underlying AI technology accelerates. Those previous initiatives that have been analyzed in Cloud Wars include:
The SAIL framework lays out the AI development lifecycle and current landscape, more than 70 risks, and a set of mitigations that align with other leading frameworks – making it a comprehensive resource for business and IT leaders. SAIL is a “helpful tool for security and software practitioners building with and on AI systems,” said Aquia CEO Chris Hughes, a cybersecurity expert who contributed to the framework.
The goals of SAIL:
- Address the threat landscape by providing a detailed library of the mapped AI-specific risks
- Define capabilities and controls needed for a robust AI security program
- Facilitate and accelerate secure AI adoption while meeting the compliance requirements of AI users and their specific industries.
Core SAIL Principles
The SAIL framework (outlined in an in-depth whitepaper) “harmonizes” with and builds upon existing standards, specifically: the risk management governance of NIST AI Risk Management Framework, the management system structures of ISO 42001, vulnerability identification of OWASP’s Top 10 for LLMs, and risk identification provided by frameworks including the Databricks AI Security Framework.
“SAIL serves as the overarching methodology that bridges communication gaps between AI development, MLOps, LLMOps, security, and governance teams. This collaborative, process-driven approach ensures security becomes an integral part of the AI journey — from policy creation through runtime monitoring — rather than an afterthought,” the framework document states.
These are the seven foundational phases of SAIL — and the document lays out risks within each of the seven categories:
Plan: AI Policy & Safe Experimentation
This phase covers the imperative of aligning AI with business goals, regulatory requirements, and internal privacy requirements, as well as ethical standards. It relies on threat modeling to identify AI risks early.
In this phase, a customer is expected to define how data, models, and third-party components can be introduced — safely — into development workflows. The goal: ensure innovation is enabled, securely.
Risk examples: inadequate AI policy, governance misalignment, inadequate compliance mapping.
Code/No Code: AI Asset Discovery
To address asset sprawl and related issues including Shadow AI, this phase focuses on discovering and documenting every model, dataset, AI asset, MCP server, and tool.
SAIL advocates use of automated discovery tools to promote policy awareness and institute centralized AI governance.
Risk examples: incomplete asset inventory, Shadow AI deployment, unidentified third-party integrations.
Build: AI Security Posture Management
This phase focuses on modeling system-wide security posture and prioritizing protections based on risk so users understand how assets interact and where risks can arise. Posture management prevents reactive security approaches; it identifies chokepoints, overexposed connections, and weak configurations early.
Mitigation guidance includes promoting strict classification protocols, continuous documentation audits, and thorough validation to ensure protections are in place before systems go live.
Risk examples: data poisoning and integrity issues, model backdoor insertion or tampering, vulnerable AI frameworks and libraries.
Test: AI Red Teaming
Red teaming tests systems with adversarial approaches and simulated attacks in order to challenge assumptions, validate defenses, and identify vulnerabilities before real threats exploit them. Red teaming emulates the creativity and persistence of attackers, making it a powerful tool for exposing overlooked weaknesses.
SAIL’s recommended red teaming approach relies on standardized taxonomies, trained offensive security staff, and risk-aligned testing scenarios.
Risk examples: Untested models, incomplete red-team coverage, lack of risk-assessment process.

AI Agent & Copilot Summit is an AI-first event to define opportunities, impact, and outcomes with Microsoft Copilot and agents. Building on its 2025 success, the 2026 event takes place March 17-19 in San Diego. Get more details.
Deploy: Runtime Guardrails
This phase introduces safeguards that operate in real time, including filtering inputs, sanitizing outputs, and enforcing runtime policies. Because AI behavior can shift during deployment, live monitoring and enforcement are essential for detecting anomalies, malicious inputs, or emerging risks.
To reduce this risk, SAIL advocates hardening of prompts, rigorous input validation, and adversarial testing.
Risk examples: insecure API endpoint configuration, unauthorized system prompt update/ tampering, direct prompt injection.
Operate: Safe Execution Environments
This phase focuses on creating sandboxed environments for high-risk actions. Operating AI in isolation limits blast radius if something goes wrong, especially for autonomous systems capable of executing their own code or interacting with sensitive infrastructure.
SAIL suggests mitigations to such risks, including runtime restrictions, mandatory code reviews, and strict audit trails for autonomous actions.
Risk examples: autonomous code execution abuse, unrestricted API/tool invocation, dynamic/on-the-fly dependence injection
Monitor: AI Activity Tracing
By continuously monitoring AI behavior and performance, teams can identify drift, respond to incidents, and ensure regulatory compliance for transparency and accountability.
For example, a model trained on customer reviews may slowly lose accuracy as language trends change, but without alerts or validation, this drift often goes unnoticed until trust is compromised.
SAIL mitigations include ongoing performance checks, drift detection triggers, and telemetry pipelines that support fast investigation and reliable model updates.
Risk examples: insufficient AI interaction logging, missing real-time security alerts, undetected model/drift.
Ask Cloud Wars AI Agent about this analysis