
A set of recent enhancements to Microsoft Copilot Studio includes new multi-agent orchestration features spanning Microsoft Fabric and the Microsoft 365 Agents SDK, along with support for the Agent2Agent (A2A) protocol.
Additional Copilot Studio updates include prompt-building improvements that integrate prompt iteration directly into each agent’s Tools tab, as well as safeguards against harmful content.
Multi-Agent Updates
New multi-agent capabilities, expected to be generally available in the near term, can be used to connect and orchestrate agents across an ecosystem. These updates apply to Microsoft Fabric, the Microsoft 365 Agents SDK, and A2A communication—all three of which are detailed below.
With Fabric multi-agent support, Copilot Studio-developed agents can work with Fabric agents to reason over enterprise data within their Fabric data estates, without the need to tap engineering resources for data-intensive business requirements. Instead of working with limited or disconnected data, these agents will be able to operate with full business context and provide the most relevant outputs.
Meanwhile, using the new Microsoft 365 Agents SDK, teams can orchestrate Copilot Studio agents alongside agents built for Microsoft 365. Instead of recreating the same logic (for example, retrieving data or applying business rules) across multiple agents, users can combine and reuse existing capabilities. This makes it easier to compose workflows from already-built agents rather than starting from scratch.
With A2A support, Copilot Studio agents can directly communicate with, and delegate work to, other agents (first-, second-, or third-party) using the A2A protocol, which provides broad-based connectivity. That’s an important consideration given that no one vendor or framework can be the sole provider of AI agents.
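To make the delegation pattern concrete, the sketch below builds an A2A-style JSON-RPC 2.0 request that one agent could send to a peer. The field names ("message/send", role/parts messages) follow the public A2A draft, but this is an illustrative sketch, not Copilot Studio's internal implementation; the instruction text and delivery details are assumptions.

```python
import json
import uuid

def build_a2a_task_request(instruction: str) -> dict:
    """Build a minimal A2A-style JSON-RPC 2.0 request asking a peer agent
    to handle a task. Field names follow the public A2A draft
    ("message/send" with a role/parts message); treat them as illustrative."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                "parts": [{"kind": "text", "text": instruction}],
            }
        },
    }

# In practice this payload would be POSTed to the peer agent's A2A
# endpoint (discovered via its agent card); here we just print it.
request = build_a2a_task_request("Summarize open insurance claims filed this week.")
print(json.dumps(request, indent=2))
```

Because the protocol is plain JSON-RPC over HTTP, any compliant agent can accept the request regardless of which framework built it, which is the interoperability point the A2A support is meant to deliver.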
Better Prompts, More Controls
Microsoft is also adding features to simplify and protect prompts within Copilot Studio-built agents. A new Immersive Prompt Builder, now generally available, brings prompt editing directly into each agent’s Tools tab. A user can update instructions, switch models, add inputs or knowledge, and test changes—all in one place. Previously, refining an agent’s behavior meant breaking context and switching between different tools; now users and developers can iterate while staying grounded in the agent as it’s being built or refined.
Immersive prompt building should be particularly impactful where prompt behavior is tied to domain knowledge and policy nuance. For example, a team building an agent to support healthcare documentation will need to refine instructions, swap in knowledge sources for optimal quality, and test outputs for accurate terminology. Doing all of that from a single workspace, in the context of the agent, should accelerate iteration toward a production-ready agent.
Copilot Studio has also added content moderation settings for prompts, giving users more control over harmful-content filtering on managed models. This includes the ability to lower sensitivity settings to help unblock legitimate scenarios in industries like healthcare, insurance, and law enforcement, where default settings may be overly restrictive relative to the content a model is processing.
The Prompt Tool in Copilot Studio now supports Anthropic Claude Opus 4.6 and Claude Sonnet 4.5; those models are in paid experimental preview in the United States. The Claude options give users more choice in matching the right model to their prompts, and further extend Microsoft’s support for, and reliance on, Anthropic technology within its tools.
Additional Updates
Microsoft said it has released the following additional updates to Copilot Studio:
- Evaluation automation APIs, which make it easier to run agent evaluations programmatically and integrate quality checks into continuous integration and continuous delivery (CI/CD) workflows
- Agents for Microsoft Teams meetings can now access real-time meeting transcripts and group chat to support scenarios including answering questions during a meeting
- Support for Model Context Protocol (MCP) and the OpenAI Apps SDK expands how agents connect to external work apps, making it easier to integrate business systems and enable agents to take action across a broader ecosystem
- Additional model support, including xAI’s Grok 4.1 Fast and OpenAI’s GPT-5.3 Thinking and GPT-5.4 Instant, giving customers more options as they optimize experiences for speed, cost, and capability.
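The evaluation automation APIs mentioned above lend themselves to a CI/CD quality gate: run the agent's evaluation suite, then block deployment if any case falls below a score threshold. The sketch below illustrates the gating logic only; the result shape is a stub standing in for an API response, since the actual Copilot Studio endpoint and payload format are not detailed here.

```python
def gate_on_evaluation(results: list[dict], threshold: float = 0.8) -> bool:
    """Return True if every evaluation case meets the score threshold.
    In a real pipeline, `results` would come from Copilot Studio's
    evaluation automation APIs; here the shape is a hypothetical stub."""
    failing = [r for r in results if r["score"] < threshold]
    for r in failing:
        print(f"FAIL {r['case']}: score {r['score']:.2f} < {threshold}")
    return not failing

# Stubbed evaluation results (hypothetical case names and scores).
sample = [
    {"case": "refund-policy-question", "score": 0.92},
    {"case": "escalation-routing", "score": 0.74},
]
if not gate_on_evaluation(sample):
    print("Evaluation gate failed; blocking deployment.")
```

In a CI workflow, a non-zero exit on gate failure would stop the pipeline, which is the kind of automated quality check the APIs are intended to enable.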
For more details on the latest Copilot Studio updates, check out this Microsoft blog.
More Copilot Studio and Multi-Agent Insights:
- Copilot Studio Brings AI Tests Into Alignment With Human Evaluations
- Microsoft Framework Supports Diverse Models, Agents to Drive Complex Workflows
- Copilot Cowork Marshals Corporate Intelligence, AI to Execute Complex Tasks




