
Welcome to the AI Agent & Copilot Podcast, analyzing the latest AI Copilot and agent developments from Microsoft and its partners, delving into customer use cases, and exploring how AI plus the Cloud helps customers reimagine their business. In this episode, Tom Smith speaks with AIS Chief Technology Officer Brent Wodicka, following the Microsoft partner firm’s recent release of a report on AI literacy and the future of work.
Highlights
Purpose of the AIS Report
Brent explains the need for the report, emphasizing that AI literacy (access his presentation on the topic at AI Agent & Copilot Summit here) is essential for successful engagements and for extracting value from AI technology. The report aims to provide insights into AI literacy, data management, and the risks associated with AI agents.
Data Management and Chunking
He explains the concept of chunking: breaking content into smaller pieces so that AI model responses can be grounded in a specific domain or business context. By feeding the system only the most relevant, up-to-date data, chunking helps optimize AI performance and reduces hallucinations and latency.
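As a rough illustration of the chunking idea Wodicka describes, the sketch below splits a document into overlapping chunks and picks the ones most relevant to a query before they are placed into a model prompt. The chunk size, overlap, and keyword-overlap scoring are illustrative assumptions, not details from the AIS report, where a production pipeline would typically use embedding-based retrieval.

```python
# Minimal chunk-and-retrieve sketch (sizes, overlap, and scoring are
# illustrative assumptions, not taken from the AIS report).

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split a document into overlapping word-based chunks."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_size])
        if chunk:
            chunks.append(chunk)
    return chunks

def top_chunks(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank chunks by naive keyword overlap with the query (a stand-in
    for embedding similarity in a real retrieval pipeline)."""
    query_terms = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(query_terms & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

# The selected chunks would then be inserted into the prompt so the model's
# answer stays grounded in current, domain-specific content.
```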
Risks Associated with AI Agents
Because agents pursue goals, reason across multiple steps, and use tools beyond simple prompt-and-response flows, they present unique risks, including agent collision, in which multiple agents conflict with or undermine each other, as well as a broader attack surface. Wodicka cites the example of a coding agent that creates its own execution environment and forks repositories, expanding the attack surface and heightening the need for robust security measures.

Operationalizing AI: Key Considerations
Wodicka emphasizes the importance of avoiding a black-box approach and ensuring close collaboration between tech and business teams. Practical, on-the-job training is crucial for users to adopt AI systems effectively. Data quality is also critical: poor data can stall the transition from proof of concept (POC) to production.
Balancing Non-Critical and Critical Workflows
Companies should pursue both low-risk, non-critical workflows to build confidence and high-impact, critical workflows to demonstrate value. Striking this balance delivers early wins while sustaining the motivation to keep improving and scaling AI implementations. High-impact use cases can also rally employee support and drive adoption.
Measuring Impact
Wodicka highlights the importance of defining baseline metrics at the start of a project and iterating on them throughout the project lifecycle. These metrics confirm that AI implementations are delivering value and provide a basis for continuous improvement and scaling. Key metrics include cycle time, accuracy, cost per successful task completion, and end-user value or satisfaction.
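As a simple illustration of the baseline metrics Wodicka mentions, the sketch below computes cycle time, accuracy, and cost per successful task completion from a hypothetical task log; the field names and sample values are assumptions for illustration, not figures from the report.

```python
# Hypothetical task log; field names and values are illustrative assumptions.
tasks = [
    {"seconds": 42.0, "cost_usd": 0.08, "correct": True},
    {"seconds": 55.0, "cost_usd": 0.11, "correct": False},
    {"seconds": 38.0, "cost_usd": 0.07, "correct": True},
]

successes = [t for t in tasks if t["correct"]]

cycle_time = sum(t["seconds"] for t in tasks) / len(tasks)            # average seconds per task
accuracy = len(successes) / len(tasks)                                # share of correct completions
cost_per_success = sum(t["cost_usd"] for t in tasks) / len(successes) # total spend / successful tasks

print(f"cycle time: {cycle_time:.1f}s, accuracy: {accuracy:.0%}, "
      f"cost per successful task: ${cost_per_success:.3f}")
```

Tracking these figures against the baseline over time is what turns a one-off POC result into evidence for continued scaling.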