
Welcome to the Cloud Wars Minute — your daily cloud news and commentary show. Each episode provides insights and perspectives around the “reimagination machine” that is the cloud.
In today’s Cloud Wars Minute, I explore how AWS is setting the pace in enterprise AI by rapidly deploying Meta’s Llama 4 models.
Highlights
00:04 — AWS has made Meta’s latest models — Llama 4 Scout 17B and Llama 4 Maverick 17B — available through Amazon SageMaker JumpStart, with availability in Amazon Bedrock coming soon. Both models are multimodal — built to process and understand text and images simultaneously — and feature extended context windows.

00:46 — The models adopt a Mixture of Experts, or MoE, architecture — routing each input to only the expert components suited to that task rather than activating the full model — which dramatically boosts performance and efficiency. AWS stands out for the speed and consistency with which it enables customers to access best-of-breed models.
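To make the MoE idea concrete, here is a minimal, hypothetical sketch of top-k expert routing in plain NumPy. This is purely illustrative — it is not Meta's or AWS's implementation, and all names, shapes, and the choice of top-2 routing are assumptions for the example. The key point it demonstrates: a router scores the available experts per input, and only the top-scoring few actually run, so most of the model's parameters stay idle on any given token.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, N_EXPERTS, TOP_K = 8, 4, 2  # assumed toy sizes, not Llama 4's

# Each "expert" is a small feed-forward layer; the router is a linear scorer.
experts = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((DIM, N_EXPERTS)) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_forward(token):
    """Route one token vector through only its top-k experts."""
    scores = softmax(token @ router_w)           # router probability per expert
    top = np.argsort(scores)[-TOP_K:]            # indices of the top-k experts
    weights = scores[top] / scores[top].sum()    # renormalize over chosen experts
    # Only the chosen experts execute; the others are skipped entirely.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
print(out.shape)  # same dimensionality as the input token
```

Here only 2 of the 4 experts run per token, which is why MoE models can carry a large total parameter count while keeping per-token compute — and therefore latency and cost — low.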
01:25 — There is no delay. Once a model is released and proves effective in specific use cases, AWS makes it available to its customers immediately. This responsiveness is becoming an expectation for customers, and it gives AWS a strong competitive advantage.