In recent months, agentic AI has generated a great deal of activity. Most of it has focused on the end product: the AI agent or agent suite. The underlying technologies that power these advances, however, are just as critical.
To that end, Google has unveiled its latest advancement in support of agentic model development. These are models adapted to agentic AI tasks, with greater environmental awareness, foresight, and capacity for automated action. Specifically, Gemini 2.0 is described as an “AI model for the agentic era.” Here’s what to expect.
Gemini 2.0 Flash
Google has released an experimental version of Gemini 2.0 Flash, the first in the Gemini 2.0 model family. “Today we’re excited to launch our next era of models built for this new agentic era: introducing Gemini 2.0, our most capable model yet,” said Google and Alphabet CEO Sundar Pichai. “With new advances in multimodality — like native image and audio output — and native tool use, it will enable us to build new AI agents that bring us closer to our vision of a universal assistant.”
With Gemini 2.0 Flash, Google is building on the progress made with 1.5 Flash. In addition to supporting multimodal inputs including images, video, and audio, 2.0 Flash supports multimodal outputs such as natively generated images combined with text and multilingual audio from steerable text-to-speech (TTS).
In terms of how these enhancements can boost the capabilities of agentic AI, Gemini 2.0 Flash boasts:
- Native user-interface action capabilities
- Multimodal reasoning
- Long context understanding
- Complex instruction following and planning
- Compositional function-calling
- Native tool use
- Improved performance
Collectively, these capabilities enable better agentic AI. Developers can access Gemini 2.0 Flash as an experimental model through the Gemini API in Google AI Studio and Vertex AI.
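For developers, that access looks like a standard SDK call. The sketch below is a non-authoritative illustration, assuming the `google-generativeai` Python SDK (`pip install google-generativeai`) and the experimental model id `gemini-2.0-flash-exp`; the `get_weather` tool and `build_payload` helper are hypothetical names introduced here to show how native tool use and function calling might be wired up.

```python
import os

MODEL_NAME = "gemini-2.0-flash-exp"  # assumed id for the experimental model

def get_weather(city: str) -> str:
    """Toy tool the model may invoke via automatic function calling."""
    return f"Sunny in {city}"  # placeholder; a real tool would call an API

def build_payload(prompt: str) -> dict:
    """Assemble a minimal text request body for the Gemini API."""
    return {
        "model": MODEL_NAME,
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
    }

def run_demo() -> None:
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    # Passing Python functions as tools lets the SDK expose them to the
    # model, sketching the compositional function-calling flow.
    model = genai.GenerativeModel(MODEL_NAME, tools=[get_weather])
    chat = model.start_chat(enable_automatic_function_calling=True)
    reply = chat.send_message("What's the weather in San Diego?")
    print(reply.text)

if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    run_demo()
```

The network call is gated behind a `GEMINI_API_KEY` environment variable, so the file imports cleanly without credentials; only the payload and tool helpers run unconditionally.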
General Availability of Trillium
Google recently announced the general availability of Trillium, its latest generation of Tensor Processing Units (TPUs). Trillium TPUs were instrumental in training Gemini 2.0 and are a key component of Google Cloud’s AI Hypercomputer.
With this availability, Google Cloud customers can now use Trillium TPUs to accelerate their own AI initiatives. “At AI21, we constantly strive to enhance the performance and efficiency of our Mamba and Jamba language models,” said Barak Lenz, CTO of AI21 Labs, a Google Cloud customer. “As long-time users of TPUs since v4, we’re incredibly impressed with the capabilities of Google Cloud’s Trillium.
“The advancements in scale, speed, and cost-efficiency are significant. We believe Trillium will be essential in accelerating the development of our next generation of sophisticated language models, enabling us to deliver even more powerful and accessible AI solutions to our customers.”
Closing Thoughts
Google has long been recognized for delivering groundbreaking technology. The development of Gemini as a foundation for agentic AI is, in many ways, a logical next step in the company’s journey.
Advancements like these will continue to help Google stand out among its competitors. Factor in Google Cloud and the delivery opportunities its services provide, and Google has positioned itself strongly in the field of agentic AI.