The rapid pace of GenAI development is leading to an increase in partnerships, co-creation initiatives, and co-selling opportunities between software giants and others in the AI ecosystem. Case in point: Snowflake has joined forces with Meta to host and optimize the latter’s newest and most capable open-source family of large language models (LLMs), Llama 3.1.
Meta announced the launch of the Llama 3.1 model family in late July this year. Meta describes Llama 3.1 405B as “the first openly available model that rivals the top AI models when it comes to state-of-the-art capabilities in general knowledge, steerability, math, tool use, and multilingual translation.”
Optimized by Snowflake
Coinciding with the launch of Llama 3.1, Snowflake announced that it would be hosting the LLMs in Snowflake Cortex AI. The aim is to make it easier for organizations to use open-source models to build scalable AI applications.
Among the models being offered is Llama 3.1 405B, Meta’s largest and most capable open-source LLM. Snowflake’s AI Research Team has optimized the model and open-sourced its Massive LLM Inference and Fine-Tuning System Optimization Stack. Now, users within Cortex AI can fine-tune 405B with just a single processing node, dramatically cutting costs.
“Snowflake’s world-class AI Research Team is blazing a trail for how enterprises and the open source community can harness state-of-the-art open models like Llama 3.1 405B for inference and fine-tuning in a way that maximizes efficiency,” said Vivek Raghunathan, VP of AI Engineering, Snowflake.
Along with the announcement regarding Llama 3.1, Snowflake also revealed the general availability of Snowflake Cortex Guard. The feature leverages Meta’s Llama Guard 2, helping secure applications built in Cortex AI using Llama 3.1, as well as LLMs from AI21 Labs, Google, Mistral AI, Reka, and Snowflake.
Democratizing GenAI development
Snowflake’s partnership with Meta enables its customers to quickly and easily access, fine-tune, and deploy Llama 3.1 in Snowflake’s AI Data Cloud. “By harnessing Meta’s Llama models within Snowflake Cortex AI, we’re giving our customers access to the latest open source LLMs,” said Matthew Scullion, CEO and co-founder of Matillion, which offers the Data Productivity Cloud. “The upcoming addition of Llama 3.1 gives our team and users even more choice and flexibility to access the large language models that suit use cases best and stay on the cutting-edge of AI innovation. Llama 3.1 within Snowflake Cortex AI will be immediately available with Matillion on Snowflake’s launch day.”
Raghunathan said Snowflake isn’t just making Meta’s models available directly to customers through Snowflake Cortex AI. “We’re arming enterprises and the AI community with new research and open source code that supports 128K context windows, multi-node inference, pipeline parallelism, 8-bit floating point quantization, and more to advance AI for the broader ecosystem.”
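To make one of those techniques concrete: 8-bit quantization stores model weights in 8 bits plus a scale factor, halving memory relative to 16-bit formats. The sketch below is purely illustrative and is not Snowflake’s implementation; it uses simple per-tensor int8 rounding rather than a true FP8 (e.g., E4M3) format, but the core quantize-and-rescale idea is the same.

```python
import numpy as np

def quantize_8bit(weights):
    """Map float32 weights to int8 plus a per-tensor scale.

    Illustrative only: production FP8 schemes differ in detail,
    but the principle -- 8 bits per value plus a shared scale,
    cutting memory roughly in half vs. FP16 -- is the same.
    """
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_8bit(q, scale):
    """Recover approximate float32 weights from the int8 values."""
    return q.astype(np.float32) * scale

# Example: quantize a random weight tensor and measure the error.
weights = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale = quantize_8bit(weights)
recovered = dequantize_8bit(q, scale)
max_error = np.abs(weights - recovered).max()  # bounded by scale / 2
```

The trade-off this illustrates is why quantization matters at 405B-parameter scale: the int8 tensor occupies a quarter of the float32 original (half of a float16 copy), at the cost of a small, bounded rounding error per weight.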
Closing thoughts
Earlier this year, we reported on the release of Snowflake’s Open-Source Arctic LLM. At the time, Sridhar Ramaswamy, CEO of Snowflake, said, “By delivering industry-leading intelligence and efficiency in a truly open way to the AI community, we are furthering the frontiers of what open-source AI can do. Our research with Arctic will significantly enhance our capability to deliver reliable, efficient AI to our customers.”
The combined effect of Llama 3.1’s availability on Cortex AI and the open-sourcing of Snowflake’s state-of-the-art fine-tuning and inference systems propels Snowflake to the forefront of open-source AI development. As the AI industry continues to mature, this position is likely to become increasingly contested.
Why? Because more and more tech companies are working with open-source communities to provide what customers desire: world-class AI infrastructure from a trusted vendor with the flexibility to adapt AI applications to specific business use cases.