
Business leaders making early AI bets and then iterating rapidly to optimize those opportunities “are doing much better” than executives who timidly watch from the sidelines, according to one of the world’s most-knowledgeable experts on AI in the enterprise.
Sharing the Snowflake Summit keynote stage last week with Snowflake CEO Sridhar Ramaswamy, OpenAI CEO Sam Altman offered a number of compelling — and bullish — perspectives on the blazing pace of AI innovation and evolution, the business benefits it is helping create, and the pros and cons of being a first-mover versus an indecisive “let’s wait and see” follower.
While the Snowflake event featured an expansive array of new-product announcements and customer/partner stories, the Altman/Ramaswamy keynote was the conference’s highlight because Altman sits at the cutting edge of not only AI research but also enterprise AI adoption.
Below, I’ve pulled out what I believe are the key excerpts of the discussion between Ramaswamy and Altman (video of the full Ramaswamy keynote, including the segment featuring Altman, can be viewed here). The discussion was a thought-provoking highlight within a three-day event that will surely extend the market momentum for Snowflake, one of the fastest-growing companies on the Cloud Wars Top 10. For more detail on the company and its AI and data new-product blitz at the event, please check out “Snowflake Follows 34% RPO Spike with AI Data Cloud New Product Blitz.”
Some powerful insights from Altman:
1. How fast should CEOs move on AI? “I think just do it — there’s still a lot of hesitancy; the models are changing so fast, and there’s all this reason to wait for the next model, or you’re going to wait and see if this is going to shake out this way or that, or if you should build,” Altman said at the top of the conversation.
“But as a general principle of technology, when things are changing quickly, the companies that have the quickest iteration speed, and sort of make the cost of making mistakes the lowest and the learning rate the highest, win. And certainly what we’re seeing with enterprises and AI is the people that are making the early bets and iterating very quickly are doing much better than the people that are waiting to see how it’s all going to shake out.”
Snowflake’s Ramaswamy heartily concurred: “I can’t agree with that more, and the thing that I’ll add on is curiosity. I think there’s so much that we take for granted about how things used to work that just aren’t true anymore…. OpenAI and Snowflake have made the cost of experimenting very, very low — you can run lots of little experiments, get value from them, and build on that strength.”
2. Today’s AI technology and capability is vastly superior to last year’s. Commenting on his belief that today’s AI race will go to the bold and the speedy, Altman added, “Interestingly, I wouldn’t have said quite the same thing last year. I would have said the same thing to a startup last year, but to a big enterprise, I would have said something like, ‘You know, you can experiment a little bit, but this (AI technology) is maybe not totally ready for production use in most cases.’
“And that has really changed,” Altman continued. “Our enterprise business has gone like this (points hand and fingers straight up), and we talk to big companies who are now really using us for a lot of stuff and say, ‘What’s so different?’ And we’re like, ‘Well, did it just take a while to figure it out?’ And they reply that that was part of it, but it now just works so much more reliably — ‘It can do all these things that I just didn’t think were going to be possible.'”
3. The rapid increase in model performance will continue. “It does seem like sometime over the last year, we hit a real inflection point for the usability of these models,” Altman said. “Now, an interesting question is what we’ll say differently next year. And next year, I think we’ll be at the point where you can not only use a system to sort of automate some business processes or build these new products and services, but you’ll really be able to say, ‘I have this hugely important problem in my business. I will throw a ton of compute at it to try to solve it, and the models next year will be able to go figure out things that teams of people on their own just can’t do.’ And the companies that have gotten experience with these models are well-positioned for a world where they can say, ‘AI system, go redo my most critical project. And here’s a ton of compute — think really hard, and just figure out the answer.’ People who are ready for that — I think we’ll have another big step change next year.”
4. Are your managers prepared to manage a very new type of work? “So you hear from companies that are building agents to automate most of their customer support, or they’re up on sales or any number of other things. And you hear people talk about how their job now is to assign work to a bunch of agents: look at the quality, figure out how it fits together, give feedback. And it sounds a lot like how they work with a team of still relatively junior employees. And that’s here today,” Altman said. “It’s not evenly distributed yet, but that’s happening. I would bet next year that in some limited cases, at least in some small ways, we start to see agents that can help us discover new knowledge or can figure out solutions to business problems that are kind of very non-trivial.
“Right now, it’s very much in the category of, okay, if you’ve got some repetitive cognitive work, we can automate it at a kind of a low level on a short time-horizon. And as that expands to longer time-horizons and at higher and higher levels, at some point you get an AI scientist, an AI agent that can go discover new science, and that will be kind of a significant moment in the world.”
Note from Bob: I’m nominating “that will be kind of a significant moment in the world” for the prize of Best Understatement of the Year.

5. The hunt for AGI — and why it doesn’t really matter. “If we could go back exactly five years — that was just before we launched GPT-3, and so the world had not yet seen a good language model — and if you could go back to that moment and show someone today’s version of ChatGPT, I think most people would say that’s AGI for sure,” Altman said.
“People are great at adjusting our expectations, which I think is a wonderful thing about humanity. I think mostly the question of what AGI is doesn’t matter. It’s a term that people define differently — the same person often will define it differently. The thing that matters is that the rate of progress that we’ve seen year over year the last five years should continue for at least the next five and probably well beyond that.
“And so whether you declare the AGI victory in 2025 or 2026 or 2028, and whether you declare the superintelligence victory in 2028 or 2030 or 2032 is way less important than this one long, beautiful, shockingly smooth, exponential arc [emphasis added]. All of that said, to me, a system that can either autonomously discover new science or be such an incredible tool to people that our rate of scientific discovery in the world quadruples or something like that, that would satisfy any test I could imagine for an AGI.”
Ramaswamy jumped in with some equally philosophical considerations of his own. “I think it becomes a matter of debate,” Ramaswamy said. “Like Sam is saying, I think sometimes it’s also a philosophical question that I would liken to, I don’t know, does a submarine swim? At one level it’s absurd, but of course it does. And so I see these models as having incredible capabilities that will likely cause any person looking at what things are going to be like in 2030 to just declare, ‘That’s AGI.’ So to me, it’s the rate of progress that is truly astonishing, and I sincerely believe that many great things are going to come out of it.”
6. Buckle up tight — the next two years will make the last two seem boring. “Yeah, the models over the next year or two are going to be quite breathtaking, really. There’s a lot of progress ahead of us, a lot of improvement to come. And like we have seen in the previous big jumps, you know, from GPT-3 to GPT-4, businesses can just do things that totally were impossible with the previous generation of models,” Altman said.
“And so what an enterprise will be able to do — just, like, give it your hardest problem. If you’re a chip-design company, you’ll say, ‘Go design me a better chip than I could have possibly had before.’ If you’re a biotech company, trying to cure some disease, you’ll just say, ‘Go work on this for me.’
“And that’s not so far away! The ability of these models to understand all the context you want to possibly give them, connect to every tool, every system, whatever, and then go think really hard, like really brilliant reasoning and come back with an answer and have enough robustness that you can trust them to go off and do some work autonomously like that — not so long ago, I don’t know if I thought that would feel so close. But now, it feels really close.”
7. What would you do with 1000X more compute power than you have today? Moderator Sarah Guo asked that question, and Altman immediately gave a reply that’s both mind-bending and hilarious. “Maybe the real answer is I would ask it to work super hard on AI research to figure out how to build much better models, and then ask that much better model what we should do with all the extra 1000X compute,” Altman said, drawing a big laugh from the crowd of about 15,000 people.
“If you let the model reason more, if you try more times on a really hard problem, you can already get much better answers. And a business that says, ‘I’m going to throw 1000X more compute at every problem’ would get some amazing results. Now, you’re not literally going to do that and you don’t have 1000X compute. But the fact that that is now possible today does point to an interesting thing people could do today, which is say, ‘Okay, I’m going to really treat this as a power law and be willing to try a lot more compute for my hardest problems or most valuable things.'”
Building on that, Ramaswamy cited the Arnold Project, which he likened to the massive DNA-sequencing project of about 20 years ago. “The Arnold Project is about figuring out RNA expression,” Ramaswamy said. “It turns out they control pretty much how proteins work in our body. And the breakthrough there, knowing exactly how RNA controls DNA expression, it’s likely to solve a ton of diseases and put humanity forward so much more. That would be a cool use of basically the equivalent of the DNA project done with language models. That’d be a pretty cool outcome if you have a lot of compute to throw at something inspiring.”
Final Thought
So here’s the plan as seen by arguably the most-knowledgeable business-AI executive in the world:
- Be bold, go fast.
- Be supremely confident that AI technology will get much better very rapidly.
- Ask yourself if, during a time like this, it’s a good idea to bide your time.
- Think very hard about the massive changes coming to your workforce.
- Don’t get caught up in AGI fever.
- Prepare your organization to begin operating in a business environment unlike anything you — or anyone else — has ever experienced.
- Think big, and enjoy the adventure!