There’s a lot of buzz — and numerous product developments — around the notion of “democratizing” artificial intelligence (AI) to get this compelling technology over hurdles that have impeded adoption and operational deployments. Companies promoting a democratization of AI include (but are by no means limited to):
- Google Cloud with Vertex AI
- Startup Noogata
- RapidMiner (recently acquired by Altair)
The focus on democratizing the technology is happening for good reason: data from IBM’s Institute for Business Value shows that by the end of 2022, 25% of large companies are expected to have moved beyond the pilot phase with AI projects and operationalized their work. That’s a big jump from 9% in 2020, but the flip side is that the remaining 75% are still either piloting or merely considering AI projects.
There’s a lot of work to do to democratize AI, but that’s not deterring the startup neurothink, which is betting on two specific approaches that it says will help take AI mainstream:
- A low-code/no-code approach that lets almost anyone take advantage of AI models
- A focus on multi-cloud technology (specifically from VMware) that it says avoids the complexity of individual cloud providers’ AI tooling and prevents vendor lock-in
neurothink and Multi-Cloud
Minneapolis-based neurothink is seeking beta testers for its AI/ML (machine learning) platform, which it refers to as ML as a Service, or MLaaS. Prospective partners have expressed interest in using the platform for applications that include mapping the ocean’s underwater environment and building offshore wind turbines. Early adopters have included organizations testing autonomous vehicles and researching cures for cancer.
Those are heavy-duty, compute-intensive applications where performance and reliability are critical.
neurothink says it aims to make the needed computing resources “radically accessible”: it intends for ML models to be usable by people without coding experience. The company’s low-code/no-code platform accelerates time to market for applications built on neurothink, and it should also lower the cost of bringing those applications to market.
That’s important because research, including the IBM study cited above, consistently shows that AI adoption is hindered by limited skills and knowledge, high costs, and a lack of tools and platforms.
Multi-cloud support is a core part of the company’s radically accessible vision. Most major cloud platforms provide AI and ML tools, but those tools are complex, and work done on one platform doesn’t easily translate to others, creating a risk of vendor lock-in.
“We have a one-stop shop for everything we need to put together our platform,” said Charles Donly, chief operating officer at neurothink. “With that, we can then make sure it’s secure all the way from the hardware into the container.”
By building on VMware, neurothink was able to launch its service rapidly and cost-effectively; knitting together components from different vendors would have required much more time and money. “The cost would have been two to three times the value of the overall start-up investment,” said Donly. “We’ve been able to offload this integration expertise to VMware.”
neurothink is relying on several VMware platforms and tools as part of its multi-cloud strategy:
- Tanzu: cloud-native application platform for multi-cloud configurations
- Tanzu Kubernetes Grid: makes it easy to install and run multi-cluster Kubernetes environments on any infrastructure
- Carbon Black: consolidates multiple endpoint security capabilities
- Bitfusion: virtualizes hardware accelerators such as graphics processing units (GPUs) into a pool of shared resources that supports AI and ML workloads
- Aria Operations for Applications: ensures constant availability and consistent performance for the service
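To make the portability argument concrete, here is a minimal sketch of how a Tanzu Kubernetes Grid workload cluster is typically declared in a cluster configuration file. The values shown are hypothetical and do not reflect neurothink’s actual setup; the variable names are TKG’s documented configuration keys.

```yaml
# Hypothetical Tanzu Kubernetes Grid cluster config (cluster-config.yaml).
# The same declarative format can target vSphere, AWS, or Azure, which is
# part of what keeps workloads from being tied to one cloud's tooling.
CLUSTER_NAME: ml-training            # hypothetical cluster name
CLUSTER_PLAN: prod                   # "prod" plan provisions a highly available control plane
INFRASTRUCTURE_PROVIDER: vsphere     # swap for aws or azure on other clouds
CONTROL_PLANE_MACHINE_COUNT: 3
WORKER_MACHINE_COUNT: 5
```

Creating the cluster is then a single CLI call (`tanzu cluster create --file cluster-config.yaml`) regardless of which underlying infrastructure the config points at, which is the lock-in-avoidance point neurothink is making.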
In addition to low code/no code and multi-cloud functionality, neurothink aims to build a community of ML developers using its platform to collaborate — from hobbyists to university students to corporate data scientists.
neurothink has four petaflops of peak ML compute capacity, so it can deliver the speed required by the most demanding applications and customers. Its multi-cloud model is key to that performance.
For more exclusive coverage of innovative cloud companies, check out Cloud Wars Horizon.