Just a few years ago, the chatbot was viewed mostly as a clunky add-on to commerce, banking, and other support services that, in many cases, extended time to resolution. Today, the difference between traditional and GenAI-powered chatbots is like night and day.
The term chatbot has become almost obsolete, as the intelligence and capabilities of AI support tools now extend far beyond Q&A. Today, we have copilots, agents, and assistants. Yet the rapid pace of change in the AI industry means companies are already looking at the next generation of AI assistants. One such company is NinjaTech AI, which has teamed up with AWS to realize its vision.
Building Autonomous Agents
NinjaTech AI is focused on building next-generation AI agents that carry out custom, complex tasks autonomously, saving users considerable time and money. The company has announced the launch of Ninja, a personal AI product whose agents handle a range of tasks: scheduling meetings, conducting research, providing advice, and assisting developers with code generation and debugging.
AWS is powering Ninja with its purpose-built ML chips, Trainium and Inferentia2, and NinjaTech AI is also leveraging the Amazon SageMaker ML service to build, train, scale, and deploy its custom AI agents. AWS cloud services also underpin Ninja's functionality, enabling users to assign multiple tasks at once without waiting for a previous request to complete.
“Working with AWS’s Annapurna Labs has been a genuine game-changer for NinjaTech AI. The power and flexibility of Trainium & Inferentia2 chips for our reinforcement-learning AI agents far exceeded our expectations. They integrate easily and can elastically scale to thousands of nodes via Amazon SageMaker,” said Babak Pahlavan, founder and CEO of NinjaTech AI.
“These next-generation AWS-designed chips natively support the larger 70B variants of the latest popular open-source models like Llama 3, while saving us up to 80% in total costs and giving us 60% more energy efficiency compared to similar GPUs. In addition to the technology itself, the collaborative technical support from the AWS team has made an enormous difference as we build deep tech.”
Closing Thoughts
AWS is providing chips that not only support the AI industry broadly but specifically target the AI agent ecosystem. The company is addressing the complexity and expense of the compute facing companies that develop AI agents, which stems from the need for highly customized large language models (LLMs) as well as complex modifications and training schedules.
AWS chips enable training bursts that scale easily to the many thousands of nodes involved in training cycles. Together, Trainium (for training) and Inferentia2 (for inference) provide a path to faster model development at lower cost.
Adding in the capabilities of Amazon SageMaker yields a full package that enables not only the efficient development of AI agents but also the rapid, low-cost training and deployment of these products. It’s a powerful offering from AWS and one that could earn go-to status among companies building the next wave of AI assistants.
As Pahlavan puts it, “Every generative AI company should be considering AWS if they want access to on-demand AI chips with incredible flexibility and speed.”