In this Innovation Profile, which was initially featured at our Generative AI Digital Summit, practitioner analyst Wayne Sadin covers the features of Redis, an Acceleration Economy Data Modernization Top 10 shortlist company.
To hear practitioner and platform insights on how solutions such as ChatGPT will impact the future of work, customer experience, data strategy, and cybersecurity, make sure to register for your on-demand pass to Acceleration Economy’s Generative AI Digital Summit.
Highlights
00:18 — Redis provides features that can improve the performance of large language models (LLMs), like ChatGPT, and other artificial intelligence (AI) and machine learning (ML) workloads.
00:35 — Context length matters for LLMs because this form of generative AI produces outputs by drawing on the context of the recent conversation.
00:54 — Generative AI model builders can use Redis as a vector database to cache historical user interactions, providing an adaptive prompt creation mechanism based on the current context.
Which companies are the most important vendors in data? Check out the Acceleration Economy Data Modernization Top 10 Shortlist.
01:06 — “Natural-sounding conversations depend on data retrieval speed, which is where the high performance of Redis vector database makes sense,” Wayne explains. The Redis ChatGPT Memory Project demonstrates the value of expanded context.
01:24 — Machine learning models often run in real time and require reliable, low-latency, high-throughput data serving at scale. “Redis offers linear scaling, sub-millisecond latency, along with the fault tolerance needed to create and deploy machine learning apps at scale,” says Wayne.
01:51 — One real-world example is iFood, an online food ordering and delivery service that maintains an 80% share of the resilient food delivery market. Redis Enterprise Cloud scales as iFood grows, and Redis on Flash delivers the performance its customers demand.
02:38 — Ekata, a Mastercard company that’s a leader in real-time identity verification and fraud detection, is another example. Its service relies heavily on AI and ML models to improve accuracy and reduce latency. With Redis on Flash, Ekata saves money on DRAM storage.
03:27 — Today’s AI, ML, and generative AI models need significant amounts of computing power and data; these needs are growing fast. Redis Enterprise Cloud and Redis on Flash provide CIOs with a powerful tool to optimize speed, cost, and other attributes.
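The adaptive prompt-creation pattern Wayne describes at 00:54 — cache past user interactions in a vector store, then retrieve the most relevant ones to enrich the current prompt — can be sketched in a few lines. This is a hypothetical, self-contained illustration: the in-memory `ContextCache` class stands in for a Redis vector index, and the toy character-frequency `embed` function stands in for a real embedding model; neither is Redis’s actual API.

```python
import math

def embed(text):
    # Toy embedding: letter-frequency vector (stand-in for a real embedding model).
    vec = [0.0] * 26
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class ContextCache:
    """In-memory stand-in for a vector store of cached user interactions."""
    def __init__(self):
        self.entries = []  # list of (text, embedding) pairs

    def add(self, text):
        self.entries.append((text, embed(text)))

    def retrieve(self, query, k=2):
        # Return the k cached interactions most similar to the query.
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(cache, user_message, k=2):
    # Prepend the most relevant history so the model sees useful context.
    history = cache.retrieve(user_message, k)
    context = "\n".join(f"Previously: {h}" for h in history)
    return f"{context}\nUser: {user_message}"
```

In production, the retrieval step would be a similarity query against a Redis vector index and `embed` would call an embedding model, but the flow — cache interactions, retrieve by similarity, assemble the prompt — is the same idea.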
Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel: