In episode 105 of the AI/Hyperautomation Minute, Toni Witt provides clarity on generative AI, its underlying technology — the GPT (generative pre-trained transformer) machine learning model — and how it’s evolving.
This episode is sponsored by Acceleration Economy’s Generative AI Digital Summit. View the event, which features practitioner and platform insights on how solutions such as ChatGPT will impact the future of work, customer experience, data strategy, cybersecurity, and more, by registering for your free on-demand pass.
Highlights
00:26 — While there are many conversations about generative AI, those outside the tech field may still misunderstand the underlying technology and how it’s evolving.
01:03 — Toni clarifies that ChatGPT is a web-based tool that gives access to GPT-3, the underlying machine learning model. GPT-3 is a word predictor. It’s a deep learning model, which makes its capabilities essentially a subset of what machine learning and, more broadly, AI can do.
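To make "word predictor" concrete, here is a minimal sketch of next-token prediction, the task GPT models are trained on. It assumes the Hugging Face transformers library and uses the small open GPT-2 checkpoint as a stand-in, since GPT-3 itself is only reachable through OpenAI's API; the prompt is made up.

```python
# Minimal next-word prediction sketch using GPT-2 as a stand-in for GPT-3.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Generative AI is changing the future of"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits       # scores over the whole vocabulary

next_token_scores = logits[0, -1]         # scores for the position after the prompt
top = torch.topk(next_token_scores, k=5)  # the five most likely next tokens

for token_id, score in zip(top.indices, top.values):
    print(tokenizer.decode(int(token_id)), float(score))
```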
01:37 — Machine learning started with prediction and classification. “Most AI applications that give returns to companies are these classification or predictor models,” Toni explains. The Netflix recommender algorithm is an example: it uses data about the movies and shows you’ve liked in the past to recommend what to watch next.
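As a toy illustration of that classification/prediction pattern (not Netflix’s actual algorithm), the sketch below assumes scikit-learn and invents a tiny watch history: learn from titles a user liked in the past, then predict whether they’ll like a new one.

```python
# Toy "recommender as classifier" sketch. Assumes: pip install scikit-learn
from sklearn.linear_model import LogisticRegression

# Each row: [runtime_minutes, is_comedy, is_documentary]; labels: 1 = liked, 0 = skipped
watch_history = [
    [95, 1, 0],
    [110, 1, 0],
    [140, 0, 1],
    [85, 1, 0],
    [150, 0, 1],
    [100, 0, 0],
]
liked = [1, 1, 0, 1, 0, 0]

model = LogisticRegression().fit(watch_history, liked)

new_title = [[90, 1, 0]]               # a short comedy the user hasn't seen yet
print(model.predict(new_title))        # predicted class: like it or not
print(model.predict_proba(new_title))  # probability behind the recommendation
```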
02:12 — GPT-3 is a transformer model. “There’s a pretty big debate going on whether these transformer models are going to be the ones that reach what you might call AGI, or artificial general intelligence, that basically matches the intelligence level of a human,” Toni says.
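For readers curious what makes a model a “transformer,” here is a minimal sketch of scaled dot-product attention, the core operation of the architecture. It assumes PyTorch, with toy dimensions and random weights standing in for learned ones.

```python
# Scaled dot-product attention with toy dimensions. Assumes: pip install torch
import torch
import torch.nn.functional as F

seq_len, d_model = 4, 8            # 4 tokens, 8-dimensional embeddings
x = torch.randn(seq_len, d_model)  # stand-in token embeddings

W_q = torch.randn(d_model, d_model)  # learned projections (random here)
W_k = torch.randn(d_model, d_model)
W_v = torch.randn(d_model, d_model)

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / d_model ** 0.5    # how strongly each token attends to the others
weights = F.softmax(scores, dim=-1)
output = weights @ V                 # context-aware representation of each token

print(weights.shape, output.shape)   # (4, 4) attention map, (4, 8) outputs
```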
02:57 — Sam Altman, CEO of OpenAI, has pointed to a trend toward “base-level models.” The GPT series is already an indication that models will help train other models. “Think of it like a tech stack,” says Toni.
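One way to picture that stack: a general pre-trained base model sits at the bottom, and a small task-specific layer is added on top. The sketch below assumes the Hugging Face transformers library and the open distilbert-base-uncased checkpoint as the base; the sentiment-classification task and input sentence are made up.

```python
# "Tech stack" sketch: a pre-trained base model plus a new task-specific head.
# Assumes: pip install torch transformers
from transformers import AutoTokenizer, AutoModelForSequenceClassification

base = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)

# The pre-trained weights come from the base model; the classification head on top
# starts untrained and would be fine-tuned on a (hypothetical) labeled dataset.
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

inputs = tokenizer("This product made my workflow so much faster", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # scores are meaningless until the head is fine-tuned
```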
Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel: