In episode 92 of the AI/Hyperautomation Minute, Toni Witt reports on the features and capabilities of GPT-4, the recently released next generation of GPT from OpenAI.
This episode is sponsored by Acceleration Economy’s Digital CIO Summit, taking place April 4-6. Register for the free event here. Tune in to hear CIO practitioners discuss their modernization and growth strategies.
Highlights
00:32 — OpenAI recently released GPT-4, the next generation after GPT-3 and GPT-3.5, the underlying model behind ChatGPT. The new model has been in development for two years, and its training data only runs up to 2021.
01:26 — An important concept with GPT-4 is that it’s multimodal, which OpenAI explains on its website. The new model can take in both text and image prompts and generate an output based on the context provided. For example, users can input an image of ingredients and ask the program to suggest a recipe using those items.
02:06 — Another example: users could prompt GPT-4 by uploading a drawing that outlines a website design and asking it to write front-end code that produces a website matching the drawing. This will be helpful for designers who can’t easily turn their concepts into code.
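OpenAI exposes this multimodal capability through its API as well as through ChatGPT. Below is a minimal Python sketch of the ingredients-to-recipe example using OpenAI’s official Python SDK; the model name, image URL, and prompt wording are illustrative assumptions, and image input requires access to an image-capable GPT-4 variant.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Send a text instruction plus an image of ingredients in one request.
# "gpt-4o" and the image URL are placeholders; substitute any
# image-capable GPT-4 variant available to your account.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Suggest a recipe I can make with these ingredients."},
                {"type": "image_url", "image_url": {"url": "https://example.com/ingredients.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```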
Which companies are the most important vendors in AI and hyperautomation? Check out the Acceleration Economy AI/Hyperautomation Top 10 Shortlist.
02:40 — Developers spent half a year making GPT-4 safer. “They say that it’s 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5,” Toni reports, citing data from the OpenAI site.
02:58 — GPT-4 may still face some of the same issues as previous versions, however. One is hallucination: producing a false response written so convincingly that it appears accurate. It also lacks knowledge past 2021; the training data only goes up to that year because of the high cost of retraining the model and incorporating real-time data, Toni emphasizes.
03:31 — General advisory and financial advisory services are major use cases for GPT-4. Toni shares how he used ChatGPT while doing his taxes this year, as this iteration has improved math skills. “Don’t blindly take the number that they spit out and put it on your return, but what you can do is ask it, ‘Walk me step-by-step through the process,’” he says.
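For readers who want to try the same step-by-step approach programmatically, here is a minimal Python sketch using OpenAI’s Chat Completions API; the tax question, dollar amount, and model name are hypothetical and not from the episode, and any output should be checked against official guidance rather than copied onto a return.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical example: ask for the reasoning behind a tax estimate,
# not just a final number, so each step can be verified by hand.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": (
                "Walk me step-by-step through estimating U.S. self-employment tax "
                "on $40,000 of net income for 2022. Show every calculation."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```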
04:15 — Tutoring, education, and consulting are other areas where GPT can provide personalized responses, which Toni considers to be “really powerful” applications. This tool is already integrated into various sites and products, such as the online education platform Khan Academy.
04:44 — Transparency remains a problem, as OpenAI hasn’t disclosed how the training data was selected.
Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel: