In episode 20 of The Cutting Edge Podcast, Leonard Lee discusses generative artificial intelligence (AI) and specifically what edge computing can do for generative AI (gen AI) applications.
Highlights
00:43 — Generative AI is an emerging area of technology that has been creating a craze since the release of ChatGPT, Leonard says.
02:03 — Gen AI services, including OpenAI's DALL-E, Stable Diffusion, and Midjourney, have spurred hundreds of new applications in a matter of months. “While these generative AI applications can do some cool stuff and demonstrate promise, they’re also quite limited in their capabilities simply because the large language models (LLMs) have inherent limitations,” Leonard says.
02:39 — These limitations include hallucinations, or “confidently presenting fabricated outcomes,” as well as consumer privacy and enterprise confidentiality issues, which are raising concerns among chief information security officers (CISOs) and CIOs.
04:11 — Edge-native technologies will “present an opportunity for more privacy-first and confidential gen AI systems and architectures,” Leonard says.
05:07 — “Edge gen AI could…help enterprises and consumer software companies leverage the power of this emerging technology in a secure and confidential way while minimizing exposure to public generative AI tools that pose an unknown risk to enterprises and consumers.”
05:33 — Leonard believes for gen AI applications to yield business value, they must be “managed and curated in a purposeful way that leverages the specific capabilities and functions that these tools will provide.” They must also “be secured and preserve organizational and personal confidentiality.”
06:03 — For the C-suite, generative AI generates a lot of excitement — and a lot of concern, because of the potential threat to enterprise confidentiality and security.
06:38 — Legal teams must also start developing postures, policies, and protocols to deal with the bevy of legal concerns that gen AI poses.
06:51 — Organizations may wish to hire a chief privacy officer “who considers and evaluates the legal ramifications and recourse and fosters responsible AI inside and outside the organization.”