In episode 84 of the AI/Hyperautomation Minute, part 1 of a special extended minute, Aaron Back hosts Sudha Ranganathan of LinkedIn for a conversation on how artificial intelligence (AI) is impacting the creative space and three dependencies that influence it.
This episode is sponsored by Acceleration Economy’s Digital CIO Summit, taking place April 4-6. Register for the free event here. Tune in to hear CIO practitioners discuss their modernization and growth strategies.
Highlights
01:41 — With artificial intelligence (AI) “taking the world by storm,” Aaron asks Ranganathan for her thoughts on AI disruption in creative spaces. She shares her perspective from operating in marketing and talent management. Ranganathan expects “a slow and steady influx of AI into a lot more,” noting as one example that we have not “spoken enough about all of the potential use cases” of ChatGPT yet.
04:00 — Given how much these tools can already do, Ranganathan suggests, “imagine how much more is coming our way.” However, she identifies several dependencies that influence how deep and far that disruption goes.
04:11 — The first is to remember that all AI is based on training data. Ranganathan elaborates, “The extent to which it can take over somebody’s job depends on the extent to which what they’ve done so far can be fed into the AI, so it can be trained to do that similar job in the future.”
04:27 — The second dependency is that “the more confidential the training data is, the harder it is going to be or the more resistant an organization will be to feed that into an AI system.”
05:00 — Ranganathan uses an analogy to describe the third dependency, especially as it relates to the creative fields. Consumers want to digest content that a real person wrote or created. “Content is going to have to more clearly declare when they use AI versus when a person wrote everything — that’s just basic integrity,” she says.
06:06 — Ranganathan sums up the three dependencies into three questions:
- How much training data is available?
- How confidential is it?
- What’s the threshold at which you start to move past what’s written with AI to material that people themselves wrote?
06:20 — Aaron recalls an interview he watched with Satya Nadella, CEO of Microsoft, about search results and calling out reference sources. “They’re actually having little clickable items, so you can see where that AI was pulling the sources from,” Aaron explains, as users can dig into who wrote an article or if it was pulled from something like a video transcript.
07:03 — Some areas and topics will need human intervention, such as critical race theory or other human experiences that AI doesn’t really have deep context for. AI can be used to launch a framework, for instance, but a person would need to layer lived experience onto the AI-generated ideas. AI can “be an accompaniment to things, but not overtake,” Aaron clarifies.
07:54 — Another unique, creative example that Aaron shares is of a theater group using AI. The group put parameters into an AI model — for different personas for the actors and backstories of characters — then asked it to write a script for a show that they performed in front of a live audience.
08:41 — The fear, uncertainty, and doubt around AI “intruding” on certain areas will require us to take a step back to rethink things. Aaron thinks that this is something that comes along with every stage of new technology.
Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel: