In episode 109 of the AI/Hyperautomation Minute, Toni Witt breaks down five implications of the “black box problem” that companies could face without explainability in their artificial intelligence (AI) systems.
This episode is sponsored by Acceleration Economy’s Generative AI Digital Summit. View the event, which features practitioner and platform insights on how solutions such as ChatGPT will impact the future of work, customer experience, data strategy, cybersecurity, and more, by registering for your free on-demand pass.
Highlights
00:30 — The black box problem arises when it’s not clear how an AI system or machine learning model arrived at a given output for a given input. Many AI systems spit out a prediction or classification without any explanation of how they reached that conclusion or which features they considered.
01:00 — The flip side of this problem is “explainability”: the ability of all stakeholders in your company to know how an AI system is being used and how it makes decisions based on the data. Lacking that explanation can have huge implications for companies.
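To make the black box concrete, here is a minimal sketch assuming a scikit-learn model on synthetic data. Everything in it is illustrative: the episode doesn’t prescribe any library or technique, the feature names are placeholders, and global feature importances are only one rough form of explanation.

```python
# A minimal sketch of the explainability gap, assuming scikit-learn.
# The data and feature names are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train an opaque classifier on synthetic data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# The "black box" output: a prediction with no explanation attached.
print(model.predict(X[:1]))

# One rough form of explainability: global feature importances showing
# which inputs most influenced the model's decisions overall.
for name, weight in zip(["f0", "f1", "f2", "f3"], model.feature_importances_):
    print(f"{name}: {weight:.2f}")
```

The first `print` is all a stakeholder sees from a black-box system; the importance loop is the kind of additional signal that explainability work is meant to surface.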
Which companies are the most important vendors in AI and hyperautomation? Check out the Acceleration Economy AI/Hyperautomation Top 10 Shortlist.
01:22 — The first implication concerns auditability and compliance. Explainability is vital to verifying that an AI system is free of bias or other elements that would lead to discrimination in decision-making processes (see the fairness-check sketch after these highlights).
01:50 — The second implication is difficulty with internal adoption: without explainability, teams will be more reluctant to adopt AI tooling in their workflows.
02:14 — The third problem is the potential loss of customer trust. Customers may have questions about how your company’s underlying algorithms make decisions; if you aren’t able to provide answers, customer trust may decrease.
02:41 — The fourth implication is debugging and guiding interventions. This one primarily affects engineers and developers, who need to understand how an AI system functions before they can debug it or intervene effectively.
02:58 — The fifth problem is that it’s “harder to make business decisions and evaluate if a model is actually performing against your business objectives if you have no idea if it’s actually working or not,” Toni explains.
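Picking up the auditability point from the first implication, here is a hedged sketch of the simplest kind of fairness check an audit might start with. The predictions, the group labels, and the 80% “four-fifths rule” threshold are all illustrative assumptions, not anything discussed in the episode.

```python
# A hedged sketch of a basic fairness audit. The predictions, group labels,
# and the 80% "four-fifths rule" threshold are illustrative assumptions.
import numpy as np

preds = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions (1 = approve)
group = np.array(list("aaaaabbbbb"))               # protected attribute

rate_a = preds[group == "a"].mean()  # selection rate for group a
rate_b = preds[group == "b"].mean()  # selection rate for group b

# Compare selection rates; a ratio below 0.8 is a common red flag.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"a={rate_a:.2f}, b={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Possible disparate impact; explanations are needed to diagnose why.")
```

A check like this can flag *that* outcomes differ across groups, but only explainability into which features drove those decisions can show *why*, which is exactly the gap the black box problem creates.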
Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel.