AI and Copilots

How to Make AI Explainable and Unlock Synergy With Humans

By Toni Witt | June 2, 2023 | 7 Mins Read

You may have heard the word ‘explainability’ or ‘interpretability’ mentioned a few times alongside artificial intelligence (AI). I want to break down this concept and explain why it matters to your organization, as well as provide some high-level strategies to increase explainability as you develop and deploy AI solutions.

Explainability, Explained

Explainability is exactly what it sounds like. It is the ability of all stakeholders in your organization to understand why a machine learning (ML) model came to a certain output given an input, or how a model came to a decision. In a general sense, it’s about knowing what’s going on within the model.

The lack of explainability is often called the ‘black box problem,’ wherein an ML model produces outputs based on the given inputs but the process is unclear. This can occur for a handful of reasons, including:

  • The user lacks technical knowledge
  • Use of a low-quality data set
  • The model architecture doesn’t fit the data set or task
  • The model wasn’t developed and trained properly

If you’re using deep learning networks, the black box problem is also somewhat inherent in how the network self-adjusts its many parameters to produce an output that coheres with the training data set. While you can’t know all the ideal states of parameters beforehand – that’s the magic of training neural networks – it’s important to have a rough understanding of how the model operates if you want to deploy it.

Challenges of the Black Box Problem

The black box problem comes down to the difference between correlation and causation: Even if a model finds an arbitrary correlation between inputs and outputs, and produces ‘useful’ outputs as a result, we like to know why a certain input led to a certain output (i.e., the causation). This is critical in business applications of ML models as well, given that organizations need to know why certain predictions or classifications were made in order to act on them.

The lack of explainability comes with many problems for businesses. Here are just a few:

  • Customers will lose trust in your system. We’ve all received recommendations for YouTube videos or Netflix programs that the underlying AI algorithm thought we would like, only to be left wondering, “Why does it think I would like that?” If that happens too often, customers may lose trust in your product.
  • Employees will lose trust in your system or avoid using it altogether. To illustrate, many sales teams at established companies face this problem. I was speaking recently with a sales manager at Morton Salt, a salt company that started in the mid-19th century, who shared that the company is resistant to building AI into its workflow because employees don’t understand it. Without that understanding, executives and employees would rather rely on the intuition they’ve developed through experience. If your models consistently produce erratic and unexplainable results, or nobody knows how they actually work, you’ll face serious challenges with internal adoption.
  • Auditability and compliance. Imagine a bank using AI to determine the size of a loan given to a customer. If the model accidentally relies on the wrong factors to determine the loan size, like someone’s ethnicity, this runs counter to many anti-discrimination laws. While the loan scenario is mostly ironed out, there are many emerging uses of AI that face similar problems. We need to understand how a model came to a decision to comply with regulation, most of which right now is designed around tackling bias and unfairness in model outcomes.
  • Debugging and guiding interventions. If you walk into the engine room of a large ship, you’ll see many gauges giving an indication of what is going on inside the complex machines all around you. Ship engineers use these gauges to monitor performance and make repairs if necessary. The same needs to be true for developing machine learning models.
  • Harder to make business decisions. Without explainability, it becomes hard to evaluate whether a model and its implementation meet business needs, and what actions to take based on its outputs.

Explainability Is Key

Although it sounds a little wishy-washy, explainability can be built into an AI system very directly. Recently, I was at the HIMSS 2023 Health Tech conference, and one of the speakers presented a computer vision model that predicts whether a spot on your skin is a malignant or benign growth. Explainability is key in this application not only because lives are at stake, but because doctors frequently get involved – a black box doesn’t lend itself to human intervention.

To resolve this issue, the speaker’s team developed a way of producing a relevancy graph that showed the extent to which the ML model considered each pixel when making its decision. Some pixels were bright pink, meaning the model weighed them heavily in its final prediction. This is a basic example of boosting explainability: doctors could use the relevancy graph to see if the model made an error, such as considering a strand of hair or a tattoo when determining malignancy.
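
This kind of pixel-level relevance can be approximated with surprisingly little code. The sketch below uses PyTorch (an assumed choice; the speaker’s actual stack wasn’t named) to compute a gradient-based saliency map: the gradient of the top class score with respect to each input pixel estimates how much that pixel influenced the prediction. The untrained ResNet-18 and random image are placeholders for illustration only.

    import torch
    from torchvision import models

    # Placeholder stand-ins: an untrained ResNet-18 and a random 224x224 RGB "image".
    # In practice these would be the trained dermatology model and a real photo.
    model = models.resnet18()
    model.eval()
    image = torch.rand(1, 3, 224, 224, requires_grad=True)

    # Forward pass, then backpropagate the score of the top predicted class
    # to get the gradient of that score with respect to every input pixel.
    scores = model(image)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()

    # Pixel relevance: gradient magnitude, reduced over the color channels.
    # Large values mark the pixels that most influenced this prediction.
    saliency = image.grad.abs().max(dim=1).values.squeeze()  # shape: (224, 224)
    print(saliency.shape, saliency.max())

A clinician-facing tool would overlay a map like this on the original photo, which is essentially what the relevancy graph above does.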

In this way, explainability is the key to unlocking powerful synergy between humans and AI, which (as of now) remains the best strategy for businesses adopting the technology.

Enterprise-Friendly Explainability

Considering how vital explainability is for businesses adopting AI, here are some enterprise-friendly ways of boosting explainability:

  • Have an AI governance team and ethics board, and use AI frameworks. These measures will align your organization with its values throughout the AI development and deployment process.
  • Choose your models wisely. Without getting into the weeds, some models are easier to interpret than others. Decision trees, logistic regressions, and linear regressions are some of the simplest types of ML models and are very easy to understand (see the sketch after this list). You don’t always need a 100 billion+ parameter mega-model to provide value to your organization.
  • Feature selection. Carefully identify which features of your input data set you want to consider in making predictions or classifications. Check for regulations around which features can be considered.
  • Visualization tools (e.g., the relevancy graph in the healthcare example above)
  • Benchmark models for bias and fairness
  • Use synthetic or alternate data sets
  • Cross-sectional education. Ideally, everyone in your company using or deploying AI systems should be familiar with the basics. This will boost internal adoption and empower everyone to make better decisions using AI. Despite all the talk about AI replacing jobs, the most pressing concern for organizations is re-skilling their existing teams.
  • Work with the Acceleration Economy AI & Hyperautomation Top 10 companies. These companies have been through it all. They can help you every step of the way and offer everything from DIY platforms to white glove professional services.
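
To make the “choose your models wisely” point concrete, here is a minimal sketch using scikit-learn (an assumed tool choice, not one named above). A shallow decision tree gives up some raw accuracy but lets you print its complete rule set and its feature importances, which makes it auditable by non-specialists; the public breast-cancer dataset stands in for your own data.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Public dataset used purely as a stand-in for an organization's own data.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A shallow tree is far less powerful than a giant neural network,
    # but every decision it makes can be read end to end by a stakeholder.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    print(f"Held-out accuracy: {tree.score(X_test, y_test):.2f}")

    # The model's complete rule set, in plain text.
    print(export_text(tree, feature_names=list(X.columns)))

    # Feature importances show which inputs the model actually relied on,
    # a first check that it isn't leaning on a feature it shouldn't consider.
    ranked = sorted(zip(X.columns, tree.feature_importances_),
                    key=lambda p: p[1], reverse=True)
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")

The importance ranking doubles as a quick fairness check: if a feature that regulation says must be ignored shows up near the top, you know before deployment rather than after.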

Final Thoughts

The black box problem is not one you want to face. But in most cases, it’s not a hard one to solve. Usually, it comes down to internal stakeholders lacking knowledge or familiarity with AI, which slows internal adoption, leads to inferior business decisions, or results in biased or malfunctioning models.

Explainability goes hand-in-hand with the democratization of AI. While AI development and use were limited to data scientists even a decade ago, we now see an explosion of low-code and turnkey AI solutions. But these products aren’t enough: Now every organization has a responsibility to upskill its teams around the effective use of AI.


Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel:


Toni Witt

Co-founder, Sweet
Cloud Wars analyst

Areas of Expertise
  • AI/ML
  • Entrepreneurship
  • Partners Ecosystem

In addition to keeping up with the latest in AI and corporate innovation, Toni Witt co-founded Sweet, a startup redefining hospitality through zero-fee payments infrastructure. He also runs a nonprofit community of young entrepreneurs, influencers, and change-makers called GENESIS. Toni brings his analyst perspective to Cloud Wars on AI, machine learning, and other related innovative technologies.

