Cloud Wars
AI and Copilots

How the New NIST AI Playbook Helps Organizations Effectively Manage Bias

By Robert Wood | October 27, 2022 (Updated: December 1, 2022) | 4 Mins Read

NIST (the National Institute of Standards and Technology) released its AI playbook as a companion document to the NIST AI Risk Management Framework (RMF). A predominant theme of the playbook is how organizations building or investigating AI capabilities can manage bias, which is natural yet, in many cases, undesirable. The statistical methods used to develop AI capabilities, along with the cognitive biases of the people and groups implementing them, can reinforce bias and skew decision-making; and because it's AI, those same skewed decisions can then be made faster and at greater scale. Managing this bias effectively not only reduces organizational risk but also empowers organizations to unlock AI's tremendous potential to drive positive outcomes. This article touches on the playbook's main takeaways about managing bias.

Garbage in, Garbage Out

Data is foundational to AI. It is used first in development for model training purposes and then in production when those same models are processing data for a decision. It is critical that the data being used to train the model does not bias the model toward a particular outcome.

Consider a model that reviews loan applications for potential risk and makes approval decisions. If that model is trained on historical data shaped by undesirable human cognitive biases, such as unfavorable loan rates or approval outcomes tied to factors like gender or race, it risks learning that those historical outcomes are optimal and continuing to optimize for them in the future.
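The playbook itself doesn't prescribe code, but the kind of pre-training screen this scenario calls for can be sketched in a few lines. Everything below is an illustrative assumption, not part of the NIST playbook: the hypothetical loan records, the group labels, and the use of the common "four-fifths" (80%) screening threshold for approval-rate disparity.

```python
# Minimal sketch: screen historical loan decisions for outcome disparities
# across groups BEFORE using them as training data. Records, group labels,
# and the 80% threshold are illustrative assumptions, not from the playbook.

from collections import defaultdict

def approval_rates(records):
    """Return the approval rate per group from (group, approved) records."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(records):
    """Ratio of the lowest group approval rate to the highest.

    Values well below 1.0 suggest the historical data encodes a skew
    that a model trained on it would learn to reproduce.
    """
    rates = approval_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical historical decisions: (group label, approved?)
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 50 + [("B", False)] * 50

ratio = disparate_impact(history)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 / 0.80 = 0.62
if ratio < 0.8:  # common "four-fifths" screening threshold
    print("warning: training data shows a large approval-rate gap")
```

A check like this is cheap enough to run before every training cycle, making data-level skew visible before the model can learn it; in practice it would be one signal among several, not a complete fairness audit.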

This situation represents a systemic bias in the organization. The model may then reinforce that systemic bias and, even worse, do so more efficiently. The approach may also lend a perceived legitimacy to the decision-making process because it uses cutting-edge techniques. The model may be able to evolve in the future, but unless carefully managed, it may become more entrenched in its bias over time. This leads to the next big takeaway.

Governance Boards

These initiatives are not all about technology. When a sales pitch kicks off, the use of AI in a product becomes the shiny object that steals the show. I have observed the same dynamic inside organizations considering or actively building AI models to support their products or processes. The playbook's emphasis on human oversight was therefore very refreshing to me.

A governance board has an opportunity to serve as a feedback loop and quality-control function for AI models, asking questions such as:

  • Are these outcomes consistent with our ethics or the organization’s mission?
  • Are we achieving the kind of results we hoped for? Why do we believe this to be the case?
  • Are we managing the organization’s desire for progress and transformation to make sure we’re thinking about impact carefully and intentionally?

A fundamental board goal, to me, is drawing assumptions out into the light, getting them on paper, and discussing them to ensure they are in line with the organization's goals. Assumptions tend to remain unsaid or unwritten, yet they guide much of what we do as individuals. The same is true for organizations.

Red Team Thinking

The governance board also creates an opportunity to apply red-team thinking to the use and operation of AI inside an organization. Red-team thinking means looking at a problem or situation through an adversarial lens (e.g., how would my competitor respond to or approach this?).

A variety of red-team thinking techniques can be used, such as:

  • Pre-mortem analysis: a technique originally introduced by Gary Klein to forecast how a project might fail before it ever starts.
  • Ways of seeing: identifying different stakeholders (competitors, regulators, customers, etc.) and looking at the problem from their perspectives.
  • Analyzing events or outcomes that are unlikely to occur but would be highly problematic if they did, then identifying their leading indicators as signs to watch for along the way.

All of these help reduce the potential for cognitive bias to creep into AI. For a more thorough study of the red-teaming field, I strongly recommend the Red Team Journal and the Red Team Thinking sites.

Concluding Thoughts

AI, like many emerging technologies, has enormous potential across many industries and problem domains. But if we build it in a way that reinforces the problems that exist today, we will simply make more problems for ourselves, faster. We'll also be legitimizing those problems through a form of self-justification . . . because "math!" Approaching the purpose, development, training, and operation of AI models in ways that minimize systemic, statistical, and human bias will help us take advantage of AI's power and potential in what we build next.


Robert Wood

Robert Wood is an Acceleration Economy Analyst focusing on Cybersecurity. He has led the development of multiple cybersecurity programs from the ground up at startups across the healthcare, cybersecurity, and digital marketing industries. Between startup experience and application security consulting, he has both leadership and hands-on experience across technical domains such as cloud, containers, DevSecOps, quantitative risk assessment, and more. Robert has a deep interest in the soft-skills side of cybersecurity leadership, workforce development, communication, and budget and strategy alignment. He is currently a Federal Civilian for an Executive Branch Agency; his views are his own and do not represent those of the U.S. Government or any agency.
