The NIST (National Institute of Standards and Technology) AI playbook is being released as a companion document to the NIST AI Risk Management Framework (AI RMF). A predominant theme of the playbook is how organizations building or investigating AI capabilities can avoid bias, which is natural yet, in many cases, undesirable. The underlying statistical methods used to develop AI capabilities, and the cognitive biases of the people and teams implementing them, can reinforce bias and skew decision-making; because it’s AI, those same skewed decisions can then be made faster and at greater scale. Managing this bias effectively not only reduces risk for organizations, but also empowers them to unlock AI’s tremendous opportunity to drive positive outcomes. This article touches on the playbook’s main takeaways about managing bias.
Garbage In, Garbage Out
Data is foundational to AI. It is used first during development to train models and then in production when those same models process data to make decisions. It is critical that the data used to train a model does not bias it toward a particular outcome.
Consider a model that reviews loan applications for risk and makes approval decisions. Training that model on historical data shaped by undesirable human cognitive biases, such as unfavorable loan rates or approval outcomes tied to factors like gender or race, risks teaching the model that those historical outcomes are optimal and leading it to keep optimizing for them in the future.
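To make this concrete, a simple pre-training check can surface that kind of disparity before the model ever sees the data. The sketch below is a minimal, hypothetical example, not anything prescribed by the playbook; the column names (approved, gender, race) and the review threshold are assumptions chosen for illustration. It compares approval rates across groups in the historical data and flags gaps worth a closer look before training begins.

```python
import pandas as pd

# Hypothetical historical loan data; column names are illustrative assumptions.
history = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "race":     ["A", "A", "B", "B", "A", "B", "B", "A"],
    "approved": [0,    1,   0,   1,   1,   1,   0,   1],
})

def approval_rate_gaps(df, outcome="approved", protected=("gender", "race")):
    """Report approval rates per group and the spread across groups
    for each protected attribute."""
    gaps = {}
    for col in protected:
        rates = df.groupby(col)[outcome].mean()
        gaps[col] = {
            "rates": rates.to_dict(),
            "spread": float(rates.max() - rates.min()),
        }
    return gaps

THRESHOLD = 0.10  # arbitrary review threshold, not a regulatory standard

for attr, info in approval_rate_gaps(history).items():
    flag = "REVIEW" if info["spread"] > THRESHOLD else "ok"
    print(f"{attr}: rates={info['rates']} spread={info['spread']:.2f} [{flag}]")
```

A check like this doesn’t fix bias on its own, but it turns an assumption ("the historical approvals were fair") into a number the team, or a governance board, can actually examine and discuss.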
This situation represents a systemic bias in the organization. The model then reinforces that bias and, even worse, does so more efficiently. The approach may also lend a perceived legitimacy to the decision-making process because it uses cutting-edge techniques. And while the model may evolve over time, its bias may only become more entrenched unless it is carefully managed. This leads to the next big takeaway.
Governance Boards
These initiatives are not all about technology. When a sales pitch kicks off, the use of AI in a product becomes the shiny object that steals the show. I have observed the same dynamic inside organizations considering or actively building AI models to support their products or processes. That is why the playbook’s emphasis on human oversight was so refreshing to me.
A governance board has an opportunity to serve as a feedback loop and quality control function for AI models, asking questions such as:
- Are these outcomes consistent with our ethics or the organization’s mission?
- Are we achieving the kind of results we hoped for? Why do we believe this to be the case?
- Are we managing the organization’s desire for progress and transformation to make sure we’re thinking about impact carefully and intentionally?
To me, a fundamental goal of the board is to draw assumptions out into the light, get them on paper, and discuss them to ensure they align with the organization’s goals. Assumptions tend to remain unsaid or unwritten, yet they guide much of what we do as individuals. The same is true for organizations.
Red Team Thinking
The governance board also creates an opportunity to apply red team thinking to the use and operation of AI inside an organization. Red team thinking means looking at a problem or situation through an adversarial lens (e.g., how would my competitor respond to or approach this?).
There are a variety of red team thinking techniques that can be used, such as:
- Pre-mortem analysis, a technique originally introduced by Gary Klein, in which you forecast how a project might fail before it ever starts.
- Ways of seeing, in which you identify different stakeholders (competitors, regulators, customers, etc.) and examine the problem from their perspectives.
- Analyzing events or outcomes that are unlikely to occur but would be highly problematic if they did, then identifying their leading indicators as signs to watch for along the way.
All of these techniques help reduce the potential for cognitive bias to creep into AI. For a more thorough study of the red teaming field, I strongly recommend the Red Team Journal and Red Team Thinking sites.
Concluding Thoughts
AI, like many emerging technologies, has enormous potential across many industries and problem domains. If we adopt it in a way that reinforces the problems that exist today, we will simply create more problems for ourselves, but faster. We’ll also be legitimizing them through a form of self-justification . . . because “math!” Approaching the purpose, development, training, and operation of AI models with an intent to minimize systemic, statistical, and human bias will help us take advantage of AI’s power and potential in what we build next.