Using an Artificial Intelligence (AI)-powered search app, I asked for assistance in researching the “state of AI” and the “AI governance challenge” for this article. Here’s what I quickly captured.
- 37% of businesses and organizations employ AI today (Deloitte)
- Nine out of ten leading businesses have investments in AI technologies, but fewer than 15% deploy AI capabilities in their work (McKinsey & Co.)
- The global AI market is expected to reach $267 billion in value by 2027 and to contribute $15.7 trillion to the global economy by 2030 (World Economic Forum)
This was an easy, straightforward query. However, business leaders and data scientists today are investing to achieve much bigger AI aspirations. Tech and data professionals and their machines, algorithms, and quantum computers are at work solving complex problems, creating breakthrough innovations, and delivering massive leaps in productivity via automation. What could possibly slow AI’s momentum?
Answer: Ethics concerns, a lack of trust, and needed oversight.
Scoping AI’s Ethics, Trust and Governance Challenge
Deloitte defines ethics as “the discipline dealing with what is good and bad and with moral duty and obligation, as well as the principles of conduct governing an individual or a group. The aim is to do what’s good for business, and also what’s good for the organization’s employees, clients, customers, and communities in which it operates.”
The problem is that consumers, citizens, and employees alike distrust the corporations and government bodies behind AI-infused applications. The best tool at our disposal to overcome this widespread distrust is transparent, managed, accountable, and well-communicated governance policy. Governance doesn’t solve everything, but it is an important step toward advancing what’s possible with AI and the people and machines behind it. For perspective: today, the development of fair, trustworthy AI standards is left largely to the discretion of the data scientists and software developers who write and deploy a given organization’s AI algorithmic models. This means the elements required to prevent discrimination and ensure transparency vary greatly from company to company.
With AI governance such a critical hurdle to clear and so much on the line for companies and their leaders, two important questions emerge:
- How does a company approach AI governance?
- Who should “own” AI governance within the organization?
Approaching AI with a Business Risk Management Mindset
Risk management is a work in progress for most companies’ AI efforts. An AI framework with embedded governance policy and process is required to confidently unleash the power of AI. Just as importantly, without governance, companies will have a difficult time using AI at scale, as government and industry regulators and compliance officers are sure to descend on unmanaged practices.
To level-set where we are on AI governance, McKinsey & Co. recently released the findings of its “State of AI” research study. The executive survey asked about a range of risk-mitigation practices related to AI model documentation, data validation, and bias review. When asked why companies aren’t mitigating all relevant risks, respondents most often said they lack the capacity to address the full range of risks they face. Notably, nearly one-third of companies are unclear on the extent of their exposure to AI risks. Geography and economic status also shape governance and oversight: most survey respondents in emerging economies reported waiting until clearer risk-mitigation regulations are in place. Why? Most lack the leadership commitment needed to address AI risk mitigation.
Creating an AI Governance Framework, Policy and Process
Despite this slow adoption of AI governance, McKinsey’s findings and compliance and legal experts alike point to the importance of having an AI governance framework in place. Frameworks set goals and guidelines for AI throughout the product lifecycle, from research and design, to build and train, to change and operate. The framework should also spell out how the organization is addressing, and should address, AI from both an ethical and a legal point of view. Among the important ethical and legal elements to include in the AI governance framework and policy are:
- Monitoring for bias in the use of data that can create discrimination and inequity (a minimal monitoring sketch follows this list)
- Accountability, and a process to address and fix problems when something goes wrong
- Reporting and communication: when errors are made, who must be told, and by when
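To make the first element above concrete, here is a minimal sketch of what automated bias monitoring might look like, assuming a simple demographic parity check (the gap in favorable-outcome rates between groups) measured against a policy threshold. The function, threshold, and sample data are illustrative assumptions, not a prescribed standard:

```python
from collections import defaultdict

# Illustrative threshold only; a real policy would set this per use case.
PARITY_THRESHOLD = 0.10

def demographic_parity_gap(records):
    """Return (gap, per-group rates), where gap is the largest difference
    in favorable-outcome rates across groups. `records` is an iterable of
    (group_label, outcome) pairs, with outcome 1 for a favorable decision."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval decisions logged by demographic group.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

gap, rates = demographic_parity_gap(decisions)
if gap > PARITY_THRESHOLD:
    # In a governance framework, an alert like this would trigger the
    # accountability and reporting steps listed above.
    print(f"Bias alert: parity gap {gap:.2f} exceeds {PARITY_THRESHOLD}", rates)
```

A check like this is only one narrow fairness measure; the governance policy, not the code, should decide which metrics apply and what happens when a threshold is breached.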
While several organizations, including Amazon, Google, and Microsoft, are working to put standard governance policies in place, no universal AI governance standard, regulation, or policy yet exists.
Establishing Clear Ownership and Oversight of AI Governance Policy and Management
With AI legal and ethical liability lurking, government regulators and compliance organizations are calling for board-level, CEO, and CFO oversight of AI use and policy. Accountability must sit at the CXO level, and oversight and management must be shared across the company, including baking AI ethics into company values, culture, and reward systems. Legal and ethics experts further advise that every person involved in the AI development process, including company officers, sign off on company AI policy.
According to Deloitte strategists, four core areas are critical for AI governance. Here is a breakdown of each and what leaders should focus on.
- Technology, data, and security. Define and document the organization’s comprehensive AI lifecycle, including the ways it builds and tests data and models into AI-powered solutions (a documentation sketch follows this list). Oversight comes from information, technology, data, security, and privacy leaders.
- Risk management and compliance. Identify how the organization develops and enforces policies, procedures, and standards for AI solutions. Align these with the organization’s mission, goals, and legal or regulatory requirements. Risk, compliance, and legal leaders play a role here.
- People, skills, organizational models, and training. Understand and monitor how AI affects employee, customer, and partner experiences. Assess how AI is reshaping roles and organizational models. Retool and upskill the workforce through training and certification. Human resources leaders share responsibility with learning and development teams, compliance officers, and the broader executive leadership.
- Public policy, legal and regulatory frameworks, and impact on society. Capture the level of understanding and acceptance AI has across your business and company culture. Commit to monitoring in-development regulations and their impact on your AI efforts. Be proactive and prepare for the impact this will have on all your stakeholders.
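As one way to operationalize the first area, a governance team might keep a machine-readable record of each model’s lifecycle stages, accountable owners, and required controls. The sketch below is a hypothetical illustration under that assumption; the field names, roles, and controls are examples, not a Deloitte-prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class LifecycleStage:
    name: str                 # e.g., "research and design"
    owner_role: str           # accountable executive or function
    controls: list[str] = field(default_factory=list)

@dataclass
class ModelRecord:
    model_name: str
    purpose: str
    stages: list[LifecycleStage]

    def owners(self):
        """Roles accountable across the lifecycle, for oversight reporting."""
        return sorted({stage.owner_role for stage in self.stages})

# Hypothetical record mirroring the lifecycle named in the framework section.
record = ModelRecord(
    model_name="credit_scoring_v2",
    purpose="consumer loan decisioning",
    stages=[
        LifecycleStage("research and design", "chief data officer",
                       ["bias review of training data"]),
        LifecycleStage("build and train", "chief information officer",
                       ["model documentation", "data validation"]),
        LifecycleStage("change and operate", "chief risk officer",
                       ["error reporting", "regulatory change monitoring"]),
    ],
)

print(record.owners())
# ['chief data officer', 'chief information officer', 'chief risk officer']
```

Even a lightweight record like this gives boards and regulators something auditable: who owns each stage, and which controls were applied.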
To summarize, the chief information officer, chief risk officer, chief compliance officer, and chief financial officer have leadership roles across the first three areas, while the fourth area relies on leadership from legislators, regulatory agencies, and other policymaking bodies. In a large enterprise, the chief data officer is also typically tasked with developing, implementing, and monitoring the organization’s responsible AI data framework.
Final Thoughts on Ensuring Accountability and Enabling Oversight of AI Governance
Developing an AI governance and ethics framework and establishing oversight to manage risk are essential to unleashing the full potential of AI. The next challenge is empowering and coaching executives on why, what, and how to ensure AI governance. Because AI policy is still relatively new, few executives or officers are equipped or trained in what’s required, or in how to implement and manage AI governance oversight.