From the government to the private sector, automation with artificial intelligence (AI) and machine learning (ML) is on the rise.
Take, for example, an optical character recognition (OCR) tool like ABBYY FineReader, which extracts useful information from documents and stores it in a system, saving time and increasing efficiency and productivity. It’s easy to see how OCR could be integrated into a wide range of companies and organizations, and it’s only a drop in the bucket compared with the groundbreaking, innovative AI/ML products and services that are changing the world.
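To make the idea concrete, here is a minimal sketch of that kind of extraction using the open-source pytesseract library as an illustrative stand-in for a commercial product like ABBYY FineReader; the input filename is hypothetical.

```python
# Minimal OCR sketch using pytesseract, the open-source Python wrapper
# around Tesseract (an illustrative stand-in for a commercial tool such
# as ABBYY FineReader). Requires the Tesseract binary to be installed.
from PIL import Image
import pytesseract

def extract_text(image_path: str) -> str:
    """Run OCR on a scanned document image and return the raw text."""
    with Image.open(image_path) as img:
        return pytesseract.image_to_string(img)

if __name__ == "__main__":
    # "invoice_scan.png" is a hypothetical input file.
    print(extract_text("invoice_scan.png"))
```

In a real pipeline, the returned text would then be parsed and stored in a downstream system, which is where the time savings described above come from.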
However, AI/ML implementation comes with challenges. New hazards may emerge that cannot be reliably predicted, assessed, or managed, since neither the risks themselves nor the strategies for reducing them are yet fully understood.
IBM developed the AI system Watson for Oncology to recommend cancer treatments; however, it was reported that Watson had recommended “unsafe and incorrect” treatments. Needless to say, recommendations like these could have caused a great deal of harm if applied to actual patients. The stakes are unquestionably high when it comes to AI.
Ethics and standards help ensure that the correct process is followed at every stage of AI/ML development and implementation, including data collection, data processing, and system training, which yields better, safer AI/ML.
In this analysis, I’m going to address the relationships among people, technology, data, and processes that are needed to guarantee the integrity and consistency of decision-making when AI, ML, and automation are involved.
Why We Need An Ethical AI Framework
AI creates machines that can perform tasks that previously required human intelligence. These systems typically consume massive volumes of data in varying forms, and inadequate, skewed, or incorrect data can lead to poorly conceived programs with unintended consequences.
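As one concrete illustration of the data problem, here is a minimal sketch of a basic pre-training check that flags a heavily skewed label distribution; the threshold and the sample labels are assumptions made for illustration.

```python
# Minimal pre-training data-quality check: warn if one class dominates
# the training labels, since a model trained on such data may simply
# learn to predict the majority class. The 80% threshold is arbitrary.
from collections import Counter

def check_label_balance(labels, max_share=0.8):
    """Warn if any single class exceeds max_share of the labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    for label, count in counts.items():
        share = count / total
        if share > max_share:
            print(f"Warning: class {label!r} is {share:.0%} of the data; "
                  "predictions for the minority class may be unreliable.")

# Hypothetical, heavily imbalanced medical dataset.
check_label_balance(["benign"] * 95 + ["malignant"] * 5)
```

Checks like this are only a starting point, but they show how data problems can be caught before they are baked into a model.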
Algorithmic systems are developing so quickly that we may not always be able to determine how an AI arrived at a specific conclusion. I noted above how Watson recommended incorrect treatments, yet no one knows how the algorithm arrived at those recommendations. When making decisions that could have a significant impact on society, then, we are effectively placing our trust in systems we do not completely understand.
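There are techniques for probing such opaque systems after the fact. The sketch below uses scikit-learn’s permutation importance, which measures how much a model’s score drops when each input feature is shuffled; this is a generic illustration of post-hoc inspection, not how Watson was (or could have been) audited.

```python
# Post-hoc inspection of an opaque model via permutation importance:
# shuffle each feature and measure how much the model's score degrades.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
# Rank features by how much shuffling them hurts the model's accuracy.
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Tools like this don’t fully explain a model, but they give reviewers at least some visibility into which inputs are driving its decisions.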
That’s where AI ethical frameworks need to enter the picture. A great deal of discussion in recent years has focused on developing frameworks and methodologies to ensure that deployed AI and automated systems are not biased. It has become common practice for organizations to form ad hoc expert groups on AI tasked with writing policies and standards, and many of these committees have published, or are preparing, reports and recommendations on AI.
Major tech vendors including Microsoft, Google, and SAP have issued AI best practices documents in the past few years, and they continue to update them for different AI applications. Stakeholders’ efforts to issue AI principles and policies show not only the need for ethical guidance but also their considerable interest in shaping the ethics of AI in ways that align with their goals.
Digital Dubai Initiative: An AI Self-Assessment
Many organizations have created frameworks to follow when building an AI application. These frameworks and policies have typically appeared as published white papers or journal articles; the Digital Dubai initiative by the government of Dubai, however, has made the process more intuitive and visual.
To help companies that develop or operate AI evaluate how ethical their own systems are, Dubai has created a self-assessment tool based on its AI Ethics Guidelines. The tool also helps identify which guidelines apply to an AI system and suggests mitigation measures that could be introduced. It is organized around four AI guidelines:
- Make AI systems fair
- Make AI systems accountable
- Make AI systems transparent
- Make AI systems as explainable as possible
The team at Digital Dubai recommends using the tool to evaluate AI systems before implementation begins, so that decisions on how to proceed can be based on the evaluation. Using it is simple: you answer a few questions about your AI application, such as its practical uses and planned mitigation methods. The tool is available in beta on the Digital Dubai website and is free for any organization or individual to use.
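For illustration only, here is a minimal sketch of how a questionnaire-style assessment against those four guidelines might be modeled in code; the questions, wording, and scoring are my own assumptions and are not taken from the actual Digital Dubai tool.

```python
# Hypothetical questionnaire-style self-assessment, loosely modeled on
# the four guidelines above. Questions and scoring are illustrative
# assumptions, not the Digital Dubai tool's actual content.
GUIDELINES = {
    "fair": "Has the training data been checked for demographic bias?",
    "accountable": "Is a named owner responsible for the system's decisions?",
    "transparent": "Are users told when they are interacting with AI?",
    "explainable": "Can individual decisions be explained on request?",
}

def assess(answers: dict[str, bool]) -> None:
    """Print a pass/flag result per guideline and an overall score."""
    passed = sum(answers.get(key, False) for key in GUIDELINES)
    for key, question in GUIDELINES.items():
        status = "ok" if answers.get(key, False) else "needs mitigation"
        print(f"[{status:>16}] {key}: {question}")
    print(f"Overall: {passed}/{len(GUIDELINES)} guidelines satisfied")

assess({"fair": True, "accountable": True,
        "transparent": False, "explainable": True})
```

Even a simple checklist like this forces a team to answer the hard questions, and to record planned mitigations, before implementation begins.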
Final Thoughts
There’s no denying that putting theory into action is no easy feat. The ethical use of AI is critical in fostering understanding and in catalyzing a culture of care and accountability among AI developers. Digital Dubai is a perfect example of turning theory into a practical toolkit.
An approach like this will enable more AI applications to be developed with a grounding in ethics and standards. I recommend such assessment tools as a preliminary check for any AI application before it reaches the implementation stage. This will not only create transparency but also make AI applications more reliable and trustworthy.