As the saying goes, with great power comes great responsibility. Artificial intelligence (AI) wields tremendous power that’s set to disrupt all aspects of life. AI and predictive analytics are being embedded into nearly everything humans interact with, including autonomous cars, e-commerce, utilities, software development, and more. Across these areas, AI has the potential to automate tasks, improve efficiency, and enable more accurate decision-making.
However, as AI usage increases, so do the risks associated with its implementation. For example, what happens if a hacker skews the conditions of an algorithm to favor certain groups? What if a black hat attacks an autonomous drone to reroute or even crash it? If left insecure, unregulated, and ungoverned, AI has the dark potential to put human lives at risk. As such, securing AI will be critical to reducing risk and ensuring the safety of both data systems and end users.
In this analysis, we’ll consider the risk factors inherent in AI’s increasing ubiquity. We’ll also consider the benefits of investing in AI security and summarize high-level best practices to secure AI. In short, since AI/ML (machine learning) is growing in complexity and speed, its adoption must be safeguarded with a secure foundation.
Understanding AI Risk Factors
In recent years, AI and ML have rapidly grown in various industries. However, AI comes with multiple risks, which, if left unmitigated, could bring dire consequences for enterprises of all shapes and sizes.
For one, AI systems are vulnerable to malicious attacks from hackers and other bad actors. These can result in data breaches, unauthorized access to sensitive information, and other security issues. Additionally, AI systems may fail to detect malicious activity or respond appropriately to changes in their environment. For example, Tesla’s Autopilot driver-assistance feature has been linked to a number of accidents and deaths in recent years.
“Consider drones that may soon carry people. Currently, the FAA does not regulate cybersecurity even though it’s really become a safety issue,” says Justin S. Daniels, a corporate mergers and acquisitions technology attorney. “Drone manufacturers do not pay close attention to cybersecurity; a hack could take down a drone and seriously injure or kill someone.”
But it’s not only hardware we should be concerned about — digital communication is also fallible. For example, deepfakes and other generative AI technologies make it possible to spread false claims and disinformation.
AI systems are also prone to errors due to their reliance on data and algorithms. The data used to train AI systems may be incomplete, inaccurate, or biased. One investigative report found hidden bias in algorithmic mortgage underwriting that made lenders roughly 80% more likely to deny applications from African Americans than comparable white applicants. Algorithms trained predominantly on Caucasian facial datasets have been shown to misidentify people with darker skin tones at far higher rates, and according to the ACLU, AI has the potential to deepen racial and economic inequities.
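The kind of disparity described above can be screened for with a simple per-group comparison of outcomes. Below is a minimal sketch of such a check; the group labels, records, and threshold are entirely illustrative, and real fairness audits use much richer metrics than this crude "demographic parity" gap.

```python
from collections import defaultdict

# Hypothetical loan-decision records as (group, approved) pairs.
# These values are made up for illustration, not from any real dataset.
records = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def approval_rates(records):
    """Return the approval rate for each group in the records."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(records)
# A large gap between groups is a red flag worth investigating,
# not proof of bias on its own.
disparity = max(rates.values()) - min(rates.values())
print(rates, disparity)  # {'A': 0.75, 'B': 0.25} 0.5
```

Checks like this are cheap to run on every retraining cycle, which is why they often appear early in an AI security or ethics review.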
AI systems may also be vulnerable to adversarial attacks: inputs deliberately crafted to manipulate a model’s output, which can lead to incorrect decisions and inaccurate results. Finally, AI systems are often opaque, meaning it is difficult to understand why they make certain decisions. That opacity makes errors and malicious activity harder to identify and can lead to unexpected results.
Benefits of Robust AI Security
If an attacker hijacks an AI system, they could change the underlying principles of how the AI behaves to favor certain outcomes. Or, a company may insert bias into its algorithms for profit. To protect against these threats, Asim Razzaq, co-founder and CEO of Yotascale, calls for increased algorithm transparency so that companies are more open and clear about the ethics behind their decisions.
“Why does plutonium need a secure foundation? It can be used for good or bad. In the wrong hands, it can wipe out populations,” says Razzaq. Similarly, AI can be used for good or ill, and we must be careful and deliberate in its rollout.
As such, strong security will be critical to shelter AI from the aforementioned external threats. Robust security is essential to help protect the data used to train AI systems from unauthorized access or manipulation. Organizations will also need to ensure the accuracy and reliability of AI systems. To do so, security protocols can be used to verify the accuracy of data and algorithms and to detect any malicious activity or errors. This can help to reduce the risk of tampering, incorrect decisions, and unexpected results.
Finally, a strong security foundation can help to improve transparency. By using security measures such as logging and auditing, it is possible to trace the decisions made by AI systems and understand why they made those decisions. This can also help identify errors and malicious activity to improve the accuracy and reliability of AI.
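The logging-and-auditing idea above can be sketched as a tamper-evident decision log: each entry records the model version, input, and output, and is hash-chained to the previous entry so after-the-fact edits become detectable. The class and field names here are hypothetical, a minimal illustration rather than a production audit system.

```python
import json
import hashlib
import datetime

class DecisionLog:
    """Append-only log of AI decisions with a SHA-256 hash chain."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, model_version, features, decision):
        entry = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model": model_version,
            "features": features,
            "decision": decision,
            "prev": self._prev_hash,  # link to the previous entry's hash
        }
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = DecisionLog()
log.record("credit-model-v2", {"income": 52000}, "approve")
log.record("credit-model-v2", {"income": 18000}, "deny")
print(log.verify())  # True until any entry is tampered with
```

A log like this does not explain *why* a model decided something, but it does give auditors a trustworthy record of *what* was decided, by which model version, and on what input.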
AI Will Fuel the Next Decade
We are in the very early stages of more widespread AI adoption. The current decade will truly set the course for how humanity will engage with advanced forms of automation. But it’s not just end-user-facing technologies that will change — AI/ML will become embedded within powerful software infrastructure, cloud automation, and innovative AI-as-a-Service offerings. These tools are largely positive, enabling more companies to leverage AI to free up their workforce and increase their bottom line. (The Acceleration Economy Top 10 AI and Hyperautomation shortlist, created for practitioners by practitioners, features the most innovative vendors and solutions that can help define your AI and Hyperautomation agenda.)
Yet, to ensure AI serves the betterment of humanity, it must be adequately protected, and the integrity of core algorithms must be kept free from ethical violations. In time, this will likely require more state-led governance. For now, individual organizations can play their part in keeping AI systems safe from corruption. One way is to build an internal AI security team, or Center of Excellence, to oversee the security of AI systems and educate employees on AI security best practices.
Another method is to develop internal security protocols and consistently apply them across an organization — especially around cloud-native technologies prone to abuse. Security protocols should include measures such as data encryption, access control, and logging. With the right methods in place, AI systems can be more secure and reliable and stave off the risks of errors and malicious activity.
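One small, concrete instance of such a protocol is verifying the integrity of training data before it is used, so that tampered data is rejected rather than silently learned from. The sketch below uses Python's standard-library HMAC support; the key, file contents, and function names are illustrative, and in practice the key would live in a secrets manager, not in source code.

```python
import hmac
import hashlib

# Illustrative only: never hard-code real keys.
SECRET_KEY = b"example-key-not-for-production"

def sign_dataset(data: bytes) -> str:
    """Produce an HMAC-SHA256 tag when the dataset is published."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_dataset(data: bytes, tag: str) -> bool:
    """Check the tag before training, using a constant-time comparison."""
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

data = b"age,income,label\n34,52000,1\n29,18000,0\n"
tag = sign_dataset(data)
print(verify_dataset(data, tag))            # True
print(verify_dataset(data + b"evil", tag))  # False -- tampering detected
```

Paired with encryption at rest and access controls on who can re-sign data, even a simple check like this closes off one route for the data-poisoning attacks discussed earlier.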
Which companies are the most important vendors in cybersecurity? Click here to see the Acceleration Economy Top 10 Cybersecurity Shortlist, as selected by our expert team of practitioner analysts.