Emerging technologies such as artificial intelligence (AI) and machine learning (ML) have attracted considerable attention in recent years because of the capabilities they offer. The two terms are often conflated: AI is a computer's ability to emulate or mimic human thought processes, while ML is a subset of AI that uses algorithms to identify patterns in data, enabling improved, or even automated, decision-making. Organizations are looking to use ML for all sorts of value-added activities that can drive efficiency, cost savings, increased revenue, and customer satisfaction. It can even be applied to cybersecurity.
With the rise of ML has come an opportunistic response from malicious actors: adversarial machine learning, a method of trying to trick ML models by providing deceptive input. This analysis will focus on adversarial ML: we'll explore what it is, how organizations leverage ML for improved decision-making, and how to secure ML models from malicious actors.
ML in Cybersecurity
Businesses apply ML across a wide range of use cases: making targeted recommendations for customers, detecting fraud, optimizing search results, and implementing chatbots that improve both the customer experience and internal organizational efficiency.
Businesses are also using ML for cybersecurity functions such as identifying anomalous behavior in enterprise information technology (IT) environments and, in some cases, automating responses to mitigate attacks. For example, AWS offers a threat-detection service, Amazon GuardDuty, that uses ML to identify malicious activity and notify users so they can initiate incident response and improve their security posture.
GuardDuty and other cloud-native services like it help establish a baseline of normal operations in digital environments so that anomalies stand out. This can help address challenges such as resource and staffing shortages, as well as the limits on how many behaviors humans can realistically identify and analyze across a digital estate.
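To make the idea concrete, here is a minimal sketch of that kind of baselining using a generic anomaly detector (scikit-learn's IsolationForest). This is not how GuardDuty itself is implemented; the feature choices and thresholds are hypothetical.

```python
# Minimal baselining sketch using scikit-learn's IsolationForest.
# The features (API calls per hour, distinct source IPs, failed logins) are
# hypothetical stand-ins for whatever telemetry an organization collects.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# "Normal" activity observed during a quiet baseline period.
baseline = np.column_stack([
    rng.poisson(lam=50, size=1000),   # API calls per hour
    rng.poisson(lam=3, size=1000),    # distinct source IPs
    rng.poisson(lam=1, size=1000),    # failed logins
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New events: one looks normal, one shows a burst of failed logins from many IPs.
new_events = np.array([[48, 2, 0],
                       [55, 40, 120]])
print(detector.predict(new_events))   # 1 = consistent with baseline, -1 = anomaly
```

In practice, the features would come from whatever telemetry the organization already collects, and flagged events would feed an alerting or incident-response workflow.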
The Emergence of Adversarial ML
That said, bad actors are looking to abuse these technologies for their own benefit. We've already seen accounts of malicious actors using AI tools to write malicious code. There is also the emergence of adversarial ML, which, as stated earlier, is a way of trying to trick ML models by providing deceptive input.
Adversarial ML attacks typically occur in one of two ways. The first is classification evasion, where the attacker hides malicious content so it slips past the model's trained filters. The second is data poisoning, where the attacker corrupts the learning process itself by introducing fake or malicious training data to compromise the algorithm's output.
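As an illustration of the first category, the sketch below applies a classic evasion technique, the fast gradient sign method (FGSM), to a toy PyTorch classifier. The model, features, and epsilon value are all made up for illustration; the point is that a small, targeted perturbation of the input can push a "malicious" sample toward a "benign" prediction.

```python
# Evasion sketch: fast gradient sign method (FGSM) against a toy classifier.
# The two-class model and 20-dimensional feature vector are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

def fgsm(x, y, epsilon=0.25):
    """Nudge x a small step in the direction that increases the loss for the
    true label y, making a misclassification more likely."""
    x_adv = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.randn(1, 20)       # a feature vector we treat as "malicious" (class 1)
y = torch.tensor([1])
x_adv = fgsm(x, y)
print("before:", model(x).argmax(1).item(), "after:", model(x_adv).argmax(1).item())
```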
Organizations can take steps to secure their ML models and algorithms against these attacks, including techniques such as adversarial training and defensive distillation.
- Adversarial training involves intentionally introducing adversarial or malicious inputs into the model's training process so the organization can observe, and harden the model against, the impact of actual malicious activity (see the training-loop sketch after this list).
- Defensive distillation makes ML models less sensitive to malicious manipulation of their inputs. The technique trains one model (the student) to predict the output probabilities of another model (the teacher) that was trained against the earlier baseline standards. This approach helps preserve accuracy while minimizing the success of attacks on the model. Because it is probabilistic, it is somewhat more flexible than adversarial training, which requires explicitly crafting inputs to see how the model responds. That makes distillation more dynamic and better able to hold up against unknown threats, and it improves the model's odds of rejecting the manipulation being attempted (see the distillation sketch after this list).
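A minimal sketch of adversarial training under toy assumptions (the FGSM attack, model, and synthetic data are all illustrative): each training batch is augmented with attacked copies of itself, so the model learns to classify the perturbed versions correctly as well.

```python
# Adversarial training sketch: train on both clean and FGSM-perturbed batches.
# Model, hyperparameters, and synthetic data are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, epsilon=0.25):
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

X = torch.randn(512, 20)                  # synthetic feature vectors
Y = (X.sum(dim=1) > 0).long()             # synthetic benign/malicious labels

for epoch in range(5):
    for i in range(0, len(X), 64):
        x, y = X[i:i + 64], Y[i:i + 64]
        x_adv = fgsm(x, y)                # craft attacked copies of the batch
        optimizer.zero_grad()             # discard gradients from crafting the attack
        loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
        loss.backward()
        optimizer.step()
```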
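And a rough sketch of defensive distillation under the same kind of toy setup: a teacher model is trained with a softmax "temperature" that softens its output probabilities, and a student model is then trained to match those softened probabilities instead of hard labels, which smooths its decision surface and makes small input perturbations less effective.

```python
# Defensive distillation sketch: train a student to match the teacher's
# temperature-softened probabilities. All values here are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
T = 20.0                                  # distillation temperature

def make_net():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

X = torch.randn(512, 20)                  # synthetic feature vectors
Y = (X.sum(dim=1) > 0).long()             # synthetic labels

# 1) Train the teacher on hard labels, with temperature-scaled logits.
teacher = make_net()
opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    nn.functional.cross_entropy(teacher(X) / T, Y).backward()
    opt.step()

# 2) Train the student to match the teacher's soft probabilities at the same T.
student = make_net()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
soft_targets = nn.functional.softmax(teacher(X).detach() / T, dim=1)
for _ in range(200):
    opt.zero_grad()
    log_probs = nn.functional.log_softmax(student(X) / T, dim=1)
    nn.functional.kl_div(log_probs, soft_targets, reduction="batchmean").backward()
    opt.step()
```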
Both adversarial training and defensive distillation can themselves be probed and circumvented by malicious actors, but they remain key practices for thwarting adversarial ML and protecting a business's use of ML.
Thankfully, these attack methods aren’t yet widely adopted, but as organizations continue to make more use of AI and ML to enable business decisions and activities, it’s likely that malicious actors will continue trying to compromise them.
Conclusion
It should come as no surprise that malicious actors have identified and honed ways to compromise emerging technologies. Just as business leaders look to emerging technologies to drive business value and outcomes, malicious actors look to the same technologies to improve their own efficiency and maximize their ability to exploit unsuspecting victims.
This constant cat-and-mouse reality has always been the state of affairs in cybersecurity. By employing the strategies described in this analysis, CISOs and other security leaders can bolster their organizational defenses to continue to protect and enable business outcomes.