Cybersecurity

How Cybersecurity Pros Can Stop Malicious Actors Looking to Exploit ML

By Chris Hughes | March 13, 2023 | 4 Min Read

Emerging technologies such as artificial intelligence (AI) and machine learning (ML) have attracted a great deal of attention in recent years because of their power. The two terms are often confused: AI is a computer’s ability to emulate or mimic human thought processes, while ML is a subset of AI that uses algorithms to identify patterns in data, enabling improved, or even automated, decision-making. Organizations are looking to apply ML to all sorts of value-added activities that can drive efficiency, cost savings, increased revenue, and customer satisfaction. It can even be applied to cybersecurity.

With the rise of ML, there’s been an opportunistic response from malicious actors: adversarial machine learning, a method of trying to trick ML models by providing deceptive input. This analysis focuses on adversarial ML: what it is, how organizations leverage ML for improved decision-making, and how to secure ML models against malicious actors.


ML in Cybersecurity

Businesses apply ML across a wide range of use cases: making targeted recommendations for customers, detecting fraud, optimizing search results, and powering chatbots that improve both customer experience and internal organizational efficiency.

Businesses are also using ML for cybersecurity functions such as identifying anomalous behavior in enterprise information technology (IT) environments and, in some cases, automating responses to mitigate attacks. Cloud provider AWS, for example, offers a service named GuardDuty that uses ML to identify malicious activity and notify users so they can initiate incident response and improve their security posture.

GuardDuty and other cloud-native services like it can help determine baseline operations in digital environments to spot anomalies. This can help address challenges such as resource and staffing shortages as well as limitations on the number of behaviors humans can identify and analyze in our digital domain.
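
As a concrete illustration of how such a service can feed incident response, the following is a minimal sketch of pulling high-severity GuardDuty findings with boto3. It assumes AWS credentials are configured and a detector already exists; the region and severity threshold are illustrative choices, not recommendations from this analysis.

```python
import boto3

# Assumes credentials and a GuardDuty detector are already set up.
guardduty = boto3.client("guardduty", region_name="us-east-1")

for detector_id in guardduty.list_detectors()["DetectorIds"]:
    # Pull only findings GuardDuty scored as high severity (illustrative threshold).
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
    )["FindingIds"]

    if not finding_ids:
        continue

    findings = guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]
    for finding in findings:
        # In practice, this is where a notification or automated response
        # playbook (e.g., isolating the affected resource) would be triggered.
        print(finding["Severity"], finding["Type"], finding["Title"])
```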


The Emergence of Adversarial ML

That said, bad actors are looking to abuse these technologies for their own benefit. We’ve already seen reports of malicious actors using technologies such as AI to write malicious code. There is also the emergence of adversarial ML, which, as stated earlier, is a way of trying to trick ML models by providing deceptive input.

Adversarial ML attacks typically occur in one of two ways. The first is known as classification evasion, where the attacker tries to hide malicious content and slip it past the algorithm’s trained filters. The second is data poisoning, where the attacker introduces fake or malicious data into the learning process to compromise the algorithm’s output.
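
To make the evasion idea concrete, the sketch below uses the well-known Fast Gradient Sign Method (FGSM) in PyTorch. This analysis doesn’t prescribe a specific technique, so treat this as one illustrative example rather than the method attackers necessarily use; model, loss_fn, and the input tensors are assumed to exist elsewhere.

```python
import torch

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    # Nudge each input a small step in the direction that most increases the
    # model's loss, yielding a nearly identical input the classifier may mislabel.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```

An attacker with access to the model, or to a close substitute, can use perturbations like these to sneak malicious content past a trained filter, which is why the defenses discussed next matter.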

Organizations using ML can take steps to secure their models and algorithms. These steps include techniques such as adversarial training and defensive distillation.

  • Adversarial training involves intentionally introducing adversarial or potentially malicious inputs into ML models during training so defenders can observe the implications and impact of actual malicious activity and harden the models against it.
  • Defensive distillation involves making ML algorithms more flexible so they aren’t as susceptible to malicious attacks. The technique works by training one model to predict the output probabilities of another model that was trained earlier against a baseline standard. This iterative approach emphasizes accuracy and minimizes the success of malicious attacks on the model. Because it is probabilistic, it is somewhat more flexible than adversarial training, which requires constant explicit inputs to see how the ML model responds. That makes the distillation approach more dynamic and better able to reject manipulation attempts, including unknown threats. Sketches of both techniques follow this list.
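
The snippets below are illustrative PyTorch sketches of both defenses, under the assumption that a model, loss function, optimizer, and training data already exist; they show the shape of the techniques rather than a production-ready implementation.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon=0.03):
    """Adversarial training: mix perturbed copies of each batch into training."""
    model.train()
    # Craft perturbed inputs (the same FGSM idea sketched earlier).
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Optimize on both the clean and the perturbed examples.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Defensive distillation: train a second model to match the first model's
    # softened probability outputs instead of hard labels, smoothing the
    # decision surface an attacker probes.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature ** 2
```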

Malicious actors can probe for and attempt to work around both adversarial training and defensive distillation, but they remain key practices for thwarting adversarial ML and attackers looking to compromise a business’s use of ML.

Thankfully, these attack methods aren’t yet widely adopted, but as organizations continue to make more use of AI and ML to enable business decisions and activities, it’s likely that malicious actors will continue trying to compromise them.

Conclusion

It should come as no surprise that malicious actors have identified and honed ways to compromise emerging technologies. Just as business leaders look to utilize emerging technologies to drive business value and outcomes, malicious actors are looking to use the same technologies to improve their own efficiency and maximize their ability to exploit unsuspecting victims.

This constant cat-and-mouse reality has always been the state of affairs in cybersecurity. By employing the strategies described in this analysis, CISOs and other security leaders can bolster their organizational defenses to continue to protect and enable business outcomes.




Chris Hughes

CEO and Co-Founder
Aquia

Areas of Expertise
  • Cloud
  • Cybersecurity

Chris Hughes is a Cloud Wars Analyst focusing on the critical intersection of cloud technology and cybersecurity. As co-founder and CEO of Aquia, Chris draws on nearly 20 years of IT and cybersecurity experience across both public and private sectors, including service with the U.S. Air Force and leadership roles within FedRAMP. In addition to his work in the field, Chris is an adjunct professor in cybersecurity and actively contributes to industry groups like the Cloud Security Alliance. His expertise and certifications in cloud security for AWS and Azure help organizations navigate secure cloud migrations and transformations.

