How Cybersecurity Teams Can Identify and Manage Generative AI Risk

By Robert Wood | April 20, 2023 | 4 min read

From ChatGPT to Midjourney, generative artificial intelligence (AI) has taken the world by storm. New generative AI tools are released every week, taking on tasks such as image creation, writing, video editing, social media management, and summarizing other content. Many industries have started adapting these tools to support tailored use cases, leveraging a human-plus-AI workflow.

While many of these tools are benign, their use in some business contexts could prove problematic from a security and privacy standpoint. Just recently, Samsung made security-related headlines when employees leaked some of its proprietary source code by entering it into ChatGPT.

In this analysis, I’ll explore ways that organizations should think about generative AI models from a risk perspective, with a special focus on the conversations security leaders should be having with senior leadership about the new technology. 

Identifying the Risks of Generative AI

You can only manage risk by first identifying it. Unfortunately, the range of generative AI tools available today introduces a potentially expansive risk surface. Three scenarios stand out:

  • Misinformation used against your organization: deepfake video, audio, or written content
  • Misinformation generated by your organization: content generated using generative AI models but not fact-checked
  • Privacy and data misuse: generative AI used for a task requiring sensitive data to be uploaded or fed into the tool/service

Each scenario should be approached differently, with its own unique plans and mitigation tactics from a risk management perspective, though all will include conversations with your organization’s PR, communications, and legal teams.
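To make these scenarios concrete, here is a minimal sketch of how a security team might capture them in a lightweight risk register. The schema, owners, and mitigations below are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class GenAIRisk:
    """One entry in a lightweight generative AI risk register (illustrative schema)."""
    scenario: str                  # which scenario this entry falls under
    description: str               # how the risk could materialize in the organization
    owner: str                     # team accountable for the mitigation plan
    mitigations: list[str] = field(default_factory=list)

# The three scenarios from this analysis, captured as starting entries.
risk_register = [
    GenAIRisk(
        scenario="Misinformation used against the organization",
        description="Deepfake video, audio, or written content impersonating executives or the brand",
        owner="PR / Communications",
        mitigations=["Incident communications playbook", "Verification process for executive requests"],
    ),
    GenAIRisk(
        scenario="Misinformation generated by the organization",
        description="AI-generated content published without fact-checking",
        owner="Marketing / Legal",
        mitigations=["Human review before publication", "Disclosure guidelines"],
    ),
    GenAIRisk(
        scenario="Privacy and data misuse",
        description="Sensitive data uploaded into an unmanaged generative AI tool or service",
        owner="Security",
        mitigations=["Data handling training", "Approved tool list", "Egress monitoring"],
    ),
]

for risk in risk_register:
    print(f"{risk.scenario} -> owner: {risk.owner}, mitigations: {len(risk.mitigations)}")
```

However you record them, the point is that each entry names an accountable owner and at least one mitigation before the risk materializes.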

Additionally, each organization will have its own unique generative AI risks to consider. In the Samsung case mentioned earlier, the exposure involved proprietary source code. A healthcare organization such as a hospital would have patient privacy concerns relating to the Health Insurance Portability and Accountability Act (HIPAA). Security leaders therefore need to look at this spectrum of generative AI tools and use cases through the lens of their specific organization.

Below, we’ll expand on what to do in the third scenario: privacy and data misuse.

Approaching Risk Management for Data and Privacy Misuse

On an elemental level, managing the risk of emerging AI tools is similar to addressing shadow IT. Monitoring network traffic, controlling access to sensitive data, and deploying edge controls such as secure access service edge (SASE) capabilities can all help prevent sensitive data from flowing into unmanaged and unauthorized tools.
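As a minimal sketch of the monitoring angle, the following example scans an outbound proxy log for traffic to known generative AI services that are not on an approved list. The log format, file path, and domain lists are assumptions for illustration; in practice this logic would live in your existing proxy, DNS, or SASE tooling.

```python
import csv

# Illustrative lists; a real deployment would pull these from policy or configuration.
KNOWN_GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com", "midjourney.com"}
APPROVED_DOMAINS = {"api.openai.com"}  # e.g., only the sanctioned, contractually covered service

def flag_unapproved_genai_traffic(proxy_log_path: str) -> list[dict]:
    """Return log rows whose destination is a known generative AI service not on the approved list.

    Assumes a CSV proxy log with 'timestamp', 'user', and 'dest_host' columns (hypothetical format).
    """
    findings = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("dest_host", "").lower()
            if host in KNOWN_GENAI_DOMAINS and host not in APPROVED_DOMAINS:
                findings.append(row)
    return findings

if __name__ == "__main__":
    # "proxy_log.csv" is a placeholder path for this sketch.
    for hit in flag_unapproved_genai_traffic("proxy_log.csv"):
        print(f"{hit['timestamp']} {hit['user']} -> {hit['dest_host']}")
```

The value here is less in blocking traffic outright and more in surfacing which teams are already reaching for these tools, which feeds the strategic conversations described next.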

Additionally, security leaders must have more strategic conversations within their organizations regarding generative AI. They have a tremendous opportunity to engage their peers in senior leadership in a strategic risk-reward conversation about the technology. One of the reasons generative AI has garnered so much interest is its potential to be disruptive. If we are to serve as business enablers within our organizations, we have a responsibility to engage proactively and find safe ways to use these tools while continuing to protect the organization and its data.

Engage your peers to identify the types of tools they want to use, their use cases, the data they may need, and the intended outcomes. Marketing may want to write better and more efficient copy for the website and social media. Developers may want to write and review code more efficiently. The security team may want to analyze code for potential vulnerabilities. The legal team might want to analyze complex contractual terms to augment their team’s limited capacity to review and process contracts across the supply chain.

As you identify use cases, the data they might require, and candidate tools, you build a goldmine of context for creating more proactive training, safe-use guidelines, and potentially even authorized lists of approved tools. Find a way to keep these conversations alive over time: use cases will evolve quickly, and people usually want to do the right thing. The more security teams can position themselves in a servant-leadership, enabler role, the better.
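One way to turn that context into something enforceable is a simple, reviewable policy describing which tools are authorized, for which use cases, and up to which data sensitivity level. The sketch below uses hypothetical tool names, use cases, and classification labels purely for illustration.

```python
# A minimal sketch of an authorized generative AI tool policy and a check against it.
# Tool names, use cases, and data classifications are illustrative assumptions.

DATA_SENSITIVITY = ["public", "internal", "confidential", "restricted"]  # least to most sensitive

AUTHORIZED_TOOLS = {
    "ChatGPT (enterprise tenant)": {
        "use_cases": {"marketing copy", "code review"},
        "max_data_class": "internal",
    },
    "Internal code-analysis assistant": {
        "use_cases": {"vulnerability analysis"},
        "max_data_class": "confidential",
    },
}

def is_request_allowed(tool: str, use_case: str, data_class: str) -> bool:
    """Check a proposed use of a generative AI tool against the authorized list."""
    policy = AUTHORIZED_TOOLS.get(tool)
    if policy is None or use_case not in policy["use_cases"]:
        return False
    return DATA_SENSITIVITY.index(data_class) <= DATA_SENSITIVITY.index(policy["max_data_class"])

print(is_request_allowed("ChatGPT (enterprise tenant)", "marketing copy", "public"))   # True
print(is_request_allowed("ChatGPT (enterprise tenant)", "code review", "restricted"))  # False
```

A policy file like this is easy for non-security stakeholders to read and propose changes to, which keeps the authorized list a living document rather than a one-time decree.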

Concluding Thoughts

There’s a lot to analyze around the security implications of generative AI. We don’t know everything today; tools and capabilities are developing extremely rapidly. One of the best things we can do right now is lean into and lead the conversation within our organizations, identifying how this new technology could be used in service of the mission. This shouldn’t be a static, one-time event. It should be an ongoing, fluid conversation between security and the myriad stakeholders across our organizations.


Robert Wood

Robert Wood is an Acceleration Economy Analyst focusing on Cybersecurity. He has led the development of multiple cybersecurity programs from the ground up at startups across the healthcare, cybersecurity, and digital marketing industries. Between his experience with startups and application security consulting, he has both leadership and hands-on experience across technical domains such as cloud, containers, DevSecOps, quantitative risk assessment, and more. Robert has a deep interest in the soft-skills side of cybersecurity leadership: workforce development, communication, and budget and strategy alignment. He is currently a federal civilian employee at an Executive Branch agency; his views are his own and do not represent those of the U.S. Government or any agency.
