Cloud Wars
AI Security: Practical Ways Microsoft Users Can Tap Purview to Lock Down Data in AI Use Cases

By Tom Smith | June 18, 2025 | 5 Mins Read

GenAI has been the biggest driver of tech industry innovation this century, with remarkable progress and advances taking place in the less than three years since ChatGPT burst onto the scene.

Despite the progress and innovation surrounding GenAI, security has been a looming concern: Microsoft's Data Security Index report finds that more than 80% of business leaders cite potential leakage of sensitive data as their main concern regarding GenAI.

The good news: An increasing number of initiatives are playing out to enhance the security of GenAI, as well as AI agents that act with a higher level of autonomy than earlier generations of the technology. Other recent developments include:

  • Emergence of Red Teaming as an approach to enhance AI agent security
  • Community project lays out security measures for Model Context Protocol (MCP)
  • Zenity partners with Microsoft to maximize Copilot security (video)

And those are just three recent developments we’ve highlighted.

Now, Microsoft is outlining specific measures customers and partners can take to leverage its Purview data security and governance platform to protect data in GenAI applications and use cases.

This analysis will present several of those recommended measures, which fall into three categories: utilizing Purview’s AI Hub, AI Analytics, and Policies functionality.

AI Agent & Copilot Summit is an AI-first event to define opportunities, impact, and outcomes with Microsoft Copilot and agents. Building on its 2025 success, the 2026 event takes place March 17-19 in San Diego. Get more details.

Security Risks Introduced by GenAI

In an online event held last week, Microsoft set the context for use of Purview to protect data in GenAI use cases by detailing some of the security risks that come with the groundbreaking technology. They include:

  • Overexposure of data by negligent or uninformed insiders who might create documents without appropriate access controls, thereby making it easy for other users to reference that document in a Large Language Model (LLM) or Copilot
  • Data leaks precipitated by disgruntled insiders who might use GenAI to find confidential information and then leak it
  • Data leaks by negligent insiders, such as one who shares sensitive data in consumer GenAI apps

Such threats can be exacerbated at companies that use a wide range of security platforms. Not surprisingly, Microsoft positions Purview as a turnkey security platform that consolidates multiple functions in one system; still, the current state of play, in which the typical corporate data estate is secured by 10 or more separate platforms, undeniably leads to complexity, security gaps, and management challenges.

Purview AI Hub

As explained by Michael Lord, Security Global Black Belt at Microsoft, Purview AI Hub provides visibility into GenAI usage, including Copilot, and into how data is being used within a company's IT environment.

Information protection labeling can be applied so that content access is controlled, limited to only the people who should have it.

For example, SharePoint content can be labeled using Purview information protection functions: sensitivity labels are applied when content enters a SharePoint site, and those classifications are inherited by all incoming information. An admin can define who has access to individual documents, and enforcement of a data loss prevention (DLP) policy would prevent an individual from, for example, copying and pasting that information into ChatGPT; any attempt to paste it would be blocked.

Purview AI Hub
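The label-plus-DLP logic described above can be sketched in pseudocode form. This is purely illustrative: the pattern names, label names, and function are hypothetical stand-ins for the kind of checks a DLP engine performs, not Purview's actual rule engine or API.

```python
import re
from typing import Optional

# Hypothetical sensitive-information patterns, loosely modeled on the kinds
# of detections a DLP rule might run (not Purview's actual detection logic).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Hypothetical sensitivity labels whose content should never leave the tenant.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def allow_paste_to_genai(text: str, label: Optional[str] = None) -> bool:
    """Return False if pasting `text` into a consumer GenAI app should be blocked.

    A block fires either because the content carries a restricted sensitivity
    label, or because the text itself matches a sensitive-information pattern.
    """
    if label in BLOCKED_LABELS:
        return False
    return not any(p.search(text) for p in SENSITIVE_PATTERNS.values())
```

In a real deployment this decision is made by the DLP policy engine at the endpoint or browser, not by application code; the sketch just shows the two inputs, label and content, that drive the allow-or-block outcome.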

AI Data Analytics

AI Hub can report on activities related to data and applications in a GenAI context across the entire data estate, providing actionable analytic insight into user behaviors, interactions with sensitive data, and how those apply to GenAI.

Analytics insights empower admins to prioritize critical alerts and gain awareness about any high priority data either leaving, or attempting to leave, a corporate security perimeter, then act accordingly.

Data analytics also provides a view on whether, and to what extent, non-compliant and unethical use of AI is occurring within the environment. “This aggregation of all of these alerts really does provide you with trends and give you good insight into AI interactions within the environment… further, it gives you concepts of sensitive interactions per app, so you can look at the different things that are being used within the environment, the different generative AI applications,” Lord said during the virtual event.

Purview’s AI Data Analytics
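The "sensitive interactions per app" aggregation Lord describes is, at its core, a count of flagged alerts grouped by application. A minimal sketch, assuming a hypothetical list of alert records (the field names `app` and `sensitive` are invented for illustration and are not Purview's schema):

```python
from collections import Counter
from typing import Dict, List

def sensitive_interactions_per_app(alerts: List[Dict]) -> Counter:
    """Aggregate alert records into counts of sensitive interactions per GenAI app."""
    return Counter(a["app"] for a in alerts if a.get("sensitive"))

# Hypothetical alert feed, standing in for what AI Hub collects.
alerts = [
    {"app": "Copilot", "sensitive": True},
    {"app": "ChatGPT", "sensitive": True},
    {"app": "Copilot", "sensitive": False},
    {"app": "ChatGPT", "sensitive": True},
]

# Counter.most_common() surfaces the apps with the most sensitive
# interactions first, which is how an admin would prioritize alerts.
trends = sensitive_interactions_per_app(alerts)
```

The value for an admin is exactly this ranking: the apps generating the most sensitive interactions bubble to the top, so attention goes where the risk concentrates.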

Policies

Admins can also configure policies that help prevent data loss through AI prompts and responses. Microsoft provides sample policies, which can be modified, drawing on Purview solutions such as DLP and communication compliance. The goal: provide an integrated view of all of an enterprise’s AI actions in a single, unified data protection strategy.

This includes applying policies to scan, classify, and label data consistently across the data estate, which can include Microsoft 365, Azure SQL, Azure Data Lake, Amazon S3 buckets, and other structured data sources.

Purview Policies
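The point of the scan-classify-label step is that one classifier runs uniformly no matter where a document lives, so labels mean the same thing everywhere. A minimal sketch under stated assumptions: the keyword rules and label names below are invented for illustration; a real deployment uses the tenant's Purview sensitivity labels and trainable classifiers, not hard-coded strings.

```python
from typing import Dict, List

def classify(content: str) -> str:
    """Toy classifier standing in for Purview's sensitive-info detection.
    The keyword triggers and label names are hypothetical."""
    text = content.lower()
    if "ssn" in text or "salary" in text:
        return "Highly Confidential"
    if "internal" in text:
        return "Confidential"
    return "General"

def label_estate(documents: List[Dict]) -> List[Dict]:
    """Apply one classifier uniformly, regardless of where a document lives
    (Microsoft 365, Azure SQL, Amazon S3, ...), so labels stay consistent."""
    return [{**doc, "label": classify(doc["content"])} for doc in documents]

# Hypothetical documents drawn from different parts of the data estate.
estate = [
    {"source": "SharePoint", "content": "Internal roadmap"},
    {"source": "S3", "content": "Employee SSN list"},
    {"source": "AzureSQL", "content": "Public blog draft"},
]
labeled = label_estate(estate)
```

Because the same `classify` function handles every source, two copies of the same sensitive record end up with the same label whether they sit in SharePoint or an S3 bucket, which is what makes downstream DLP enforcement consistent.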

Closing Thoughts

GenAI and the other AI apps and tools it has spawned, including AI agents, create new and potentially unknown security risks that go beyond the obvious or predictable. Especially in an agentic AI context, agents that take actions autonomously could have unintended consequences, including consequences that could potentially compromise security.

No one vendor or platform can fully lock down every AI activity, application, or tool. But comprehensive platforms such as Purview, along with Microsoft's targeted guidance on how to leverage it fully, are positive steps: they raise security awareness and help ensure customers fully capitalize on the tools they already use, so they can pursue AI use cases as securely as possible while maintaining a strong focus on innovating with AI technology.



Tom Smith

Editor in Chief, analyst, Cloud Wars

Areas of Expertise
  • AI/ML
  • Business Apps
  • Cloud
  • Digital Business

Tom Smith analyzes AI, copilots, cloud companies, and tech innovations for Cloud Wars. He has worked as an analyst tracking technology and tech companies for more than 20 years.

