
IT and security leaders are ramping up their use of GenAI with a focus on protecting corporate assets against risks that arise from employees using personal security credentials and devices to access AI applications.
Those are key findings from Microsoft’s latest Data Security Index, which captures more than 1,700 survey respondents, including 300 in the US, who work across industries at companies with 500 or more employees.
One IT director in the energy industry who was interviewed, but not identified by name, laid out key benefits that GenAI provides in combating evolving security threats and rogue or shadow AI usage by employees: “Our GenAI systems are constantly observing, learning, and making recommendations for modifications with far more data than would be possible with any kind of manual or quasi-manual process.”
Those twin benefits – vast amounts of data for analysis and scalability beyond human labor – are recurring themes throughout the report. Below, I’ll highlight what I view as the most important takeaways, starting with how employees are using – or, IT/security pros would likely say, misusing – GenAI tools and creating risks that those same pros must address.
The Drive to Innovate and Be Productive
Those on the leading edge of AI may assume that employee usage has matured to a point that it’s highly structured and broadly governed by corporate controls, which may suggest rogue usage of these tools is on the decline. But the data from Microsoft indicate just the opposite is taking place.
The percentage of security leaders reporting that employees use personal credentials, rather than corporate identities, to access GenAI for work rose to 58% in 2025 from 53% in 2024. At the same time, the percentage of companies saying employees use personal devices to access GenAI for work rose from 48% in 2024 to 57% in 2025.
And 32% of survey respondents note that data security incidents involve the use of GenAI tools, and 35% of those surveyed expect a higher volume of incidents in the coming year resulting from GenAI usage.
Because of these activities, 47% of companies surveyed say they’re implementing GenAI-specific controls, up from 39% in 2024.
It’s insightful to know where – and how – security leaders are responding to the threats as well as the GenAI-related controls on which they’re placing highest priority. The controls they’re looking to enforce focus on protecting data as GenAI continues to spread, boosting employee skills and knowledge, and monitoring activity for bad behavior.
One CISO quote captures the mindset well – exerting controls rather than limiting access: “We’re working to block GenAI tools that are not authorized but also increase what is authorized and steer people to that.” The chart below details which controls take precedence, followed by a brief illustrative sketch:
| GenAI-related controls | % of respondents prioritizing |
| --- | --- |
| Prevent upload of sensitive data into GenAI tools | 42% |
| Train employees on secure use of GenAI | 38% |
| Detect anomalous user activity and risky users | 37% |
| Identify sensitive data being uploaded to or generated by GenAI | 37% |
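The top control in that chart lends itself to a quick illustration. What follows is a minimal, hypothetical sketch of a pre-upload check that scans a prompt for sensitive data before forwarding it to an approved GenAI tool; the patterns, function names, and the send_to_genai stub are all assumptions for illustration, not any vendor’s actual implementation, and real DLP engines rely on far richer classifiers:

```python
import re

# Hypothetical patterns for sensitive data; a production DLP engine would use
# far richer detection (named-entity models, document fingerprints, etc.).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def send_to_genai(text: str) -> None:
    # Hypothetical stand-in for a call to an approved GenAI endpoint.
    print("Prompt forwarded to approved GenAI tool")

def guarded_upload(text: str) -> None:
    """Block the upload and flag it if sensitive data is detected."""
    findings = scan_prompt(text)
    if findings:
        # In practice this would raise an alert to the security team
        # and log the event, not just print.
        print(f"Upload blocked: possible {', '.join(findings)} detected")
        return
    send_to_genai(text)

if __name__ == "__main__":
    guarded_upload("Customer SSN is 123-45-6789, please summarize the account")
```

The point is the placement of the control: the scan happens before any data leaves the corporate boundary, which is exactly where the 42% of respondents prioritizing upload prevention are focusing.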
AI-Powered Protections
In response to the sharp increases in employees using GenAI without the required controls, business and tech leaders are ramping up their own use of AI and agents, and they’re sharing insight into the ways they’re using AI to tighten security and governance.
At a top level, 82% of those surveyed say they’ve developed plans to use GenAI in their data security operations, up from 64% in 2024 – an 18-percentage-point jump, or a 28% relative increase. And while 39% are currently using agents for data security, a much higher figure, 58%, say they are piloting or exploring agents for that purpose, indicating that much greater adoption is in the works.
The specific agentic AI data security use cases they cite are illuminating as well, and they align squarely with the efforts to prevent incidents or breaches stemming from the employee AI usage outlined above (a simple illustrative sketch follows the table):
| Agentic use cases for data security | % of respondents |
| --- | --- |
| Detect critical risks | 40% |
| Automatically protect, block, flag, and classify data | 36% |
| Investigate potential data security incidents | 35% |
| Make recommendations to better secure data | 35% |
| Reduce false positive alerts | 35% |
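To make one of these use cases – reducing false positive alerts – concrete, here’s a minimal, hypothetical sketch of the kind of triage scoring an agent might apply to a data security alert queue; the Alert fields, weights, and threshold are invented for illustration, and a production agent would reason over far richer signals:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    user: str
    action: str           # e.g. "upload", "download", "share"
    destination: str      # e.g. "approved-genai", "personal-genai"
    sensitivity: int      # 0 (public) .. 3 (highly confidential), per classifier
    prior_incidents: int  # how often this user has triggered past alerts

def triage_score(alert: Alert) -> float:
    """Crude risk score weighing data sensitivity, destination, and user history."""
    score = alert.sensitivity * 0.4
    if alert.destination == "personal-genai":
        score += 0.4  # unmanaged destination raises risk
    score += min(alert.prior_incidents, 5) * 0.05
    return score

def triage(alerts: list[Alert], threshold: float = 0.8) -> list[Alert]:
    """Suppress low-score alerts so analysts only see likely-real incidents."""
    return [a for a in alerts if triage_score(a) >= threshold]

if __name__ == "__main__":
    queue = [
        Alert("jdoe", "upload", "approved-genai", sensitivity=1, prior_incidents=0),
        Alert("asmith", "upload", "personal-genai", sensitivity=3, prior_incidents=2),
    ]
    for a in triage(queue):
        print(f"Escalate: {a.user} -> {a.destination} (score {triage_score(a):.2f})")
```

The design choice worth noting is that suppression is score-based rather than rule-based, so the threshold can be tuned over time as analysts confirm or reject escalations.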
Following these data points, Microsoft also recommends a “path forward” that includes using GenAI agents to accelerate response and reduce noise, because agents “offer scalable automation for data discovery, protection, and remediation.”
One final note: the research, and Microsoft’s analysis, make the case for unified security platforms that reduce tool sprawl and the fragmented views of security data that result. Among respondents, 64% expect improved threat detection and response from unifying their platforms, while 56% anticipate better visibility into data risks across workloads.
My colleague Kieron Allen will provide more context on the findings later this week, and below are a series of related analyses with additional insight:
- Microsoft Taps Power of AI To Expand Breadth, Depth of Security Investigations
- Report Outlines Tangible Ways to Fight AI-Powered Attacks – With AI
- With Agent 365, Microsoft Equips Customers to Govern AI Agent Estates
- With AI Infusion, Microsoft Positions Sentinel as Unifying Security Platform