
Tech, business, and security leaders aren’t just contending with an explosion of AI apps: many of those apps are of the “shadow” variety, never approved by IT and security teams, complicating efforts to balance AI-driven innovation against the threat of exposure.
Those are key takeaways from “AI Tightrope,” a troubling new report from Harmonic Security.
The Harmonic report finds that companies average 254 AI apps in use, while 45% of sensitive data submissions to AI apps come from personal accounts rather than corporate ones.
Security expert Chris Hughes, CEO of Aquia, says the data “reflects broad adoption across departments, with usage patterns that most security teams don’t observe.” Hughes noted in a LinkedIn post that the trends Harmonic cites dovetail with what he sees in his own business, including the use of personal accounts, not because employees don’t care about security controls “but because the path of least resistance is outside official controls.”
The result, Hughes observes, is loss of visibility from a security perspective and a breakdown in trust boundaries.
But given the pressure to capitalize on AI, these findings should not come as a huge surprise: 74% of CEOs believe they could lose their jobs within two years if they fail to deliver measurable AI outcomes, according to data from Intuition included in the Harmonic report.
Shadow AI Is Prevalent
The sheer number of AI apps in use is startling, even more so given how many fall into the “shadow” category. Of the sensitive data submissions from personal accounts referenced above, 58% come via Gmail, a clear indicator of the breadth and depth of AI activity outside IT oversight.
The report notes that “AI tools are just too appealing for employees to use and they will go to extreme lengths to get their hands on them – even without approved licenses.”
That point aligns closely with comments recently made by BDO board member Kirstie Tiernan, who recounted a client meeting where many employees brought two laptops: one a corporate-issued system, the other a personal machine running Shadow AI apps.
“When you’ve had a taste of some of these tools, you can’t expect that people aren’t going to try to use them, whether it’s on their cell phone, underneath the table, or whatever it is,” Tiernan said. “You have to make sure that you’re enabling them to do so and instituting the right policies and security to be able to keep up with the expectations of your employees.”
The Harmonic study brings the risks born of current practices into sharp focus.
For instance, although the percentage of prompts deemed sensitive declined to 6.7% in the first quarter of 2025 from 8.5% in the fourth quarter of 2024, the category-level breakdown of exposed data paints a mixed picture:
- Exposed sensitive legal and finance data more than doubled, to 30.8% from 14.9%, over the same period
- Sensitive code nearly doubled to 10.1% from 5.6%
In terms of good news:
- Exposed customer data dropped sharply to 27.8% from 45.8%
- Employee data dropped to 14.3% from 26.8%
Personally Identifiable Information (PII) was tracked explicitly for the first time in the first quarter of this year; 14.8% of prompts included PII.

Common Entry Point
Exactly where is all that sensitive data being entered? ChatGPT is far and away the most popular tool, serving as the destination for 79% of all sensitive data. Of further concern from a security perspective is the fact that 21% of sensitive data was entered into the free tier of ChatGPT.
Images dominated uploads to ChatGPT over this period, accounting for 68.3% of all file uploads. After images, the file types break down as follows:
- PDF: 13.4%
- Word Docs: 5.5%
- Excel: 4.9%
Compounding all this Shadow AI activity: 55% of organizations still lack formal governance structures for AI applications and initiatives.

Closing Thoughts
That last point is critical: the exposure risk companies face stems not only from employees going rogue, but also from companies themselves failing to move quickly and proactively enough to put governance structures in place and to adopt realistic policies that reflect employees’ demand to use ChatGPT and other tools for both personal growth and company performance.
Harmonic, for its part, calls for a shift to more proactive controls (think: the formal governance structures noted above as largely lacking), the kind of automated enforcement that AI itself enables, more aggressive monitoring for unauthorized apps, and restrictions on personal account usage.
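To make “automated enforcement” concrete, here is a minimal, hypothetical sketch of a pre-submission filter that flags obvious PII patterns before a prompt leaves the browser. The patterns, names, and blocking behavior are illustrative assumptions for this article, not Harmonic’s actual product logic.

```python
import re

# Illustrative-only patterns; a real enforcement layer would use far more
# robust detection (entity recognition, checksum validation, classifiers).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings); block the prompt if any PII pattern matches."""
    findings = [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(prompt)]
    return (not findings, findings)

if __name__ == "__main__":
    ok, hits = screen_prompt("Email jane.doe@example.com, SSN 123-45-6789")
    print("allowed:", ok, "| flagged:", hits)  # allowed: False | flagged: ['email', 'us_ssn']
```

In practice, a check of this sort would sit in a browser extension or network proxy, so sensitive prompts are intercepted regardless of which AI app (sanctioned or shadow) an employee opens.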
Here’s hoping these eye-opening findings serve as a call to action to step up vigilance without closing the door on AI usage that will clearly continue whether that door is open or closed.
Editor’s Note: Harmonic’s data is based on usage patterns of a sample of 8,000 end users (anonymized and aggregated) across departments within Harmonic’s customer base, with data collected from the Harmonic Protection browser extension between Jan. 1 and March 31, 2025.