Data loss prevention (DLP) sounds like a great idea. Protecting data from loss, tampering, or theft is a central part of the security practitioner’s day-to-day job. Anyone who has ever gone through a DLP solution deployment likely has the metaphorical battle scars and war stories to prove it. DLP solutions, regardless of what the marketing whitepapers say, are notorious for breaking things, being false-positive-prone, and generally leading to frustrating conversations about return on investment (ROI).
Throughout my own career, I have experienced my fair share of these frustrations with DLP tools and with the DLP functionality that exists in the cloud access security broker (CASB) solution space. This article digs into how DLP could be putting you at risk, especially when policies are poorly written or the wrong tool is selected.
False Sense of Confidence
The technology world continues to evolve rapidly. Network boundaries are collapsing, cloud service consumption is on the rise, users are interacting with tools and data from a more diverse set of devices, and so on. These dynamics all make DLP and CASB tools harder to deploy in a way that achieves maximum coverage across all relevant devices, networks, applications, and data types.
When coverage is limited for these types of tools, or when DLP policies can only be written to cover a subset of the relevant data in the organization, it creates a false sense of security. Assumptions can (and likely will) be made that these deployments are reducing risk far more than they actually are. Those assumptions become dangerous when they lead to further investments or projects not being pursued in the way that they should be.
Operational Strain
False positives coming out of any tool put a strain on the teams trying to triage and respond to them. DLP solutions are notorious for producing a lot of false positives, largely because data security is in many ways contextual. Sometimes a policy can be interpreted in a binary way: say, that a particular kind of data should never be in a particular kind of system or network. Often, though, more context is required to determine whether a particular event (or cluster of events) is a security issue, or, put more precisely, how risky a particular event actually is.
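To make the false-positive problem concrete, here is a minimal, hypothetical sketch of the kind of simple pattern matching many DLP policies boil down to. The rule and data are invented for illustration; the point is that a context-free pattern cannot distinguish a real Social Security number from a ticket reference that happens to share its shape.

```python
import re

# Hypothetical, simplified DLP rule: flag anything shaped like a
# US Social Security number (three digits, two digits, four digits).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scan(text: str) -> list[str]:
    """Return every substring that matches the SSN-like pattern."""
    return SSN_PATTERN.findall(text)

# A genuine match and a false positive look identical to the rule:
print(scan("Employee SSN: 123-45-6789"))            # true positive
print(scan("Ticket ref 555-12-3456 from vendor"))   # false positive: a ticket number
```

Only a human (or a system) with business context can tell these two alerts apart, which is exactly the triage burden described next.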
The challenge here is that a centralized security operations center (SOC), or whatever function on the security team is managing alerts, lacks, by design, much of the context required to properly triage alerts from a DLP solution. Gaining this context takes a tremendous amount of time, which puts strain on the operational capacity of the SOC and on the teams it coordinates with.
Broken Functionality
One of the common deployment patterns for DLP and CASB tools is to install an agent on a device. From there, the agent can not only scan the local file system but also proxy network traffic, inspecting both for data that matches DLP policies. If matches are identified, alerts can be triggered, or, in some cases, the data can be deleted or the network traffic blocked. This is, of course, dependent on the solution and the agent's capabilities.
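As a rough illustration only, and not any particular vendor's agent, the sketch below shows the general shape of an agent-style local file scan: a hypothetical policy consisting of a pattern and an action, applied to every readable file under a directory, with the alert/delete/block decision stubbed out as a print statement. The policy name, pattern, and path are all invented for the example.

```python
import re
from pathlib import Path

# Hypothetical policy: a pattern to look for and the action to take on a match.
POLICY = {
    "name": "payment-card-numbers",
    "pattern": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # rough card-number shape
    "action": "alert",  # could also be "delete" or "block", depending on the agent
}

def scan_directory(root: str) -> None:
    """Walk a directory tree and apply the policy to each readable text file."""
    root_path = Path(root)
    if not root_path.is_dir():
        print(f"Nothing to scan at {root_path}")
        return
    for path in root_path.rglob("*"):
        if not path.is_file():
            continue
        try:
            content = path.read_text(errors="ignore")
        except OSError:
            continue
        if POLICY["pattern"].search(content):
            # A real agent would raise an alert, quarantine the file,
            # or block the associated network traffic here.
            print(f"[{POLICY['action'].upper()}] {POLICY['name']} matched in {path}")

scan_directory("/tmp/dlp-demo")  # hypothetical path used for illustration
```

The same matching logic applied to proxied network traffic is what breaks legitimate tools: a crude pattern fires on benign payloads, and the agent dutifully blocks them.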
The problems arise when legitimate tools and functionality break because of this scanning. Users' work is impeded even when they're not breaking any rules or violating any policies. While this may not be a security risk in the moment, it is an operational one. The security risk comes when a resolution can't be found, the debugging drags on, the frustration builds, and eventually the DLP tool is simply turned off or uninstalled. At that point, there are also residual trust issues and frustration to deal with.