At the dawn of the computer age (1960s-1980s), we bought—or leased, mostly—giant, expensive computers that required frigid air conditioning, chilled water, and even special electricity. We put these ‘mainframes’ into expensive, purpose-built facilities. We sometimes built entire buildings, but often built out floors inside hideously expensive Class AA office buildings in top-tier cities.
Mainframes were accessed by specialized terminals—the earliest of which were essentially customized typewriters—that were connected directly to the mainframes (‘hard wired’) over specialized cables. This arrangement was so complicated and expensive that early terminals were grouped into purpose-built, over-air-conditioned ‘Terminal Rooms’ that required users to walk or drive to access a terminal.
By today’s standards, this arrangement was clumsy, inconvenient, expensive—laughable, right? But these attributes provided one thing today’s CIOs and CISOs envy: they were incredibly secure!
Even major organizations had but a handful of mainframes, installed behind thick walls and accessed only by a lab-coated priesthood of highly trained Operators and System Programmers, who had to traverse a ‘mantrap’[1] to get near the mainframes, the data storage (disks and tape drives), and the ‘patch panels’ (think old-fashioned telephone switchboards) that connected local terminals, and sometimes major branch offices, into the organization’s proprietary network.
Interconnected organizational networks were initially rare outside academia and the military. Even as forward-thinking industries, such as banking and securities, began to connect trading partners, these interconnections remained scarce, mostly proprietary in nature, and about as secure as the mainframe datacenters themselves.
Four Areas of IT Evolution
From the 1980s to 2010 or so, at least four aspects of IT evolved and, in doing so, drove a wave of automation (of productivity, of customer experience, and of employee experience) the world had not seen before:
- Terminals became smarter, then became an application running on PCs (millions, and eventually billions, of them).
- Mainframes multiplied and were augmented, then supplanted, by hordes of simpler and more commoditized ‘servers.’ These server hordes were built using versions of the same Intel chips that powered those vast numbers of PCs.
- Mainframes shrank in size and became ‘office machines’ that could be installed, along with servers, in less specialized, less secure facilities.
- Networking technology—both internal (Local Area) and external (Wide Area)—exploded, making it possible to connect users to mainframes and servers across the world while enabling vast numbers of inter-organizational networks and even an ‘Internet’ that connected everything and everybody.
Computing went from a small-town model (sparsely settled, where everyone knew their neighbors and everybody else’s business, and strangers were regarded with suspicion) to a metropolitan model (bustling, busy, anonymous, mind your own business). The security implications of such a change are profound, and the change enabled computer hacking to grow from a mostly innocent prank into a global criminal enterprise.
As hackers went on the offensive, a countervailing cybersecurity industry emerged:
- Firewalls protected your secure internal network from outside attackers.
- Anti-virus software protected your servers and PCs from attackers.
- Virtual Private Networks (VPNs) protected communication between your secure internal network and remote devices, as well as communication between one secure internal network and another.
And thus, good triumphed over evil, and we were safe from those pesky hackers. Right? Very, very wrong, as many of you know from bitter experience.
Troubling Assumptions About Tools
Why didn’t these well-designed and carefully built tools protect us from evildoers? Because the basic assumptions that we carried over from the early mainframe days became dangerously incorrect. And the rise of cloud computing and the Internet of Things (IoT)[2] made these old assumptions even more dangerous. The key assumptions that got us into trouble over the past 50 years include:
- Inside our organization[3], things are trusted and safe; outside our organization, here be dragons.
- Computing uses a hub and spoke model: a small number of our organization’s mainframes and servers represent the high-value hubs on which all data and all computing resides, and PCs are spokes that merely input information and retrieve it.
- Access to resources (data, computing, and networking) can be adequately controlled by verifying a user’s[4] identity.
- Once you’re given access to a resource, you get broad access to the resource and keep that access as long as you’re connected.
In today’s reality:
- There is no ‘inside’ and no ‘outside’—a user might be anywhere, using any hardware, application, or network, to access resources within your organization or anyplace else in the world.
- Every device, application, and network is high-value and needs to be managed and defended.
- ‘Who you are’ isn’t nearly enough to validate your request for resource access.
- Resource access should be as limited as possible, granting you just enough to complete the task at hand and only as long as it takes to do so.
Zero Trust Security Architecture
Enter ‘Zero Trust Security,’ a new way of thinking about how we protect technology assets. A Zero Trust Security Architecture (sketched in code after the list below)…
- Treats access as a dynamic, negotiated transaction for every access session. It evaluates who, where, how, when, and why before access is granted and continues evaluation for the duration of the session.
- Assumes that every participant in a session is untrustworthy until it proves itself safe (and continues verifying throughout a session).
- Grants ‘least privilege’ access—no more access than what is needed for a task.
- Encrypts everything all the time and only decrypts as necessary to move information between authorized participants.[5]
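To make these principles concrete, here is a minimal sketch in Python of what a per-session policy decision might look like. All names, actions, and rules here are hypothetical; real deployments rely on dedicated policy engines and identity providers rather than hand-rolled code like this.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool       # who: identity proven with a second factor
    device_managed: bool     # what: organization-owned, patched device?
    network: str             # where: office LAN, VPN, unknown coffee shop...
    action: str              # why: the one task being attempted

# Hypothetical policy table: action -> (allowed networks, session lifetime).
POLICY = {
    "read_report": ({"office", "home_vpn"}, timedelta(hours=1)),
    "wire_funds":  ({"wire_room"},          timedelta(minutes=5)),
}

def decide(req: AccessRequest):
    """Return (granted, expiry). Every factor must pass; the default is deny."""
    rule = POLICY.get(req.action)
    if rule is None:
        return False, None                   # unknown action: deny by default
    allowed_networks, lifetime = rule
    if not (req.mfa_verified and req.device_managed):
        return False, None                   # untrusted until proven safe
    if req.network not in allowed_networks:
        return False, None                   # wrong place for this task
    # Least privilege: grant only this action, only for a short window,
    # after which the whole evaluation runs again.
    return True, datetime.now() + lifetime

granted, expires = decide(AccessRequest(
    "jdoe", mfa_verified=True, device_managed=True,
    network="home_vpn", action="read_report"))
print(granted, expires)   # True, plus a one-hour expiry
```

Note the design choice: nothing is allowed unless a rule explicitly permits it, and even a granted session carries an expiry that forces re-evaluation.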
Zero Trust According to NIST
The above is a very simplified summary of the 59-page NIST (National Institute of Standards and Technology) Special Publication 800-207, ‘Zero Trust Architecture,’ which your CISO and CIO have (hopefully) read. A few examples will show how it plugs the holes I described.
1. Signing In
A. Traditional — You enter an ID and password. If it’s a match, you’re in.
B. Zero Trust — You[6] enter an ID and one or more credential ‘factors’[7]:
- Your ID + credentials are verified.
- Your specific role-based and individual access rights are retrieved.
- The device you’re signing in from is interrogated. Is it an organization-owned device? Does it have specific security software and specific patches?
- Your location or network is examined. Are you at an organization office? Which office? Is it regular working hours at that office?
- Your ID + what (device) + where (network/location) + when are used to determine whether to let you in, and at what level of trust (a minimal sketch of this decision follows the examples below).
C. Examples of holes closed by Zero Trust:
- Someone who steals your ID or password is stopped because they lack your device or aren’t in the right place at the right time.
- The organization is protected from attacks launched from a BYOD (bring-your-own-device) machine or an unpatched PC.
- Some access may be appropriate only from a specific location (wiring a billion dollars from the Bank Wire Room, or shutting down a power plant from the Control Room), during one’s shift, or from a specially secured device.
- Poor ‘IT hygiene’ that leaves critical assets unpatched (and thus provides a path for attackers) is mitigated.
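As a hypothetical illustration of step B, the sign-in decision might combine those who/what/where/when signals into a trust level rather than a simple yes/no. The function name, factor counts, and thresholds below are all invented for illustration:

```python
def trust_level(credentials_ok: bool, factors_passed: int,
                device_patched: bool, at_known_office: bool,
                during_shift: bool) -> str:
    """Combine who/what/where/when signals into a trust level.

    Hypothetical scoring; real systems use a policy engine and
    continuous evaluation, not a one-time score.
    """
    if not credentials_ok or factors_passed < 2:
        return "deny"                  # ID plus at least two factors required
    score = sum([device_patched, at_known_office, during_shift])
    if score == 3:
        return "full"                  # trusted device, place, and time
    if score >= 1:
        return "limited"               # e.g., read-only, no wire transfers
    return "deny"

# A stolen password alone fails: no second factor, unknown device and location.
print(trust_level(True, 1, False, False, False))   # -> "deny"
print(trust_level(True, 2, True, True, True))      # -> "full"
```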
2. Accessing Data (Spreadsheets, Documents, and Databases)
A. Traditional — If you have access to the server, folder, and file, you can open, copy, rename, print, and email it.
B. Zero Trust — Access can be fine-grained. What you’re allowed to do with a server, folder, or file depends on the program you’re using plus all the constraints above. For databases, restrictions can be set on access to specific rows and columns (a minimal sketch follows this example).
C. Examples of holes closed by Zero Trust:
- Accidentally or deliberately deleting or altering data is much harder.
- Ransomware behavior (rewriting many files as they’re encrypted) is much harder.
- A SolarWinds-type breach, where a network monitoring tool sent masses of client data to outside servers, would be blocked (because network monitor apps shouldn’t be accessing data files).
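To illustrate the fine-grained access described in step B, a hypothetical check might gate each operation by application and mask database columns by role. The names and rules are invented; real products enforce this in the file system, database, or data-loss-prevention layer:

```python
# Hypothetical rules: (application, action) -> allowed. Anything unlisted is denied.
APP_RULES = {
    ("excel", "open"):  True,
    ("excel", "email"): False,   # opening is fine; mass exfiltration is not
    ("netmon", "open"): False,   # monitoring tools have no business in data files
}
HIDDEN_COLUMNS = {"salary", "ssn"}   # masked for most roles

def can_access(app: str, action: str) -> bool:
    """Allow an operation only if an explicit rule permits it (default deny)."""
    return APP_RULES.get((app, action), False)

def visible_columns(row: dict, role: str) -> dict:
    """Return only the database columns this role may see."""
    if role == "hr_admin":
        return row
    return {k: v for k, v in row.items() if k not in HIDDEN_COLUMNS}

print(can_access("netmon", "open"))   # False: blocks the SolarWinds-type pattern
print(visible_columns({"name": "Ana", "ssn": "123-45-6789"}, "analyst"))
```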
Final Thoughts
Zero Trust offers many other protections, but these should whet your appetite to dig more deeply into Zero Trust with your CIO and CISO. Note that Zero Trust isn’t a product one can buy: it’s a fundamental shift in our approach to accessing technology resources. As such, it will take time, money, and energy to implement. But it’s not an all-or-nothing project, so get started ASAP and begin reaping the benefits of Zero Trust.
1. Please forgive the use of contemporaneous terminology.
2. AKA ‘Industrial Internet’ and ‘Operations Technology’ (OT).
3. ‘Inside’ our physical walls became ‘inside’ our firewalls, but the concept was the same.
4. Also, we assumed that ‘users’ are always people, while today users are more likely to be devices and programs.
5. This has been a best practice for years, but it’s ignored often enough that I made it explicit.
6. ‘You’ in the Zero Trust environment is not just people: every asset must be identified and authenticated when it becomes active and during its activity.
7. The three factors are ‘What You Know’ (a password), ‘What You Have’ (a token), and ‘Who You Are’ (a biometric).