While the cloud is starting to dominate information technology spending, most CTOs still oversee a technology stack that started with a traditional architecture: racks of servers, storage, and networking gear housed in a data center — your own purpose-built and self-managed facility or a portion of a co-location site with varying degrees of self-management and outsourced support.
Maybe you even carefully designed your legacy environment and rigidly oversaw the evolution of its components. More likely, you showed up on your first day at the new gig and inherited a dog’s breakfast of elements that worked more or less in concert to get the organization’s work done (ask me how I know).
It’s up to you as the CTO to support ever-growing business requirements with a future-focused information technology (IT) architecture and plan. And that almost always means thinking hard about movement to the cloud: not “if,” but “when” and “how.”
However, even when a CTO has a coherent vision of the future and is totally committed to a successful cloud migration, reality intrudes: Technology vendors wax and wane, promised enhancements don't pan out, or some number of acquisitions have to be integrated quickly and cheaply. In the words of the great management thinker Mike Tyson, "Everyone has a plan until they get punched in the mouth." Regardless of the twists and turns you face, it's imperative you get on with your planned migration lest your organization get left behind.
To stay motivated and ensure cloud migration remains a company priority, CTOs must understand its critical benefits:
Leverage Hyperscalers’ Security Investment
Hyperscale cloud vendors (hyperscalers) have made considerable investments in their physical security, so assets moved there should be safer than in your current data center.
Hyperscalers have also made huge investments in cybersecurity tools for detection, prevention, isolation, repair, and recovery. It will take some work to integrate hyperscaler tools and processes, and it may also require investment in optional (i.e., platform-as-a-service, or PaaS) products and services, but moving to higher levels of security in the cloud is relatively easy.
Multi-Site Failover Improves Availability
If you run your own infrastructure on legacy technology, you're used to thinking about recovery sites (cold, warm, or hot) and the complex dance required to "declare a disaster" and then invoke your "disaster recovery cookbook" to shift operations to an alternate site.
Hyperscale architecture fundamentally differs from legacy architecture in that it supports workload movement — on the fly and with minimal disruption — from primary sites to alternate sites. This movement may be almost invisible and automatic if you’re running software-as-a-service (SaaS) applications or PaaS “serverless” components. You’ll be much more involved if you’re running your own components on infrastructure-as-a-service (IaaS).
Multi-site failover isn’t free: You’ll pay for local, regional, or global failover. And you’ll have to re-architect some or all of your apps to some extent to take full advantage of workload failover. But your users should see fewer outages, and your IT team will save lots of time maintaining and testing their “disaster recovery (DR) cookbooks.”
Hyperscale cloud architectures allow the CTO to build what I call DAC (Distributed Active Capacity) into workloads. With DAC, the production workload is hosted at multiple locations, so a failure at any site doesn't shut the workload down. At worst, some fraction of components stop working, so things slow down until repairs happen. With hyperscale's inherent scalability, you can contract for temporary "burst" capacity that's spun up almost instantly as needed, so users see little or no disruption.
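Here's a minimal Python sketch of the DAC idea, assuming hypothetical region endpoints and a caller-supplied health check; none of these names come from any particular provider:

```python
import random

# Hypothetical endpoints for an active-active ("DAC") deployment:
# the same production workload runs in every region simultaneously.
REGIONS = {
    "us-east": "https://us-east.example.com",
    "us-west": "https://us-west.example.com",
    "eu-central": "https://eu-central.example.com",
}

def route_request(health_check) -> str:
    """Send traffic to any healthy region; losing one region reduces
    capacity but does not take the workload down."""
    healthy = [url for url in REGIONS.values() if health_check(url)]
    if not healthy:
        # All regions down: the worst-case scenario you still plan for.
        raise RuntimeError("no healthy regions; invoke DR plan")
    return random.choice(healthy)

# Example: us-west is down, so traffic spreads across the other two regions.
print(route_request(lambda url: "us-west" not in url))
```

In a real deployment, a global load balancer or DNS service plays this routing role, and contracted burst capacity absorbs the traffic the failed site was handling.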
Nota Bene: Regardless of super-high hyperscaler uptime stats (99.999%), when a hyperscaler fails, it usually fails spectacularly (millions of affected users across a wide geographic area). A wise CTO always prepares for a worst-case scenario, even when the cloud is involved.
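For context, here's a quick back-of-envelope calculation of what each availability tier actually permits per year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, availability in [("99.9%", 0.999), ("99.99%", 0.9999), ("99.999%", 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} uptime allows ~{downtime:.1f} minutes of downtime/year")

# 99.999% allows only ~5.3 minutes/year, yet one large hyperscaler
# incident can consume years' worth of that budget in a single afternoon.
```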
Handle Peak Loads With Greater Scalability
The original promise of hyperscale cloud was, "You can buy everything by the drink." If you need a baseline of 10 servers on all but one day per month, and 20 on that one peak day, no problem. If you have seasonal peaks and valleys, you're covered. Even intra-day or intra-hour peak loads can be handled by the cloud (and if you're running serverless or SaaS, you think about transactions or users and forget about scalability).
What many hyperscale customers learned through bitter experience was that scalability is a two-edged sword. As with most resources that someone else buys and maintains in order to resell, the more predictable your usage, the better the price you get. So best practice is to contract for your baseline (or some percentage of it, like 80%) as a minimum quantity at a fixed cost, then contract for variable (or "burst") tranches separately (e.g., the first 20% over baseline will be needed 50% of the time; the next 20% will be required 5% of the time; anything over 40% is purchased as needed).
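To make that concrete, here's an illustrative cost model using the tranche percentages above; the rates and hours are assumptions invented for the example, not quotes from any provider:

```python
BASELINE_UNITS = 80      # committed capacity (80% of peak), fixed cost
RESERVED_RATE = 0.06     # assumed $/unit-hour at committed pricing
ON_DEMAND_RATE = 0.10    # assumed $/unit-hour for burst capacity
HOURS_PER_MONTH = 730

def monthly_cost(burst_tranches):
    """burst_tranches: list of (extra units, fraction of hours needed)."""
    cost = BASELINE_UNITS * RESERVED_RATE * HOURS_PER_MONTH
    for units, fraction in burst_tranches:
        cost += units * ON_DEMAND_RATE * HOURS_PER_MONTH * fraction
    return cost

# First 20 units over baseline needed 50% of the time,
# the next 20 needed 5% of the time, per the tranche plan above.
print(f"${monthly_cost([(20, 0.50), (20, 0.05)]):,.2f}/month")  # $4,307.00
```

For comparison, paying the on-demand rate for the full 120-unit peak around the clock would run about $8,760/month under these assumptions, roughly double the tranche-based figure.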
Another downside to easy scalability is "cloud sprawl." Because users can activate a cloud instance in seconds and run it for a few minutes or hours as needed (e.g., for a unit test environment) rather than maintaining lots of extra on-prem environments, this looked thrifty. But customers discovered that lots of (variable cost) cloud resources were left running, with charges accruing: not a prudent business outcome.
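One common mitigation is a scheduled "sweep" job that stops resources past their declared lifetime. Here's a sketch using AWS's boto3 SDK as an example provider; the "expires" tag convention is my own invention, and pagination and error handling are omitted for brevity:

```python
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2")

def stop_expired_instances():
    """Stop running instances whose (hypothetical) 'expires' tag,
    an ISO-8601 timestamp, has passed."""
    now = datetime.now(timezone.utc)
    resp = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    expired = []
    for reservation in resp["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "expires" not in tags:
                continue
            expiry = datetime.fromisoformat(tags["expires"])
            if expiry.tzinfo is None:
                expiry = expiry.replace(tzinfo=timezone.utc)
            if expiry < now:
                expired.append(instance["InstanceId"])
    if expired:
        ec2.stop_instances(InstanceIds=expired)  # stop, don't terminate
```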
Support Growth and Bursty Workloads
If you’re a CTO, the term “headroom” is common parlance. It’s the computer resources you set aside to accommodate business growth, variable testing needs, spare capacity, “bursty” workloads, and even M&A needs. My rule of thumb has been to allow 40% headroom across the environment.
That means you acquire, install, power, cool, and maintain 40% spare hardware and network capacity to minimize slowdowns and outages. IaaS and PaaS cloud environments don't need that extra unused capacity because they can add resources as needed within seconds or minutes. If the CTO pays only for used capacity and "bursts" as required, the baseline cost can be far lower than a straight "lift and shift" estimate might suggest.
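A simple back-of-envelope comparison makes the point. The figures below are illustrative assumptions (and cloud per-unit rates are often higher than on-prem), so plug in your own numbers:

```python
BASE_UNITS = 100        # steady-state capacity the business actually uses
HEADROOM = 0.40         # my 40% rule of thumb for on-prem spare capacity
AVG_BURST_UNITS = 10    # average extra cloud units actually consumed

# On-prem: you buy, power, and cool the headroom whether it's used or not.
on_prem_units = BASE_UNITS * (1 + HEADROOM)

# Cloud: you pay for what runs, plus bursts when they actually happen.
cloud_units = BASE_UNITS + AVG_BURST_UNITS

savings = 1 - cloud_units / on_prem_units
print(f"on-prem {on_prem_units:.0f} units vs. cloud {cloud_units:.0f} units "
      f"({savings:.0%} lower baseline capacity bill)")
```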
There's a myth that the "cloud turns fixed costs into variable costs." Not quite. As cloud computing went mainstream and consumed more of the IT budget, forgoing capital-expenditure (CapEx) asset acquisition in favor of operating-expense (OpEx) licensing was distorting financial statements. So the FASB (Financial Accounting Standards Board) leveled the playing field with a series of ASUs (Accounting Standards Updates), starting with ASU 2018-15 and adding further clarifications. (I am not a CPA, and this isn't financial advice; talk with your auditors.)
Reduce Technical Debt, Rely On Others to Stay Up to Date
As readers may know, technical debt is my favorite topic (here's what I had to say about it in the Wall Street Journal), specifically technical debt mitigation through hyperscale cloud migration. Short version: moving responsibility for hardware, networks, and low-level software to a cloud vendor (IaaS), or IaaS plus tools (PaaS), or entire applications (SaaS) means someone else is keeping your technology up to date, somebody who does it regularly and for millions of users.
Again, the CTO can't wash their hands of responsibility for technical debt, but the heavy lifting can be outsourced to hyperscale experts.
Final Thoughts
For most organizations, the cloud provides such significant benefits — scalability, availability, integration, access to tools, technical debt mitigation, to name a few — that it’s time to move off legacy platforms and into the cloud. That’s not to say the journey will be quick, cheap, or easy for CTOs saddled with complex, under-documented IT stacks. But the longer you wait, the more technical debt piles up, and the more complex your environment will become.
So now’s the time to build a thoughtful, properly funded plan; obtain buy-in from your C-suite (and perhaps board of directors); and treat this migration as a major Enterprise Change Management program with commensurate reporting and controls. The great thing about cloud migrations is that licensing is incredibly granular: You can get started with a very small incremental investment and without long-term commitments.
If you're still hesitant, remember there's plenty of useful advice available online. Here at Acceleration Economy, we've published hundreds of articles, most of which were written by folks like you, who have "been there and done that."
Get started with the cloud to unlock value for your teams and your customers!