Cloud Wars
Edge Computing

The Cutting Edge: Ultra-Reliable, Lower-Latency Edge Computing

By Leonard Lee | January 12, 2022 | Updated: July 27, 2022 | 7 Mins Read

Welcome to the first article in the Cutting Edge series on edge computing for 2022. I want to start off the new year by talking about the first principles of edge computing.

Why do we do it? Why will we do it? To be honest, edge computing is such a broad and deep topic that it is difficult to discuss the need for it succinctly. There are several lenses through which to consider its rationale and benefits. I will give it a shot, and I would appreciate your help if you think I got it wrong. We are all in this together.

In my view, edge computing boils down to two things: lower latency and higher reliability. I will borrow from the acronym URLLC (Ultra-Reliable, Low-Latency Communications) from the 5G world to dub these axioms of edge computing URLLEC (Ultra-Reliable, Lower-Latency Edge Computing). I might have coined the tech acronym of the year, though it’s still early of course.

The First Principles

URLLEC might not come as much of a surprise. After all, we frequently hear that edge computing reduces latency versus central cloud computing. The rationale is simple—you place the edge computing workload closer to the endpoint client device or data location. The idea is that you are reducing the number of hops and the distance that light must travel through the network or Internet, which can be significant. Without that proximity, latencies are too high for many industrial applications that require distributed system latencies in the millisecond range.
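
The distance argument can be made concrete with a back-of-the-envelope calculation. The sketch below estimates round-trip propagation delay over fiber, assuming light travels through fiber at roughly two-thirds of its vacuum speed; the 2,000 km and 20 km distances are illustrative, not measurements of any real deployment.

```python
# Back-of-the-envelope propagation latency over fiber.
# Ignores queuing, processing, and per-hop delays, which only add to these numbers.

SPEED_OF_LIGHT_KM_S = 299_792   # km/s in vacuum
FIBER_FACTOR = 2 / 3            # typical refractive-index penalty for fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds over fiber."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000  # seconds -> milliseconds

central_cloud = round_trip_ms(2000)  # a distant regional data center
edge_node = round_trip_ms(20)        # a nearby edge site

print(f"central cloud: {central_cloud:.2f} ms")  # ~20 ms before any hop delays
print(f"edge node:     {edge_node:.2f} ms")      # well under a millisecond
```

Even before counting router hops, a 2,000 km path eats most of a 20 ms latency budget on propagation alone, while a 20 km edge path barely registers.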

We don’t hear too much about reliability when we talk about edge computing. However, I propose that reliability is a first principle of edge cloud computing. Why? Generally speaking, the Internet and telecom networks do not provide highly reliable communications end-to-end between a central cloud data center and an endpoint client device. These networks, and most enterprise networks, provide best-effort connectivity with no industrial-grade assurance. Then again, most business applications don’t really need industrial-grade communications reliability.

In the 5G world, there is a tendency to clump low latency (sub-4 millisecond) and ultra-reliability (99.999999% availability) together. I feel it is important to treat these requirements separately. While low latency and reliability are both essential for critical distributed applications that are super real-time, there are many applications that weight latency and reliability differently and could still benefit from edge computing architectures and deployment.
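
Availability percentages like 99.999999% are easier to reason about when translated into allowed downtime. A quick calculation, using a simple 365-day year:

```python
# Translate an availability percentage into allowed downtime per year.

SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 s in a non-leap year

def downtime_per_year_s(availability_pct: float) -> float:
    """Seconds of downtime per year permitted at the given availability."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.999, 99.999999):
    print(f"{pct}% availability -> {downtime_per_year_s(pct):.3f} s/year")
```

The gap is stark: "three nines" permits almost nine hours of downtime a year, while the eight-nines figure cited for ultra-reliability allows under a third of a second.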

As the networks, especially 5G wireless, continue to advance and become more capable, the threshold for going with edge computing will come down. I suspect that as the threshold comes down, we will see innovative new edge computing architectures and applications that present businesses with new frontiers for building new business capabilities, as well as pioneering digital services and products.

Primary Benefits of URLLEC

What does URLLEC mean for business leaders and end-users of edge computing systems? Let’s tackle each first principle one at a time, starting with latency.

Lower Latency

Better User Experience – Placing edge compute and workloads in close physical proximity to the endpoint client device can dramatically improve the responsiveness of applications and the delivery of content. CDN companies, most notably Akamai, were founded on the need to improve the speed of web content to our browsers. Distributing content delivery across CDN nodes in multiple regions dramatically improved the performance experience of websites and web applications.

Today, edge computing is being tapped to make cloud gaming a bit more realistic. Many of the pioneering cloud gaming ventures quickly discovered that hosting games centrally in a megaplex cloud was not going to cut it when games require end-to-end latency of 20 milliseconds or less on a symmetrical network with a 50/50 uplink/downlink configuration.

Improved Distributed System Performance – Lower latency is not always about user experience. Under the hood of business and industrial applications, and consumer applications that have yet to be invented, edge computing architectures can enable faster clock speeds, sample rates, or bit rates across a distributed system. This is particularly important for industrial applications that require near-real time (sub-5 millisecond) latencies.

Nowhere is this reality more obvious than with the emerging cloud-native mobile networks that are taking on distributed edge architectures. You simply can’t run RAN network functions from a central cloud in environments where latency requirements at some layers are less than a millisecond.

Reliability

Availability – As we witnessed late last year, putting all your eggs in the central cloud basket can expose you to the risk of an outage. Edge computing models provide options for architecting redundancy and resiliency into your critical business systems by distributing the data and workloads that you might have committed to the central cloud across edge nodes. In many ways, this is a shift from the “everything is going to the cloud” mindset that has emerged and dominated IT discourse over the past decade.

Edge computing architectures that are possible with emerging edge cloud technologies will allow you to set up failover across “cloudlets” located in proximity but in different locations, while still having the option to fall back to the central cloud. Again, you have more ways of designing resiliency into your edge applications and systems that foster high availability, which is foundational to user experience. After all, if your application is not available, there is no application. Federated edge cloud computing models and frameworks are something to keep an eye out for.
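
The failover pattern described above can be sketched in a few lines. This is a minimal illustration, not a real edge framework: the cloudlet names and the `fetch` callable are hypothetical stand-ins for whatever transport your system uses.

```python
# Failover sketch: try nearby edge "cloudlets" in order, and fall back
# to the central cloud only when no edge node responds.

from typing import Callable, Sequence

def resilient_request(
    fetch: Callable[[str], str],
    edge_nodes: Sequence[str],
    central_cloud: str,
) -> str:
    """Return the first successful response, preferring edge nodes."""
    for node in (*edge_nodes, central_cloud):
        try:
            return fetch(node)
        except ConnectionError:
            continue  # node unavailable; try the next candidate
    raise ConnectionError("all nodes unavailable")

# Simulated transport: the first cloudlet is down, the second answers.
def fake_fetch(node: str) -> str:
    if node == "cloudlet-a":
        raise ConnectionError
    return f"served by {node}"

print(resilient_request(fake_fetch, ["cloudlet-a", "cloudlet-b"], "central"))
# -> served by cloudlet-b
```

The ordering encodes the policy from the text: proximity first for latency, central cloud last as the resiliency backstop. A federated framework would add health checks and state synchronization on top of this skeleton.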

Consistency – An important aspect of reliability is the consistency of performance and quality of service (QoS). Edge computing models put compute and workloads in proximity to the endpoint client device. This gives enterprises the ability to take advantage of deterministic networks that largely only exist currently at the very edge of the network, as they say in telco parlance. We see these kinds of networks on shop floors of auto plants populated with hundreds of precisely timed robots and assembly lines.

Mobile wireless is the exciting new frontier for edge computing. As mentioned in my podcast on MEC, edge computing is becoming part of the mobile networks as part of 5G rollouts. We can expect that 5G deployments will bring about deterministic service zones where network densification is sufficient to provide industrial-grade reliability. This will enable new edge computing applications that support a new generation of mobile computing use cases.

Why Not Throw in the Kitchen Sink?

You may be asking yourself why I didn’t include things like financials, data compliance, capacity, scalability, and security in the set of first principles of edge computing. Well, those factors tend to be application-specific considerations of an architectural decision to go with edge computing. If you think about it, they are not intrinsic edge computing differentiators vis-a-vis central cloud computing. The central cloud will have more capacity and scale out more than any single edge node.

That being said, the factors I excluded from the first principles list must be carefully evaluated as you select technologies, service providers, and design your edge computing system and distributed application. For the moment, I will argue that they are not fundamentally why you will or won’t consider edge computing as a system architecture.

Let me know what you think about my URLLEC idea. I invite and welcome constructive and healthy debate on this. I am anxious to learn from you and your experience. Together we can make sense of edge computing nonsense and find that precious path to value.

Happy 2022!

Leonard Lee

Leonard Lee is an Acceleration Economy Analyst focusing on Edge Computing for the enterprise market, and founder/managing director of neXt Curve, a research advisory firm focused on cross-domain ICT technology and industry research. neXt Curve advises some of the leading technology companies, regulators, and enterprises. Leonard has 30 years of experience as a management consultant and industry analyst. A former managing partner with Gartner Inc. and partner/principal with IBM and PwC, he has advised and delivered emerging technology and digital business solutions to leading enterprises across a broad range of industries, and has worked closely with numerous Global 500 companies in driving business innovation and value through digital technologies and reinvention.
