In recent years, the narrative around artificial intelligence (AI) has been dominated by the emergence of large language models (LLMs) such as ChatGPT and Bard. These sophisticated systems run primarily in centralized clouds. Even though they may be accessed via APIs, making them seem embedded within other applications, the actual processing happens on the provider's centralized servers.
For many applications, centralized processing isn’t an issue. For instance, using ChatGPT to understand a new technological domain offers quick and insightful responses. However, this centralized approach doesn’t fit all scenarios. In many cases, AI needs to be closer to the end user for optimal performance. In sectors like manufacturing, this could mean having the AI operate directly at the manufacturing facility or warehouse. For certain devices, the AI might even need to run on the device itself.
I encountered one of the drawbacks of centralized AI recently when I tried to use my Amazon Echo portable speaker out by the pool. I was not within WiFi range, so every command I gave Alexa was met with an "I can't connect to the Internet" message. I couldn't even pair it with my phone over Bluetooth without an Internet connection.
A similar problem occurred recently in San Francisco, when 10 driverless taxis stalled due to wireless bandwidth issues, backing up traffic and causing gridlock for several blocks. You've probably been at a concert or sporting event where everyone was trying to use their smartphone at once and Internet bandwidth ground to a halt. Any AI process relying on that bandwidth would simply stop working.
Moving Compute Power Closer to Workloads
Companies are recognizing the limitations of centralized AI and are leveraging advances in networking, algorithms, and edge computing to run AI workloads closer to where they’re most needed. This shift is driven by the high computing demands of AI, which have traditionally been met by large data-center facilities.
Emerging systems that distribute these workloads across interconnected computers offer potential benefits, including reduced costs and decreased latency. This approach, known as decentralized clouds or distributed data centers, often integrates with the Internet of Things (IoT) and edge computing.
Take DHL Supply Chain as an example. They use a decentralized cloud system for AI-powered computer-vision applications in their warehouses, allowing their robots to efficiently identify and handle packages without relying on an external cloud provider. Similarly, Estes Express Lines uses a decentralized system for AI-enabled vision software in truck-mounted cameras, alerting drivers to road hazards in real-time.
From a manufacturing perspective, the move to edge AI is not just about efficiency but also about resilience. As the CIO of a manufacturing organization, I’ve witnessed firsthand the challenges of relying solely on centralized cloud servers. While we’ve managed to establish reliable internet connections for our facilities, there are times when we face connectivity issues. An over-reliance on centralized cloud servers can lead to halted production or shipping delays during these disruptions.
To mitigate these risks, we’ve invested in edge computing resources. While they don’t replace every cloud function, they ensure continuity in operations during internet service disruptions. Moreover, certain functions, like real-time defect detection in manufacturing, demand instantaneous processing and low latency. Centralized AI applications might struggle to meet these requirements, but edge AI can deliver.
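The continuity pattern described above can be sketched in a few lines: prefer the cloud model when it is reachable, but fall back to an on-device model when connectivity fails so the line keeps running. This is a minimal illustration, not DHL's or Estes' actual system; the function names (`cloud_infer`, `edge_infer`, `classify_part`) and the trivial defect rule are hypothetical placeholders.

```python
def cloud_infer(image):
    # Placeholder for a call to a cloud inference API; here it simulates
    # a network outage by always raising. A real version would make an
    # HTTP request with a short timeout.
    raise ConnectionError("no internet connection")

def edge_infer(image):
    # Placeholder for a lightweight on-device model: a trivial rule
    # standing in for a local computer-vision classifier.
    return "defect" if image.get("scratch_depth_mm", 0) > 0.5 else "ok"

def classify_part(image):
    """Prefer the cloud model, but keep operations running on the
    edge model whenever connectivity fails or times out."""
    try:
        return cloud_infer(image)
    except (ConnectionError, TimeoutError):
        return edge_infer(image)

print(classify_part({"scratch_depth_mm": 0.7}))  # cloud fails, edge answers: defect
```

The key design choice is that the fallback path never depends on the network, so a WAN outage degrades accuracy at worst rather than halting production.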
Cost Considerations
Navigating the shift of AI to the edge comes with its own financial considerations. Up front, there's a tangible investment in specialized edge devices equipped to handle AI tasks. Maintenance expenses can also rise, especially when devices are stationed in remote locations, and adapting AI models to run on these devices may require additional software development.
Cost savings can offset those increases. Reducing dependency on cloud services lowers the associated fees, and processing data locally means less data shuttling to and from the cloud, which eases bandwidth expenses.
As the volume of data and the need for real-time processing continue to grow, the shift from centralized AI to edge AI is becoming more pronounced. For industries like manufacturing, where every second counts and disruptions can be costly, edge AI offers a promising solution to maintain efficiency and resilience.