Running AI computations seems best suited to central servers, not something to be performed directly on sensors and small devices deployed in the real world that may even be disconnected from the cloud; in other words, on the edge. But we once thought the same about putting the internet on everyday devices, and not only has that become feasible, it's become the norm. To boot, there are plenty of benefits to adding internet capability to edge devices, from smartwatches and smart home systems to cars and tea-making machines.
The Advantages of Edge AI
Edge AI is the distributed computing practice of running artificial intelligence (AI) models locally, on devices out in the world close to where the data is actually gathered, rather than on central servers. The technique is quickly gaining traction across verticals as Big Data, the Internet of Things (IoT), and hardware innovations push the limits of both what's possible and what's required. Edge AI has a few distinct advantages over traditional cloud-based AI computation:
- Latency: If you offload the costly computation to the cloud, you take on a new challenge: communication. In systems where the edge device is meant to act on an AI model's intelligence, raw data and output results must travel back and forth between the edge and the server. That round trip places enormous strain on networks and always incurs latency, which may be unacceptable in applications such as time-critical medical procedures or real-time object detection for collision avoidance in cars.
- Bandwidth usage: As more edge and near-edge devices rely on AI to drive results, offloading their computation to the cloud means they occupy an ever-larger share of the network's total bandwidth, leaving less for other applications that need it.
- Cost: This one's self-explanatory. Building data pipelines through networks and servers, and paying for the compute and transfer behind them, is a costly undertaking.
- Privacy concerns: Many AI systems deal with sensitive data. If that data is sent to a server, it's only fair to ask what the server's owners are doing with it. And because the data has to travel through a network, you run the risk of hacks, interference, and eavesdropping unless you invest significant resources in securing the pipeline.
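The bandwidth argument lends itself to a back-of-the-envelope calculation. The sketch below compares streaming raw camera frames to the cloud against sending only compact on-device inference results; every figure (frame size, frame rate, result size) is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope comparison: streaming raw camera frames to the cloud
# versus sending only on-device inference results.
# All constants below are illustrative assumptions.

FRAME_BYTES = 640 * 480 * 3   # one uncompressed 640x480 RGB frame
FPS = 30                      # assumed camera frame rate
RESULT_BYTES = 200            # assumed size of one JSON detection result

cloud_bps = FRAME_BYTES * FPS * 8   # bits/sec to stream raw frames
edge_bps = RESULT_BYTES * FPS * 8   # bits/sec to send results only

print(f"cloud offload: {cloud_bps / 1e6:.1f} Mbit/s")
print(f"edge AI:       {edge_bps / 1e3:.1f} kbit/s")
print(f"reduction:     {cloud_bps // edge_bps}x")
```

Even with video compression, which narrows the gap considerably, shipping results instead of raw sensor data cuts network usage by orders of magnitude.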
The solution to these problems is performing AI computation closer to the edge, directly on the devices that act on the results. This is possible because AI models are becoming more lightweight at the same time that hardware is becoming more powerful. What seemed like a pipe dream several years ago, AI on the edge, is rapidly becoming industry standard.
Edge AI doesn’t mean cutting all communication between server and edge. In fact, quite the opposite. In so-called edge-to-cloud systems, the bulk of AI processing takes place on the edge and processed insights from multiple devices are aggregated in the server. Updates and security measures can be communicated from server to edge, and the model itself can be re-trained server-side and communicated to keep the edge, well, at the cutting edge.
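The edge-to-cloud pattern described above can be sketched in a few lines. In this minimal, hypothetical example, `edge_infer` stands in for an on-device model (a simple threshold check, not a real neural network), and `aggregate` plays the server's role of combining compact per-device summaries rather than raw data.

```python
# Minimal sketch of an edge-to-cloud pattern: each edge device runs a local
# model and ships only a compact summary; the server aggregates summaries.
# The "model" here is a stand-in threshold check, purely illustrative.

from statistics import mean

def edge_infer(reading: float, threshold: float = 0.5) -> dict:
    """Stand-in for on-device inference: classify a sensor reading locally."""
    return {"anomaly": reading > threshold, "score": reading}

def aggregate(summaries: list) -> dict:
    """Server side: combine compact per-device summaries, never raw data."""
    return {
        "devices": len(summaries),
        "anomalies": sum(s["anomaly"] for s in summaries),
        "mean_score": mean(s["score"] for s in summaries),
    }

readings = [0.2, 0.7, 0.9, 0.1]               # one reading per edge device
summaries = [edge_infer(r) for r in readings]  # computed on each device
report = aggregate(summaries)                  # computed on the server
print(report)
```

In a real deployment the server side would also push model updates back down to the devices, closing the retraining loop the paragraph above describes.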
Edge AI Use Cases
- Robotics: Unmanned vehicles like drones, robots, and self-driving cars need AI for tasks like real-time computer vision and classification. Because these devices are often deployed in environments with limited connectivity and also require real-time analysis, they benefit from edge AI capability. Drones with edge AI, for example, can analyze traffic and weather conditions, support remote-area research, and monitor agriculture.
- IoT: Adding edge AI functionality to Internet of Things systems is extremely powerful. It lets you distribute computation across the network so that edge devices make smarter decisions in real time, with enterprise applications such as inventory management and public infrastructure applications such as monitoring the energy grid.
- Next-generation consumer devices: Extended reality devices, in particular, benefit from a distributed computing approach. Augmented reality headwear that overlays virtual content onto the real world requires real-time computer vision to function, to say nothing of the other AI-driven features a consumer device might offer, from natural language processing to voice recognition.
Several companies are building in the edge AI space, and it's more feasible to get started today than ever before. Computer vision company viso.ai built a software platform to power edge AI vision tasks. Dell EMC, AWS, and IBM offer a range of enterprise-ready edge AI IoT solutions. Google developed the Edge TPU, purpose-built hardware that performs AI computation on-device and also integrates with Google Cloud.
Wrap-Up
All in all, the ability to draw AI computation closer to the edge is key to implementing more efficient, capable, and automated data systems across industries. Edge AI is unlocking a new class of smart-edge devices, from consumer-facing extended reality (XR) headsets to IoT sensors to autonomous robotics. While it won't replace fully cloud-based AI, it will certainly play a bigger part alongside the cloud moving forward. Smart leaders should keep watching the space.
Looking for real-world insights into artificial intelligence and hyperautomation? Subscribe to the AI and Hyperautomation channel.