What is the Edge and why are we all talking about it?

Edge Computing Today

Article from Issue 234/2020

After the cloud came the Edge. We take a look at the Edge computing phenomenon and attempt to assess what all the fuss is about.

Edge computing is a popular term in the high-tech media, and, like many buzzwords that rise to claim a place in the limelight, the term "Edge" appears to have emerged fully formed before the industry settled on a clear definition. So what is Edge computing, and how does it differ from other approaches? How is Edge computing related to IoT or other contemporary technologies? We decided it was time for a visit to the Edge.

Beyond the Cloud

For years now, large cloud providers have attempted to entice customers with the benefits of managing their data and infrastructure from a central, cloud-based location. For many companies, the cloud means abandoning their own on-premises data center and instead trusting their data to AWS, Azure, or another cloud company. The goal of the cloud is centralization. The data is all in one place, managed by experts with economies of scale. Security, fault tolerance, and other essential tasks are handled from the central facility. In many cases, all the data and processing power for an entire company might be in one or two locations, with branch offices accessing it from all over the world.

Edge computing is the exact opposite of this centralized cloud paradigm. The goal of Edge computing is to provide similar cloud-like services, but to move computing resources to the farthest edge of the environment – geographically close to where the data is gathered and accessed.

From CDNs to the Edge

The idea of placing computing resources at the edge of the network is nothing new. Around 20 years ago, the Web 2.0 era ushered in a new vision for the Internet. The volume of data skyrocketed as new services for social media, images, and video came into common use. The industry soon found that it was unable to develop new higher-capacity hardware as fast as the market demanded.

As early as the end of the 1990s, the idea of Content Delivery Networks (CDNs) was born. CDNs follow a very simple principle: Instead of serving a movie from a single server in the USA, a company installs infrastructure near the customer and keeps a copy of the video there. Keeping the connection confined to a small region reduces latency, and it also simplifies the communication path, requiring fewer routers and less overall traffic to deliver the data to the user.
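At its core, this "serve from the nearest copy" principle is just a nearest-node lookup. The sketch below illustrates the idea with made-up node names and coordinates (any real CDN uses far more sophisticated routing, typically via DNS or anycast):

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical edge nodes with (latitude, longitude) coordinates.
EDGE_NODES = {
    "us-east": (40.7, -74.0),    # New York
    "eu-west": (50.1, 8.7),      # Frankfurt
    "ap-south": (1.35, 103.8),   # Singapore
}

def haversine_km(a, b):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_node(client_pos):
    """Return the name of the edge node geographically closest to the client."""
    return min(EDGE_NODES, key=lambda n: haversine_km(client_pos, EDGE_NODES[n]))
```

A client in Paris would be directed to the Frankfurt node, one in Los Angeles to New York, and so on, keeping each connection within its region.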

The original CDN systems were primarily designed to offer storage, but today's Edge environment requires a much more elaborate palette of services. New technologies such as robotics, the Internet of Things, remote sensing, and real-time monitoring handle lots of data, but they also require lots of computing power.

Home automation is a good example. Classic household appliances are replaced with state-of-the-art versions that provide Internet access and can be controlled remotely. Decisions are made away from the device, and the results are transferred back to it through simple commands. Voice-activated tools like Alexa add further complications. When a user talks to Alexa, it fields the command and sends the audio file to a server, where it is analyzed and interpreted. Alexa then receives the command back in machine language so that it can initiate an action. The longer the distance between Alexa and the server that interprets the command, the more sluggishly Alexa behaves.

A home assistant turning up the thermostat can probably afford a little latency, but consider a robotics installation on a factory floor or a set of sensors that monitor environmental data and make complex decisions in real time. These scenarios would potentially benefit from some form of cloud-like consolidation of processing power, but the idea of sending every command and sensor reading to a massive server in another region of the country hundreds of miles away is severely limiting and, in some cases, totally unworkable. The Edge offers a framework for imagining how the same technology would work with lots of mini-data centers scattered around wherever they are needed, instead of a massive data center serving a radius of a thousand miles.
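A back-of-the-envelope calculation shows why distance matters. Light in optical fiber travels at roughly 200,000 km/s (about two thirds of its speed in a vacuum), so propagation delay alone puts a floor under the round-trip time. The numbers below are illustrative assumptions, not measurements:

```python
# Approximate signal speed in optical fiber, in km/s.
FIBER_KM_PER_S = 200_000

def round_trip_ms(distance_km, processing_ms=5):
    """Back-of-envelope round-trip time: propagation there and back,
    plus a fixed processing budget. Router and queuing overhead,
    which only add to the total, are ignored here."""
    propagation_ms = 2 * distance_km / FIBER_KM_PER_S * 1000
    return propagation_ms + processing_ms

cloud_ms = round_trip_ms(1500)  # distant regional data center: 20 ms
edge_ms = round_trip_ms(15)     # edge node in the same city: ~5 ms
```

For a thermostat, the difference is irrelevant; for a robot arm or a vehicle making decisions many times per second, the propagation term of a distant data center eats most of the latency budget before any real work is done.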

An Edge Example: Autonomous Driving

All the major car manufacturers have been researching autonomous driving for years, and various Silicon Valley companies have also explored the possibilities of cars without drivers. Autonomous driving is a good example of why the experts believe Edge computing will figure so prominently in the future of IT. Clearly, it would not be practical for a central cloud infrastructure to process telemetry data and traffic information for a large region. If data from a car in North Dakota is first sent to the data center in New York, where it is evaluated and converted into instructions, which are then sent back to the car in North Dakota, the information would be out of date before it reaches the vehicle.

Letting autonomous vehicles crunch their own data does not appear to be a meaningful alternative. Evaluating all the sensor data of an autonomous vehicle requires a fair lump of compute power, and it simply does not make sense to turn every car into a small roaming compute center. Ultimately, energy considerations also play a role in ruling out on-vehicle data processing, because the car of the future will be electric.

Consequently, it will be necessary to provide cloud-like, off-vehicle data processing that is close enough to the vehicle to minimize latency – an ideal scenario for Edge computing. This solution could entail dividing a country into regions and then rolling out local islands for compute and storage. Ultimately, every car manufacturer will have to do their own thing, but similarities will undoubtedly exist between the designs.
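One naive way to picture such a regional layout is a flat grid of compute islands, each responsible for a fixed-size cell, with a vehicle's state replicated to neighboring cells before it crosses a boundary. This is purely a sketch of the idea; the grid size and coordinate scheme are invented for illustration:

```python
REGION_SIZE_KM = 100  # hypothetical edge length of one compute island's cell

def region_for(x_km, y_km):
    """Map a vehicle's position (km on a flat national grid) to the
    grid cell of the compute island responsible for it."""
    return (int(x_km // REGION_SIZE_KM), int(y_km // REGION_SIZE_KM))

def handover_candidates(x_km, y_km):
    """The current cell plus its eight neighbors; a moving vehicle's
    session state could be replicated there ahead of a boundary crossing."""
    cx, cy = region_for(x_km, y_km)
    return [(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
```

Real deployments would follow road networks and population density rather than a uniform grid, but the handover problem, pre-positioning state in the neighboring island before the vehicle arrives, remains the same.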

The challenge will be to provide these small ad hoc mini-data centers wherever they are needed. Big cities would probably have to be divided into several areas – but, especially in city centers, comprehensive IT infrastructure is rarely available. Much of this infrastructure will need to be built from scratch or developed through complex sharing arrangements with existing companies. In the long run, the Edge will require lots of well-ventilated rooms scattered around the world, each with a server cabinet and a capable (possibly redundant) power supply.
