Every single tech development these days is either mercilessly hyped or unfairly ignored, it seems. Edge computing is surely a tech development, so we should be able to quickly determine which of these two conditions applies and let everyone go back to their coffee, right? No, because the truth turns out to be a third possibility: both conditions apply at once. We’re hyping the edge while ignoring its reality.
When we started to talk about edge computing, the definition was straightforward. There are applications that demand low latency, meaning a very small interval of time between an event that needs to be processed and the result of that processing. So small, in fact, that we needed special network services (remember, low latency was a 5G claim), and so small that even those network improvements weren’t enough. We needed to move computing closer to the user, meaning closer to the point where the events were generated and the results were delivered. The edge. The edge was the new cloud, the driver of new network services. Most recently, it became a requirement for AI. The gift that kept on giving, no matter where you were in tech.
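To see why distance alone can defeat even a perfect network, consider a back-of-envelope calculation. Light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, about 200 kilometers per millisecond, so round-trip propagation delay grows with every kilometer between the event and the computer that handles it. The sketch below works through a few hypothetical hosting distances against an illustrative 10-millisecond budget; the distances and the budget are assumptions for illustration, not figures from any particular deployment.

```python
# Back-of-envelope sketch: round-trip propagation delay over fiber alone,
# ignoring processing, queuing, and radio-access time.
# Assumption: light in fiber travels at roughly 2/3 the speed of light
# in a vacuum, i.e. about 200 km per millisecond.

FIBER_KM_PER_MS = 200.0  # ~2e8 m/s, expressed as km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Hypothetical hosting points for the same event source; the distances
# and the 10 ms budget are illustrative assumptions only.
BUDGET_MS = 10.0
for label, km in [("on-premises edge", 1),
                  ("metro edge", 50),
                  ("regional cloud", 1500)]:
    rtt = round_trip_ms(km)
    verdict = "fits" if rtt < BUDGET_MS else "blows"
    print(f"{label:>16}: {km:>5} km -> {rtt:6.2f} ms round trip "
          f"({verdict} a {BUDGET_MS:.0f} ms budget before any processing)")
```

At 1,500 km the round trip alone costs about 15 milliseconds, over budget before a single instruction runs, which is the whole case for moving the computing closer.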
Not only that, the edge is real. You can identify whole verticals, like manufacturing, warehousing, transportation, utilities, telecommunications, and even government, where edge applications are already in place. We have companies and industries that are utterly dependent on their edge computing. So many, in fact, that by this point you’re probably wondering why I call edge computing a hype wave, or whether, in the case of the edge, the hype was justified. Well, one good reason is that we’ve had all this for decades. If the edge is real, it’s old. How did we make it new again?
Remember distributed computing? Minicomputers and personal computers? For decades, we’ve built computers that were designed to be distributed, meaning put in places outside the data center. I’m typing on one now, and you’re reading this on another. Smartphones and smartwatches are forms of distributed computing. So are the industrial controllers that have been running manufacturing and other applications for decades. When you check out at a big store, you’re using a distributed computing application, and you may be getting your shopping money from an ATM via distributed computing, too.
Distributed computing was based on a simple truth: if you have a concentration of activity that depends on computer access, having the computer that supports the activity locked in a distant data center invites major disruptions. If you have a critical piece of gear in your home, you don’t want to run a couple hundred feet of extension cord to plug it into a neighbor’s outlet. And yes, this whole distributed thing went off the rails as departments started buying their own systems to get around central IT development and deployment delays. And yes, cloud computing got its start with “server consolidation” to bring some of those distributed systems back under central control. And all of that is how the solid, sensible notion of the edge got carried off into the land of hype.
It’s simple logic. Edge computing is a form of distributed computing, and distributed computing was an early driver of the cloud. Thus, edge computing must be a driver of the cloud, and just like those distributed servers, edge computers should be replaced by the cloud. The applications we’re currently running on premises, close to the activities they support, should be turned into cloud applications. And since these applications are highly latency-sensitive, we need to move cloud hosting points to the very edge of the network so we can reach them with minimal delay. The edge is the past, the present, the future.