Although the topological concept of edge computing may be decades old, the limitations imposed by the centralized implementation of hyperscale cloud have thrust edge topology into the limelight. Edge computing places content, data and processing closer to the applications, things and users that consume and interact with them. It takes the classic IT workload quandary of “What goes where?” and encourages workload and capability placement that optimizes the balance of latency, bandwidth, autonomy and security across a continuum of options, from hyperscale cloud data centers to home thermostats.
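The “What goes where?” trade-off can be pictured as a weighted scoring of candidate placement tiers. The sketch below is purely illustrative: the tier names, criterion scores and weights are invented assumptions, not a standard model, but they show how latency, bandwidth, autonomy and security might be balanced for one workload.

```python
# Hypothetical sketch of workload placement across the cloud-to-edge
# continuum. All tier names, scores and weights are illustrative
# assumptions, not data from any real deployment.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    latency: float    # 0 = worst for this workload, 1 = best
    bandwidth: float
    autonomy: float
    security: float

def score(tier: Tier, weights: dict) -> float:
    """Weighted sum over the four placement criteria."""
    return (weights["latency"] * tier.latency
            + weights["bandwidth"] * tier.bandwidth
            + weights["autonomy"] * tier.autonomy
            + weights["security"] * tier.security)

tiers = [
    Tier("hyperscale cloud", latency=0.2, bandwidth=0.3, autonomy=0.1, security=0.9),
    Tier("regional edge",    latency=0.6, bandwidth=0.6, autonomy=0.5, security=0.7),
    Tier("on-device",        latency=0.9, bandwidth=0.9, autonomy=0.9, security=0.5),
]

# A latency-sensitive workload weights latency and autonomy heavily.
weights = {"latency": 0.4, "bandwidth": 0.2, "autonomy": 0.3, "security": 0.1}
best = max(tiers, key=lambda t: score(t, weights))
print(best.name)  # under these weights, the on-device tier wins
```

Changing the weights (for example, favoring security over latency) shifts the winning tier back toward the centralized end of the continuum, which is the point of treating placement as a balance rather than a binary cloud-versus-edge choice.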
Edge computing doesn’t compete with cloud computing; rather, it complements and completes it. The description of a decentralized edge location implies a centralized alter ego (i.e., the “core”). This core is a centralized data center, ranging from a hyperscale cloud provider with massive data centers to an individual enterprise data center of any size.
The question of where to store and process data (see the figure below) has swung between highly centralized approaches (such as server farms or centralized cloud services) and more-decentralized approaches (such as PCs, mobile devices and people). Distributed deployment models are best for addressing connectivity and latency challenges and bandwidth constraints, and for exploiting the greater processing power and storage now embedded at the edge.