Architecting an Edge Cloud

In today’s networked world, the data centers run by cloud vendors are interconnected, and data streams over the network between hosted applications and end users. A large content provider might lease capacity in such a data center or operate its own; a global operation might have one or two per continent. These large data centers sit at the center of the cloud, which means they are multiple hops removed from the end user.

A burgeoning category of applications and content, including streaming video to mobile devices, latency-sensitive IoT apps, and security filters, needs to be pushed deeper into the network: closer to the edge and closer to the user. This shift to the edge is driven by a number of factors, latency chief among them, but cost is also part of the equation.

Still, cloud resources remain essential for cost-effectively analyzing large volumes of data over time, analysis that can feed models deployed to devices operating at the edge, whether in a plane, controlling autonomous vehicles, re-calibrating factory machines, or distributed throughout a so-called smart city.
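
To make the division of labor concrete, here is a minimal sketch of that cloud-to-edge pattern: heavy analysis over a large history of data runs in the cloud, and only a small model artifact is shipped to edge devices, which then apply it locally with no round trip. All names here (calibrate_in_cloud, apply_at_edge, the simulated sensor data) are hypothetical illustrations, not part of any particular platform.

```python
# Sketch of the cloud-train / edge-deploy pattern: fit a simple correction
# model over bulk data in the cloud, ship the tiny parameter set to the edge,
# and apply it locally on each new reading.
import json
import numpy as np


def calibrate_in_cloud(readings: np.ndarray, targets: np.ndarray) -> dict:
    """Cloud side: fit a correction model over a large history of sensor readings."""
    slope, intercept = np.polyfit(readings, targets, deg=1)
    # The resulting parameters are small enough to push to every edge device.
    return {"slope": float(slope), "intercept": float(intercept)}


def apply_at_edge(model: dict, raw_reading: float) -> float:
    """Edge side: apply the downloaded parameters locally, no cloud round trip."""
    return model["slope"] * raw_reading + model["intercept"]


if __name__ == "__main__":
    # Simulated history: noisy sensor readings paired with ground-truth values.
    rng = np.random.default_rng(0)
    truth = rng.uniform(0.0, 100.0, size=10_000)
    readings = 0.95 * truth + 2.0 + rng.normal(0.0, 0.5, size=truth.shape)

    model = calibrate_in_cloud(readings, truth)   # heavy analysis stays in the cloud
    payload = json.dumps(model)                   # tiny artifact shipped to the edge

    edge_model = json.loads(payload)
    print(apply_at_edge(edge_model, 47.3))        # low-latency local inference
```

The same shape generalizes: the "model" could be a neural network exported for on-device inference rather than two coefficients, but the architectural point is identical, with bulk analysis centralized and latency-sensitive decisions pushed to the edge.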