The edge isn’t the same thing to everyone: some talk about equipment used outside the data center, while others mean equipment that lives in someone else’s location. The difference between this far edge and near edge is the topic of this episode of Utilizing Edge, featuring Andrew Green and Alastair Cooke, Research Analysts at GigaOm, and Stephen Foskett. Andrew draws the line at 20 ms round-trip, the point at which a user perceives a resource as remote rather than local. From the perspective of an application or service, meeting this limit requires a different approach to delivery. One approach is to distribute points of presence (PoPs) around the world, closer to users, with compute and storage rather than just caching. This could entail deploying hundreds of points of presence worldwide, and perhaps even more. Technologies like Kubernetes, serverless, and function-as-a-service are being used today, and they are being deployed even beyond service provider locations.
Exploring the Distinction Between Near Edge and Far Edge Computing
In the latest episode of the Utilizing Edge podcast from Gestalt IT, Stephen Foskett and Alastair Cooke delve into the world of edge computing, specifically focusing on the differentiation between “near edge” and “far edge” solutions. Joined by Andrew Green, a research analyst specializing in edge computing, the podcast examines infrastructure options for both types of edge computing and highlights the significance of low latency in delivering content and services closer to end users.
Andrew Green sheds light on his work in defining and studying edge computing, with a particular emphasis on latency. He defines the near edge as locations that offer round-trip latency of 20 ms or less, while the far edge targets extremely low latency in the microsecond range. The primary objective is to provide content and services in proximity to end users to minimize latency, thus enhancing the user experience.
Delivering services closer to end users is crucial for minimizing latency, as simply increasing network speed cannot overcome the limitations of signal propagation delay. Higher latency can negatively impact real-time interactive applications such as gaming. The concept of a unified edge fabric enables service providers to deploy points of presence (PoPs) globally, reducing latency and improving interactivity for users.
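The propagation-delay point above can be made concrete with a back-of-the-envelope calculation. Light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, about 200 km per millisecond, so distance alone sets a hard floor on round-trip time regardless of bandwidth. This is an illustrative sketch, not a figure cited in the episode:

```python
# Why distance, not bandwidth, sets the latency floor: signals in fiber
# cover roughly 200 km per millisecond, so round-trip time grows by
# about 1 ms for every 100 km of one-way path length.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # approximate propagation speed in glass

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time from propagation delay alone."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

for distance in (100, 1000, 4000):
    print(f"{distance:>5} km -> at least {min_round_trip_ms(distance):.1f} ms RTT")
```

By this estimate, a PoP more than about 2,000 km away cannot meet a 20 ms round-trip target even before queuing and processing delays are added, which is why services must be physically distributed.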
Overcoming the challenge of bridging the 20 ms latency threshold has been tackled through the distribution of PoPs globally. This distribution allows users to access services with lower latency regardless of their location, thus enhancing the overall user experience.
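At its simplest, routing a user to a distributed PoP comes down to steering them toward the lowest measured latency. The sketch below is hypothetical (PoP names and latency figures are illustrative, not from the episode):

```python
# Hypothetical PoP selection: given measured round-trip times from a
# user to each point of presence, route the user to the PoP with the
# lowest RTT. Under a ~20 ms round trip, the service feels "local".

def choose_pop(rtt_by_pop: dict) -> str:
    """Return the name of the PoP with the lowest measured RTT."""
    return min(rtt_by_pop, key=rtt_by_pop.get)

# Illustrative measurements in milliseconds (not real data).
measurements = {"frankfurt": 12.5, "london": 18.0, "virginia": 95.0}
print(choose_pop(measurements))  # frankfurt
```

Real traffic steering uses mechanisms such as anycast routing or DNS-based load balancing rather than explicit per-user measurement, but the underlying goal is the same: minimize the distance-driven latency floor.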
The choice of technology at the edge depends on the specific use case. This can involve leveraging existing cloud providers’ offerings or building custom infrastructure for distributed edge locations. Platforms such as Kubernetes are well suited to running distributed edge applications and accelerating their delivery to end users.
The deployment of edge technologies varies from globally distributed service provider locations to more localized deployments at retail, industrial, or user-level locations. Each deployment model offers distinct benefits, but how extensively edge infrastructure can practically be deployed is still a matter of discussion.
The deployment of edge computing is heavily influenced by economic and financial considerations. Edge infrastructure can be placed anywhere along the network, from home routers to last-mile providers. The decision depends on the specific use case and cost efficiency.
Real-world use cases for distributed networks include gaming, media delivery, and video on demand, where minimizing latency and data transfer costs are essential. For applications like video surveillance, processing data locally at the edge is preferable over streaming it to a central location for analysis.
Deploying compute and storage at the edge enables various real-world use cases, such as local video analytics and fraud detection. The advantages include lower latency, reduced data transfer costs, and the ability to meet scalability requirements. With edge computing exhibiting cloud-like behavior, on-demand consumption and scalability can be optimized through concepts like cloudbursting. Integrating multiple environments and orchestrating across edge and cloud resources is crucial for seamless connectivity and efficient management.
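The cloudbursting idea mentioned above can be sketched as a simple placement policy: run work on local edge capacity while it lasts, and overflow the remainder to the cloud. The capacity figure and job names below are hypothetical, chosen only to illustrate the pattern:

```python
# Minimal cloudbursting sketch: place jobs on the edge site until its
# (assumed) capacity is exhausted, then burst the overflow to cloud
# resources. Capacity and job names are illustrative assumptions.

EDGE_CAPACITY = 4  # concurrent jobs the edge site can absorb (assumed)

def dispatch(jobs: list, edge_capacity: int = EDGE_CAPACITY) -> dict:
    """Map each job to 'edge' until capacity runs out, then to 'cloud'."""
    placement = {}
    for i, job in enumerate(jobs):
        placement[job] = "edge" if i < edge_capacity else "cloud"
    return placement

jobs = [f"job-{n}" for n in range(6)]
print(dispatch(jobs))  # first four on edge, last two burst to cloud
```

A real orchestrator would weigh latency sensitivity, data gravity, and cost per job rather than simple overflow, but the edge-first-then-cloud shape is the essence of the concept.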
The concept of an abstracted edge platform is intriguing, spanning from the near edge to the far edge and integrating local processing with cloud services. Intelligent routers with virtualization capabilities are evolving into general-purpose tools, enabling the execution of multiple applications at the edge.
Local processing acts as a data filter, down-sampling and sending only the relevant data for further processing at higher levels. As data generation and processing move closer to the edge, the need for extensive data storage is reduced, with a focus on identifying and retaining valuable data while discarding the rest. Threat detection using AI and machine learning becomes crucial in this context.
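The edge-as-filter pattern described above can be illustrated with a small sketch: score incoming readings locally and forward only those above an interest threshold, discarding the rest. The threshold, field names, and data are assumptions for illustration, not details from the episode:

```python
# Sketch of edge-side filtering: retain only readings whose score
# crosses an (assumed) interest threshold, and forward that reduced
# set upstream. Everything else is discarded at the edge.

THRESHOLD = 0.8  # hypothetical "interesting event" score cutoff

def filter_at_edge(readings: list) -> list:
    """Keep only high-value readings for upstream processing."""
    return [r for r in readings if r["score"] >= THRESHOLD]

raw = [
    {"id": 1, "score": 0.20},  # routine data, dropped locally
    {"id": 2, "score": 0.95},  # anomalous, forwarded upstream
    {"id": 3, "score": 0.50},  # routine data, dropped locally
]
print(filter_at_edge(raw))  # only reading 2 survives the edge filter
```

In a surveillance or threat-detection deployment, the scoring step would be an ML model running on edge hardware; the filtering principle, however, is the same: move the decision about what data is worth keeping as close to the source as possible.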
This episode of Utilizing Edge offers valuable insights into the world of edge computing, emphasizing the distinction between near edge and far edge solutions. The discussion highlights the significance of low latency in delivering content and services closer to end users. With ongoing advancements and deployments, edge computing continues to evolve, unlocking new possibilities and use cases across various industries.
- Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen’s writing at GestaltIT.com and on Twitter at @SFoskett.
- Alastair Cooke, independent analyst and consultant working with virtualization and data center technologies. Connect with Alastair on LinkedIn and Twitter. Read his articles on his website.
- Andrew Green, research analyst at GigaOm. Check out his work on the GigaOm website and connect with him on LinkedIn.
For your weekly dose of Utilizing Edge, subscribe to our podcast on your favorite podcast app through Anchor FM and check out more Utilizing Tech podcast episodes on the dedicated website, https://utilizingtech.com/.