As we delve into edge computing, one topic that has come up repeatedly is the applicability of Kubernetes as an orchestration platform. Originally created for large-scale cloud environments, Kubernetes has become popular from the datacenter to the edge. But is it really a good fit? This will certainly become a topic of discussion at Edge Field Day!
Kubernetes and Edge
Kubernetes is an open-source container orchestration system that has become popular in cloud and enterprise for managing and scaling containerized applications. Although intended for use in the cloud, Kubernetes can also be used to manage and deploy containers in other environments, including at the edge.
As previously discussed, edge computing refers to the processing and storage of data in a dispersed manner, closer to consumers and sensors. Edge compute can be beneficial for applications that require low latency or have data transmission or processing challenges. By deploying Kubernetes at the edge, organizations are attempting to move containerized applications closer to the source of the data while getting the management and orchestration benefits of the cloud.
Using Kubernetes at the edge has some of the same benefits as in the cloud or datacenter: the ability to easily deploy, scale, and manage containerized applications using a centralized “control plane.” The scalability of Kubernetes is particularly appealing in edge environments due to the number of devices and the amount of data collected there. The “cattle” approach is also appealing, since most edge devices are deployed autonomously in unsupervised environments and often need to be remotely wiped and reconfigured.
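That declarative, centralized style of management is easy to see in a standard Deployment manifest. As a minimal sketch (the application name, image, and resource figures here are hypothetical, not from any real edge product), scaling an edge workload is just a matter of editing one field and letting the control plane reconcile the fleet:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-sensor-gateway        # hypothetical edge application
spec:
  replicas: 3                      # scale up or down by changing this one field
  selector:
    matchLabels:
      app: edge-sensor-gateway
  template:
    metadata:
      labels:
        app: edge-sensor-gateway
    spec:
      containers:
      - name: gateway
        image: example.com/edge-sensor-gateway:1.0   # placeholder image
        resources:
          limits:                  # resource limits matter on constrained edge hardware
            memory: "128Mi"
            cpu: "250m"
```

The same manifest works unchanged in the cloud or at the edge, which is exactly the “cattle” appeal: a wiped device can be re-joined to the cluster and the control plane re-schedules the workload without per-device configuration.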
Organizations deploying Kubernetes at the edge are typically seeking to integrate with Kubernetes environments in the cloud or datacenter. A wide range of applications can be deployed as Kubernetes-controlled containers, and this has become the standard in modern IT. This includes a wide range of edge-specific applications, such as edge gateways, IoT solutions, and edge analytics platforms.
Kubernetes can also be helpful in terms of security. Security is one of the key concerns when it comes to edge deployments, and Kubernetes can provide security features like role-based access control, network segmentation, and encryption. When used properly, these can help organizations secure their containerized applications at the edge.
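Two of the features mentioned above are expressed as ordinary Kubernetes objects. As a hedged sketch (the `edge-apps` namespace and the `role=gateway` label are invented for illustration), a Role restricts what an account can do, and a NetworkPolicy provides the network segmentation:

```yaml
# RBAC: a Role limited to read-only access to pods in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: edge-apps             # hypothetical namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Network segmentation: only admit traffic from pods labeled role=gateway
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-gateway
  namespace: edge-apps
spec:
  podSelector: {}                  # applies to all pods in the namespace
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: gateway
```

Note that NetworkPolicy enforcement depends on the CNI plugin in use, so this is one of the settings that has to be verified per edge distribution rather than assumed.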
Shortcomings of Kubernetes at the Edge
Kubernetes brings a lot of benefits for managing containerized applications, but it does have some limitations when it comes to edge environments.
Perhaps the biggest shortcoming of Kubernetes at the edge is its complexity. Kubernetes is complex to set up and manage properly, and must be carefully tailored for automated operation in edge environments where human interaction is at a premium. Most edge Kubernetes implementations are designed to be fully “lights out, hands off,” but this poses a great engineering challenge. I’ve recently discussed this with Edgegap, and recommend reading Brian Chambers’ Medium post for more on this concern.
Another issue for Kubernetes at the edge is network latency and outages. Kubernetes relies on a centralized control plane, and network latency or instability can interfere with proper operation. Considering that edge computing is all about low-latency processing for applications such as retail or IoT, this is a real worry. But many companies (including Edge Field Day presenter Mako Networks) are developing redundant networking capability to help make sure these applications stay connected.
Limited resources can pose another challenge for Kubernetes in edge environments. Most edge servers have limited system resources (memory, storage, and CPU power), and even “light” Kubernetes can be taxing. I deployed Rancher on a fleet of Atom NUCs in my lab, and k3s was taking up more CPU, RAM, and storage than my test applications. So Kubernetes may not be the best choice for very light systems. Luckily, hardware advances all the time, and we’re already seeing a proliferation of RAM and CPU in devices like the Intel NUC.
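One practical way to trim the footprint is to disable the optional components that a lightweight distribution bundles by default. As a sketch using k3s (these `--disable` flags are documented k3s server options, but whether you can live without the bundled ingress, load balancer, and metrics server depends on your workloads):

```shell
# Install k3s as a single-node server with optional add-ons disabled
# to reduce CPU, RAM, and storage overhead on constrained hardware.
curl -sfL https://get.k3s.io | sh -s - server \
  --disable traefik \
  --disable servicelb \
  --disable metrics-server
```

Even so, the control plane itself still runs on the device, so on very small systems the overhead I saw on my Atom NUCs may remain the deciding factor.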
Lastly, Kubernetes (and general-purpose Linux) support for Arm architectures and other specialized hardware isn’t quite where we’d like it. This can be a problem for edge deployments that use specialized hardware, especially locked-down special-purpose appliances like routers or storage systems. This is also improving rapidly, however, and many of these devices are bringing better APIs and native Kubernetes support.
It is worth noting that Kubernetes is being adapted to work in edge environments, with solutions like edge clusters, Rancher, and k3s (a lightweight Kubernetes distribution for edge computing). These are designed to overcome some of the limitations of Kubernetes at the edge and make it more compact, easier to deploy, and easier to manage.
Kubernetes is a powerful container orchestration system that can be used to manage and deploy containerized applications at the edge. By using Kubernetes at the edge, organizations can easily scale and manage their applications, integrate with other technologies, and secure their applications. But complexity, network latency, limited resources, and support for edge devices are all concerns. As edge computing continues to grow in popularity and support for Kubernetes expands, I expect that it will become a key tool for managing and deploying containerized applications at the edge. Watch for Edge Field Day on February 22 and 23, 2023 as we dive into all these questions!