I’m a networking person. I didn’t start out that way, but networking eventually made sense to me. It was orderly. It was easy to understand. When I thought about how packets traveled, the procedural part of it all appealed to me more than drivers or disk sectors or spectrum analysis. But the world is not static. I have found myself needing to pick up new skills over the years. Novell servers gave way to virtual servers, and now to the cloud and containers. It’s hard to wrap your head around where technology is going sometimes. But if you can find something that helps you anchor what you’re doing, it can all suddenly make sense.
A Mesh-y Definition
Service mesh is a term I hear a lot when people start talking about Kubernetes pods and cloud computing. No matter what the problem might be, the answer is a service mesh. There are a ton of solutions out there, both commercial and open source. However, for all those solutions, I could never figure out what exactly a service mesh was. At least, I couldn’t until NGINX presented during Networking Field Day 21 this past October. Here’s a wonderful video from Faisal Memon that does what I haven’t been able to do before: describe a service mesh.
It’s twelve minutes that made more sense to me than anything else I’ve seen on the subject. Why? Because Faisal frames it as something I’ve already heard. A service mesh isn’t a magical cloak that makes Kubernetes work. It’s not fairy dust we sprinkle in the cloud. It’s a routing fabric. Plain and simple.
Now we’re talking! Routing fabrics make sense. Things need to go certain places. Packets need to be delivered, and we need to know how to get there. Only now, with a service mesh, we’re not routing packets to a numerical destination. Instead, we are sending application traffic to a service.
The idea of service routing is a breakthrough from this perspective. Instead of thinking through how traffic needs to flow through a network physically, we can think about things logically. We don’t need to remember where all the load balancers live and what their IP or DNS information is. Instead, we can just tell the service mesh to send traffic through systems tagged as “loadbalancer” first and go from there. When we need more capacity, we just spin up more pods carrying those tags.
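That tag-based routing idea can be sketched in a few lines. This is a hypothetical illustration, not any particular mesh’s API: a tiny registry maps a tag like “loadbalancer” to whatever pods currently carry that tag, and adding capacity is just registering another tagged pod.

```python
import random

# Hypothetical service registry: tag -> pods currently carrying it.
# A real service mesh keeps this updated automatically as pods come
# and go; here we manage it by hand for illustration.
registry = {}

def register(pod, *tags):
    """Spin up a pod and tag it so the mesh can route to it."""
    for tag in tags:
        registry.setdefault(tag, []).append(pod)

def route(tag):
    """Pick any live pod carrying the tag -- the caller never needs
    to know the pod's IP address or DNS name."""
    pods = registry.get(tag)
    if not pods:
        raise LookupError(f"no pods tagged {tag!r}")
    return random.choice(pods)

# Route traffic through anything tagged "loadbalancer" first.
register("lb-pod-1", "loadbalancer")
register("app-pod-1", "checkout")
print(route("loadbalancer"))  # -> lb-pod-1

# Need more capacity? Just spin up another pod with the same tag.
register("lb-pod-2", "loadbalancer")
```

The point of the sketch is the lookup: traffic is addressed to a tag, and the mesh, not the sender, decides which pod actually receives it.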
This sounds not unlike the SPB service routing that I talked about a while back, but it’s not focused on the network. Instead, this is connecting container pods together to let them communicate what they need and how things should work. It’s a network paradigm but abstracted from the network itself. It’s building a network between disconnected things that need to communicate.
But it’s more than just routing messages and traffic. It’s about connecting things and opening them up for more services. As outlined, if you attach a sidecar proxy to the container or pod, you can do more with it than just route service traffic. You can pull analytics and monitoring data. You can create systems to collect that data and forward it on to other monitoring systems. That’s a huge win, because you need that kind of data to figure out how your systems need to be supported. As above, if you suddenly get a huge traffic spike, you need to spin up load balancers. But how will you know unless you can monitor traffic data or memory utilization? Knowing that something is happening is just as important as the event itself. In this case, you need someone around to hear the tree falling in the forest.
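The sidecar’s side job of collecting telemetry can be sketched the same way. This is a hypothetical wrapper, not a real mesh component: it forwards each request to the pod it fronts while counting calls and recording latency, the kind of data a mesh would ship off to a monitoring system.

```python
import time
from collections import defaultdict

class SidecarProxy:
    """Hypothetical sidecar: sits next to a pod, forwards every
    request to it, and records metrics along the way."""

    def __init__(self, handler):
        self.handler = handler            # the pod's real request handler
        self.request_count = defaultdict(int)
        self.total_latency = defaultdict(float)

    def __call__(self, path, payload):
        start = time.monotonic()
        response = self.handler(path, payload)   # forward to the pod
        self.request_count[path] += 1
        self.total_latency[path] += time.monotonic() - start
        return response

    def metrics(self):
        """The data we'd forward to a monitoring system."""
        return {path: {"count": self.request_count[path],
                       "avg_latency": self.total_latency[path] / self.request_count[path]}
                for path in self.request_count}

# The pod itself never knows the proxy is there.
pod = lambda path, payload: f"handled {path}"
proxy = SidecarProxy(pod)
proxy("/checkout", {})
proxy("/checkout", {})
print(proxy.metrics()["/checkout"]["count"])  # -> 2
```

Because the proxy sits in the data path anyway, the telemetry comes for free; the application code needs no changes to be observable.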
The last service you get is security. Specifically, you can encrypt all the communications in the service mesh with TLS. That means no eavesdropping on application traffic. If you think about the current landscape of threats, and how bad it would be to have something on the inside listening to how your applications communicate and stealing that data, you can understand why TLS is so important. But it’s not about reinventing the wheel and coming up with your own solution. If a service mesh can do this for you without any extra effort, doesn’t it make more sense to use it?
Bringing It All Together
I know I’m really only scratching the surface with service mesh, but NGINX is, too. They’re on the same journey of exploration that I am. They need to understand the value and why it’s important to bring this to their customers. Maybe their customers don’t quite get it yet. Or they don’t understand how something like this can be better for them with top-to-bottom integration and support, as opposed to trying to roll their own. The key is that NGINX understands where their customers are coming from and how this can all work together to make sense for them. NGINX did exactly what I needed them to do: they framed the issue in terms of networking to help me understand containers. You might even say the concept meshed well with me.
For more information about NGINX and their offerings, including their service mesh implementation, check out http://nginx.com