Sponsored Tech Note

Getting Real with North-South Traffic

I’m back for some more Microservices March with NGINX! For anyone who missed the first week of this series, NGINX kicked things off with an introduction to architecting Kubernetes for high-traffic websites. It was a great way to get a basic overview of why Ingress controllers matter and how they can help when web traffic scales beyond expectations. The examples also map neatly onto familiar problems in traditional infrastructure. Even if you are newer to the cloud native Kubernetes world, like me, you can certainly appreciate the need for something that can direct traffic appropriately as scale changes.

I dove into week two with a new unit called Exposing APIs with Kubernetes. I quickly realized that the simple example from week one was about to get a bit more complex, but in a good way! Week two presents a new scenario where one NGINX Ingress Controller is placed in front of both an API service and a frontend service. What happens if only one of these services starts getting hit with a lot of traffic? This is similar to the noisy neighbor issue that is so prevalent in traditional IT. I’m sure many of us have seen problems arise when one specific server (or VM, container, etc.) starts hogging all of the allocated resources. It can affect not only that one server, but all of the servers that share infrastructure with it.
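To make that shared setup concrete, here is a minimal sketch of what it could look like as a standard Kubernetes Ingress resource: a single NGINX Ingress Controller (selected via the nginx ingress class) routes /api requests to the API service and everything else to the frontend. The hostname, service names, and ports are placeholders rather than the demo’s actual values.

```yaml
# Hypothetical example: one Ingress controller fronting both services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shared-ingress
spec:
  ingressClassName: nginx          # both services share this one controller
  rules:
    - host: example.com            # placeholder hostname
      http:
        paths:
          - path: /api             # API traffic
            pathType: Prefix
            backend:
              service:
                name: api-svc      # placeholder API service
                port:
                  number: 80
          - path: /                # everything else goes to the frontend
            pathType: Prefix
            backend:
              service:
                name: frontend-svc # placeholder frontend service
                port:
                  number: 80
```

Because both path rules terminate at the same controller, a traffic spike on /api competes with the frontend for the same Ingress resources.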

The same is true for microservices. Many of the same challenges apply; they just go by different terms and affect components in the stack that some admins may be less familiar with. In the scenario above, the API service isn’t expected to get flooded with requests, but all of a sudden that starts to happen. It isn’t hard to guess that the Ingress controller will perform poorly if it isn’t configured to deal with an unexpected spike. And because the frontend service uses the same Ingress controller, it will likely experience issues too as that controller struggles under the unexpected load.

Thinking about this logically from an architectural perspective, it makes sense that the answer is to give each service its own Ingress controller. In the new demo, the API service gets its own Ingress controller deployed as an API gateway, while the frontend service is served by a separate Ingress controller. Rather than revisiting the scale-up scenario from week one, we looked into rate limiting. The API service isn’t expected to need autoscaling, so any large increase in requests would fall outside the traffic the service is expected to handle.
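As a rough sketch of that separation (again with made-up hostnames, class names, and services), each service could get its own Ingress resource tied to a different ingress class, with each class watched by its own NGINX Ingress Controller deployment:

```yaml
# Hypothetical example: two Ingress resources, each handled by a separate
# NGINX Ingress Controller deployment scoped to its own ingress class.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx-api        # controller acting as the API gateway
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  ingressClassName: nginx-frontend   # controller serving the web frontend
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
```

Each controller now has its own pods, its own configuration, and its own failure domain, which is what makes the rate-limiting step below possible without collateral damage.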

We don’t want a flood of requests to overwhelm the system and bring everything down, so we set a rate limit that caps the number of requests the service will accept, sized to the traffic it is reasonably expected to serve. In this case, if the API service hits that rate limit, it simply rejects further requests until traffic drops back under the limit. Separating the services onto their own Ingress controllers also means the frontend service isn’t affected when limits are hit on the API’s Ingress controller. The frontend Ingress controller continues to work as expected because it no longer shares infrastructure with a noisy neighbor.
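One way to express this declaratively with the NGINX Ingress Controller is its Policy and VirtualServer custom resources (rather than the standard Ingress resources sketched earlier). The example below uses placeholder names and an arbitrary 10 requests-per-second limit; requests above that rate are rejected until traffic falls back under the limit.

```yaml
# Hypothetical example: limit the API's Ingress controller (acting as an
# API gateway) to 10 requests per second per client IP.
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-policy
spec:
  rateLimit:
    rate: 10r/s                   # arbitrary example limit
    key: ${binary_remote_addr}    # limit per client address
    zoneSize: 10M
---
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: api-vs
spec:
  host: api.example.com           # placeholder hostname
  policies:
    - name: rate-limit-policy     # apply the limit to this virtual server
  upstreams:
    - name: api
      service: api-svc            # placeholder API service
      port: 80
  routes:
    - path: /
      action:
        pass: api
```

Because the limit lives on the API’s own controller, the frontend’s controller never even sees it.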

After going through this scenario, it was easy to see how this was the next logical step in the Kubernetes networking journey. The parallels to architecting traditional IT are very apparent, but so is the fact that these are still fairly basic examples. Digging deeper into the blogs on API gateway use cases and on choosing between an API gateway tool, an Ingress controller, and a service mesh shows that there is still a lot to learn about the many ways to architect modern applications. Microservices March week two continues to lay the groundwork for learning more throughout the rest of the month.

About the author

Adam Fisher

Cloud DevOps Engineer at RoundTower with 10+ years in the IT industry focused on all things data center. Blogger, vExpert, Hokie.
