I consider VMworld EMEA to be my home event and attend it almost every year. This time, however, I decided to experience the US version, and I found that it's every bit as chaotic as people say it is. There is a sea of people rushing from one end of the venue to the other, and unless you meet your friends at the bloggers' table, there is little chance of seeing them. A huge amount of walking is a given, and I broke my one-day walking record many times over. Despite all the chaos, though, it's also great fun.
Being a vExpert NSX meant that I also got to attend Future:NET 2018 for the first time. As Ed Horley mentioned in the first post of this series, industry peers from the field of networking come together at this conference to discuss the future of networking. It was a day filled with knowledge, and I loved every minute of it.
There were a couple of sessions that stood out for me, and in this post I'll discuss the presentation by Marco Palladino, Co-Founder and CTO at Kong, an open-source microservices API gateway. It was called "Evolve or Die: The API Management Journey".
All Along the Microservices
In the past few years, it has become impossible to talk about software development without hearing the word "microservices". That is true even though only a handful of big companies, such as Google, Netflix, and Amazon, have managed to transition to this new way of thinking. For the rest, it remains a treacherous journey, one that many give up on before even starting.
Necessity is the mother of invention, and the birth of the microservices architecture is a prime example. Traditional monolithic applications are not designed for scalability. The one thing these trend-setting organizations have in common is the need to scale their services almost without limit. Microservices are the answer to that problem, and for that reason the architecture is here to stay.
Does that mean that everyone can and should refactor their applications? Absolutely not. Marco is aware of that reality. In his presentation, he talks about the history of application development and why this change was necessary. He acknowledges that while this might be the future of service architecture, the reality is that the change will be slow.
I enjoyed the presentation so much because I agree with what he said. There is no point in forcing this change if there is no business justification for it. At the same time, that doesn't mean organizations shouldn't evaluate their environments to determine whether there is scope for improvement, and act if there is. One has to start somewhere!
While organizations slowly transition to this new way of thinking, there will be a hybrid state. Most will have different architectures running side by side on different platforms, communicating with each other where required via APIs.
APIs Revisited
These APIs have traditionally been managed and monitored through API management tools, which became available almost immediately after APIs themselves. Back in the day when monoliths ruled the earth, these tools were monoliths themselves. Today, they suffer from the same scalability and performance issues as the applications they monitor.
That matters because APIs are no longer used only for service access. Internal functions such as inter-service communication are also handled via APIs. The performance of that inter-service communication is absolutely critical in a microservices environment, simply because a greater number of services make up an application. Increased east-west communication between the different services multiplies the effect of latency very quickly, and it only gets worse as the application is subdivided further.
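To make the latency-multiplication point concrete, here is a deliberately simplified back-of-the-envelope sketch in Python (my own illustration, not from the talk): it just models a request that traverses a chain of sequential internal calls, each adding a fixed per-hop overhead.

```python
# Illustrative sketch: how a fixed per-hop overhead compounds as a
# request crosses more internal (east-west) service boundaries.
# The numbers are made up; only the multiplication effect is the point.

def east_west_overhead_ms(hops: int, per_hop_ms: float) -> float:
    """Total added latency for a request that traverses `hops`
    sequential service-to-service calls of `per_hop_ms` each."""
    return hops * per_hop_ms

# A monolith resolves the same feature with one internal boundary;
# a finer-grained microservices version might chain ten services.
monolith = east_west_overhead_ms(hops=1, per_hop_ms=5.0)
microservices = east_west_overhead_ms(hops=10, per_hop_ms=5.0)

print(monolith)        # 5.0  -> one hop's worth of overhead
print(microservices)   # 50.0 -> the same overhead, multiplied tenfold
```

Subdividing the application further simply raises `hops`, which is why shaving the per-hop cost becomes the central performance problem.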
Changes in application development architecture mean that these tools also need to evolve to keep up with the requirements of this new design paradigm. They will need to be platform-agnostic and able to communicate across those various platforms in a consistent, reliable and, most importantly, performant manner.
Tangled Up in Service Mesh
The "service mesh" concept aims to address the performance issue by introducing a "sidecar proxy" that runs alongside each microservice process. It does mean that a small footprint attaches itself to every microservice, but in return it handles proxying, observability, error handling, health checks, and other management functions.
This arrangement allows a separation of duties: the microservice sheds the excess baggage and runs only the application code, which the developers own. Everything else is operations' business, and they can manage that side of the application. In addition, as the name suggests, the service mesh deployment model creates a mesh of connectivity. The aim is to reduce hops, which in turn reduces latency. Application resilience also improves as a result.
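The division of labour described above can be sketched in a few lines. This is a toy model of my own making (real sidecars such as Envoy are full network proxies, nothing like this), but it shows the principle: the developer writes only business logic, while the sidecar wraps it with health checks, error handling, and basic observability.

```python
# Toy sidecar sketch (illustrative only, not a real proxy): the sidecar
# fronts the application handler, answers health checks itself, counts
# requests for observability, and turns exceptions into clean errors.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sidecar:
    app_handler: Callable[[str], str]  # the developer's business logic
    requests: int = 0                  # observability: request count
    errors: int = 0                    # observability: error count

    def handle(self, path: str) -> str:
        if path == "/health":
            return "OK"                # health check never hits the app
        self.requests += 1
        try:
            return self.app_handler(path)          # proxy to app code
        except Exception:
            self.errors += 1
            return "503 Service Unavailable"       # error handling here

# The developer's side: pure application logic, no plumbing.
def app(path: str) -> str:
    if path == "/orders":
        return "orders list"
    raise KeyError(path)

sidecar = Sidecar(app_handler=app)
print(sidecar.handle("/health"))    # OK
print(sidecar.handle("/orders"))    # orders list
print(sidecar.handle("/missing"))   # 503 Service Unavailable
print(sidecar.requests, sidecar.errors)  # 2 1
```

The design point is that none of the plumbing lives in `app`: operations can upgrade or reconfigure the sidecar fleet-wide without developers touching a line of business code.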
Today, the talk is mainly about microservices running in containers and the communication between them. Tomorrow, applications will move on to predominantly contain Function-as-a-Service (FaaS) components, delivering true cloud-native capability and scalability, but with increased complexity.
What do you see happening in your environment, and when the time comes, will you be ready for it? As Bob Dylan once sang (and I am paraphrasing): service architecture, the times they are a-changin'.