Three Significant Challenges To Full Stack Network Orchestration
Recently I had the privilege of attending the Open Networking User Group (ONUG) in NYC, where for two packed days we talked about what the world of next-generation networking is going to look like. Topics covered included the many varied flavors of SDN, open source networking, and NFV, and the discussion even branched out into cloud systems architectures and coordinated orchestration across all of the above. What became clear to me throughout the event is that while evolution in our network frameworks is certainly necessary, it isn't going to come without some fairly significant challenges.
Challenge 1: Orchestration requires data, a lot of it. As we move towards highly orchestrated architectures, there is going to be an underlying requirement of comprehensive visibility between the different components of the stack. If the architectures I work on as an integrator are a fair representation of the community as a whole, then network visibility, telemetry, and logging are not components that receive much attention in current designs. Sure, we have monitoring platforms that tell us when a device is online or offline, and proactive environments have external centralized logging platforms to receive syslog/traps from devices. What we don't do a good job of is correlating data from disparate systems for the purpose of holistic visibility into issues. This detailed level of visibility is a requirement for full stack orchestration, as telemetry and log data are going to be the drivers for action at different levels of the stack. As an engineering community we are going to have to learn how to manage the large amounts of data required for successful orchestration.
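To make the correlation problem concrete, here is a minimal sketch of pairing events from two separate systems (a network syslog collector and a hypervisor event log) by attachment point and time window. All of the field names, device names, and the inventory mapping are invented for illustration; real tooling would work against actual collector schemas.

```python
from datetime import datetime, timedelta

# Hypothetical event records from two disparate systems. The schemas
# here are assumptions for the sketch, not any product's real format.
network_events = [
    {"ts": datetime(2016, 5, 10, 9, 15, 2), "device": "leaf1",
     "msg": "BGP neighbor 10.0.0.2 down"},
    {"ts": datetime(2016, 5, 10, 11, 3, 44), "device": "leaf2",
     "msg": "Interface eth1 flap"},
]
compute_events = [
    {"ts": datetime(2016, 5, 10, 9, 15, 20), "host": "hv-leaf1",
     "msg": "VM migration storm"},
]

# Assumed inventory data: which switch each hypervisor attaches to.
attachment = {"hv-leaf1": "leaf1"}

def correlate(net, compute, window=timedelta(seconds=60)):
    """Pair network and compute events on the same edge within a time window."""
    pairs = []
    for n in net:
        for c in compute:
            same_edge = attachment.get(c["host"]) == n["device"]
            if same_edge and abs(c["ts"] - n["ts"]) <= window:
                pairs.append((n["msg"], c["msg"]))
    return pairs

# Only the BGP drop and the migration storm share an edge and a window.
print(correlate(network_events, compute_events))
```

The interesting part is not the loop but the inventory mapping: correlating across layers requires knowing how the layers physically and logically connect, which is exactly the visibility most current designs lack.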
Challenge 2: The stack is becoming loosely coupled, well, kind of… Application architecture has been leading the way to distributed platforms. Distribution and segmentation of applications is great, as it minimizes failure domains and facilitates dynamic horizontal scaling. In response, systems architectures have followed from bare metal, to virtualized, to cloud orchestrated, and are now moving towards containerized deployments. The network environment is lagging behind but moving in the direction of controller-based networks with distributed forwarding planes and policy-driven actions.
In each of the layers of the stack, we are migrating to a model of loose coupling (or lower interdependence of disparate devices). The challenge that I see is that while the different layers of the stack are becoming more distributed and loosely coupled, the interactions between the layers are becoming more tightly coupled than ever before. Distributed applications rely heavily on the orchestration of the systems layer below it. The systems layer is increasingly dependent on coordination with the network layer below it. We are building a highly complex infrastructure, with tightly coupled interactions, which leaves our networks at risk of incredibly dramatic failures. As network engineers we need to understand and carefully manage the interactions between the layers of our technology stack to reduce the potential impact of a failure or misconfiguration.
Challenge 3: Standard architectures are safe, variations are dangerous. One of my favorite phrases in network architecture and design is "Best Practice". The reason I like it so much is not its intended meaning, but rather the fact that if you ask five different architects what a best practice is, you are likely to receive five different answers. As an industry we have done a fairly poor job of providing standard reference architectures and following them universally. Hardware vendors provide reference architectures, but many times these appear to have the goal of selling more equipment rather than providing a standard deployment framework for engineers and architects to follow.
As an integrator who visits many different networks as a component of my job, I can safely assert that while we all use the same limited number of available technologies, we use them in drastically different ways. As we move into more highly orchestrated networks, this is something that is going to need to change. The software being written to coordinate actions between disparate devices can only be validated against predictable designs. Variations and "out of the box thinking" when it comes to architecture decisions are going to have to become a thing of the past. Designing outside of predictable architectures is going to introduce greater risk in an orchestrated environment than in our current generation of deployments, due to the tightly coupled nature of cross-layer interactions.
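One way to think about "predictable designs": orchestration software can really only be tested against a known reference, so deviations should be flagged rather than silently tolerated. A minimal sketch of such a check follows; the reference values (dual-homed leaves, eBGP, jumbo MTU) are invented for illustration, not a prescribed standard.

```python
# Assumed reference architecture for a leaf switch; every value here
# is illustrative, standing in for whatever your standard defines.
REFERENCE_LEAF = {
    "role": "leaf",
    "uplinks": 2,       # every leaf dual-homed to the spine layer
    "routing": "ebgp",  # one routing design, applied uniformly
    "mtu": 9214,
}

def validate(design, reference=REFERENCE_LEAF):
    """Return a list of deviations from the reference architecture."""
    deviations = []
    for key, expected in reference.items():
        actual = design.get(key)
        if actual != expected:
            deviations.append(f"{key}: expected {expected!r}, found {actual!r}")
    return deviations

standard_leaf = {"role": "leaf", "uplinks": 2, "routing": "ebgp", "mtu": 9214}
creative_leaf = {"role": "leaf", "uplinks": 1, "routing": "ospf", "mtu": 9214}

print(validate(standard_leaf))  # empty list: safe to orchestrate
print(validate(creative_leaf))  # deviations the orchestrator was never tested against
```

The "creative" design is not wrong in isolation; the problem is that no orchestration software was validated against it, which is exactly why out-of-the-box variations become a liability.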
It shouldn't be surprising that an evolution of network design is going to come with certain challenges and difficulties. As the architects of these next-generation networks, it is our job to identify these potential pitfalls and work to remediate them as much as we can. There is no single design architecture that is going to be a panacea, but that doesn't mean each model won't have its place. Whether it is the established model of networking or a highly orchestrated architecture, we need to understand where it fits, what benefits it brings, and what risks need to be acknowledged and mitigated in its design.