‘Building a network is easy; running a network is hard’. That statement embodies an eternal truth of IT networking, one that transcends discipline: it applies at service provider scale and in the datacenter, enterprise, campus, mobile, and industrial worlds alike.
Modern Networks
The assertion that ‘building a network is easy; running a network is hard’ rings true across the whole of IT networking, and it becomes only more accurate in the application-driven networks of today.
In days gone by, an application was considered a tenant on a pathway, a car on the road. Those running the network gave little thought to application performance. Applications were not unlike a diverse set of vehicles on the roadway: some run slowly and some quickly; some carry a large payload and others a single occupant. Like applications, vehicles travel as best they can along their given path, and on a good day they reach their destination fully intact and without issue.
As applications become critical to business operations, and as more and more workloads move to cloud hosting, best-effort delivery becomes riskier. Some applications, even those internal to a given organization, have special or unique requirements that necessitate something more than ‘best effort’.
Complications
In the world of service provider backbones, mechanisms exist to enforce SLAs, bandwidth guarantees, and mutually agreed-upon resource allocation across a slice of a carrier backbone. This is typically accomplished with fairly detailed quality of service (QoS) configurations, combined with the traffic engineering capabilities inherent in MPLS or segment routing. While these techniques are tried and true, they are complicated.
In most cases, these mechanisms are deployed at the service provider level for all traffic of a given class, from a specific customer, or otherwise grouped into a bucket of traffic types. While this model is the de facto standard for carriers and ISPs, it starts to break down when the goal is to treat specific applications individually.
Breakdown
This breakdown point is where technologies such as intelligent queuing, performed at the network controller level, pick up the torch. Application Device Queues (ADQ), available in host network controllers such as the Intel Ethernet 800 Series, give developers and systems administrators a significantly deeper level of control. These technologies cater to companies that need to guarantee their high-priority applications receive the resources they need to perform at their peak, from the host stack to the network wire.
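As a rough illustration of what this looks like in practice, the sketch below follows the general shape of ADQ setup on Linux: a tc mqprio qdisc in channel mode carves the NIC’s queues into hardware traffic classes, and a flower filter steers one application’s flows into a dedicated class. The interface name, queue layout, and application port are assumptions chosen for illustration, not a definitive recipe.

```python
#!/usr/bin/env python3
"""Hedged sketch of ADQ-style queue steering on Linux (requires root
and an ADQ-capable NIC). Names and numbers below are illustrative."""

import subprocess

IFACE = "eth0"     # assumption: the Intel 800 series interface name
APP_PORT = 6379    # assumption: the high-priority application's TCP port


def run(cmd: str) -> None:
    """Run a shell command, echoing it and raising on failure."""
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)


# 1. Create two hardware traffic classes in 'channel' mode:
#    TC0 = default traffic on queues 0-1, TC1 = app traffic on queues 2-5.
run(f"tc qdisc add dev {IFACE} root mqprio num_tc 2 "
    f"map 0 1 queues 2@0 4@2 hw 1 mode channel")

# 2. Attach a classifier hook so flower filters can be installed.
run(f"tc qdisc add dev {IFACE} clsact")

# 3. Steer the application's inbound flows into TC1's queue set,
#    offloading the filter to the NIC (skip_sw).
run(f"tc filter add dev {IFACE} protocol ip ingress prio 1 flower "
    f"ip_proto tcp dst_port {APP_PORT} skip_sw hw_tc 1")
```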
By filtering application traffic into queues directly in the NIC, it becomes possible to control application egress traffic at a much deeper level, which allows for more consistent delivery. Predictability, in turn, fosters a network that is significantly easier to run: when traffic is being pushed into a wide area network governed by a set of SLAs, the host can mirror those SLAs, or even implement a more detailed version of them, before the traffic ever leaves the server.
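On the transmit side, an application can mark its own sockets so that its egress traffic lands in the intended traffic class. A minimal sketch, assuming the mqprio layout above and Linux’s SO_PRIORITY socket option:

```python
import socket

# Minimal sketch (Linux-only): pin this socket's egress traffic to skb
# priority 1. With the mqprio map from the previous sketch, priority 1
# maps to TC1, so transmits use the application's dedicated queue set.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, 1)

# From here the socket is used normally; the endpoint below is purely
# illustrative.
# sock.connect(("198.51.100.10", 6379))
```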
Predictability
Why is this important?
Let’s revisit the analogy of cars on a road. If each vehicle is guaranteed a speed of 65 mph (105 km/h) on the expressway but its on-ramp speed is variable, the point of entry becomes a potential source of latency, congestion, or buffering.
When desired performance parameters can be enforced end to end, the network becomes significantly more predictable. Predictability means lower network-management overhead and more foreseeable, consistent performance. The result is a far superior user quality of experience, not to mention a more predictable environment, making for a better night’s sleep for the engineers running it.