Network programmability. It has been a topic of heavy discussion, speculation, forecasting, and desire for what seems like forever.
Since 2008, architects, engineers, and operators have been hearing the ever-present hum of programmability coming over the horizon. During those twelve years of innovation, the industry has seen its fair share of valiant efforts to enable deep programmability at the core layers of the network stack.
OpenFlow brought us into this world with the promise of a myriad of match criteria, enabling an external controller to program deep into the ASIC of a given piece of network hardware. While there are notable exceptions to this method of programmability, the industry has largely evolved toward programmability at the end-system interfaces themselves, and some of today's implementations have taken this in a slightly different direction than the seminal projects originally intended.
As networking landscapes have been made and remade for evolving needs, so too have the network elements and the end hosts’ requirements. Moving content closer and closer to the end station, be it in mobile networks, large content delivery, last-mile broadband, or the enterprise datacenter, has led to smarter networks. The more intelligent and more flexible the network can become, the better the experience is for the end-user.
Progression often leads to increased complexity, and data networking is no exception. With data networking and Ethernet becoming a near-ubiquitous element in our everyday lives, the proliferation of connectivity creates a need for more complex protocol support, leading to a tremendous increase in the number of protocols a given network interface is required to parse. Add to that the explosion of devices and the requirements for overlays, underlays, filtering, encapsulation, and decapsulation, and it becomes evident that Ethernet interfaces in many devices will demand a much more robust feature set than in days past.
Given the history and lessons learned from custom silicon, protocols, and standards that fell by the wayside, it only makes sense that the evolution of Ethernet chipsets trends toward much deeper programmability. This transformation can be seen in the adoption of toolkits such as the P4 programming language, a core toolkit for programmable packet processing devices such as the Intel® Tofino™ and derivative platforms.
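To make "programmable packet processing" concrete, here is a minimal, illustrative P4_16 sketch of what the language lets you express: you define exactly which headers the pipeline understands and a parser state machine for them. This is a simplified fragment, not a complete program; the signature is trimmed (a real architecture such as v1model adds metadata parameters), and all names here are assumptions for the example.

```p4
// Illustrative P4_16 sketch: the programmer, not the chip vendor,
// decides which headers exist and how packets are parsed.
header ethernet_t {
    bit<48> dstAddr;
    bit<48> srcAddr;
    bit<16> etherType;
}

header ipv4_t {
    bit<4>  version;
    bit<4>  ihl;
    bit<8>  diffserv;
    bit<16> totalLen;
    bit<16> identification;
    bit<3>  flags;
    bit<13> fragOffset;
    bit<8>  ttl;
    bit<8>  protocol;
    bit<16> hdrChecksum;
    bit<32> srcAddr;
    bit<32> dstAddr;
}

struct headers_t {
    ethernet_t ethernet;
    ipv4_t     ipv4;
}

// The parser is a state machine: extract Ethernet, branch on etherType.
// (Signature simplified for illustration.)
parser MyParser(packet_in pkt, out headers_t hdr) {
    state start {
        pkt.extract(hdr.ethernet);
        transition select(hdr.ethernet.etherType) {
            0x0800:  parse_ipv4;   // IPv4
            default: accept;       // anything else passes through unparsed
        }
    }
    state parse_ipv4 {
        pkt.extract(hdr.ipv4);
        transition accept;
    }
}
```

Because the parser only knows the protocols you declare, the compiled pipeline spends no resources on protocols your network never carries.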
Couple this exceptionally powerful toolkit with high-speed network requirements and add a bit of telemetry for highly granular network analytics. The resulting architecture is a highly flexible, hugely capable data delivery powerhouse with immense data analytics intelligence.
In classical networking, the ASIC of a given system – be it a compute and content delivery node or a network element such as a router or switch – is designed to meet the needs of the masses, thereby requiring it to support a myriad of different tasks at a basic or semi-advanced level.
In a programmable pipeline, by contrast, the ASIC is programmed to do precisely what it needs to do and (usually) nothing more. This opens up immense statistical insight: because the hardware is programmed for a specialized task, it can also expose every bit of information about that task, offering deep visibility into a day in the life of a packet, for example.
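The statistics angle can also be sketched in P4_16: a counter can be attached directly to a forwarding table, so every table hit records per-entry packet and byte counts that a control plane can read out for telemetry. This is an illustrative v1model-style fragment, not a complete program; the names (ipv4_fwd, flow_stats) are assumptions for the sketch.

```p4
// Illustrative P4_16 fragment: a direct counter bound to a forwarding
// table, so forwarding and measurement happen in the same table lookup.
direct_counter(CounterType.packets_and_bytes) flow_stats;

action forward(bit<9> port) {
    standard_metadata.egress_spec = port;  // v1model-style egress selection
}

action drop() {
    mark_to_drop(standard_metadata);
}

table ipv4_fwd {
    key = {
        hdr.ipv4.dstAddr: lpm;   // longest-prefix match on destination
    }
    actions = { forward; drop; }
    counters = flow_stats;       // per-entry packet/byte statistics
    default_action = drop();
}
```

Since the counters live alongside the match-action entries, visibility is a property of the pipeline itself rather than a sampled afterthought.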
Where can this be useful to an everyday engineer? Providing real-time alerting to an alerting platform, aiding in the diagnosis of network issues in complex architectures, classifying specific traffic within large volumes of network traffic, and more.
The promise of programmability and visibility has long been a sought-after dream, and it is now firmly in view.