In a world of networking built on standards, the hottest new technology isn’t anywhere close to being based on them. Should SD-WAN not be deployed until it has been through the standardization process? Ethan Banks has some great thoughts about why this isn’t as big of a deal as you might think.
SD-WAN is a simple solution for a complex problem. But are network engineers contributing to the complexity? Ethan Banks takes a look at the issue of complexity in SD-WAN and how we can eliminate some of it by making smart decisions.
This is post 5 of 6 in the series “ONUG Spring 2015 Tech Talks” The subtext of the ONUG spring 2015 conference was operationalizing open networking. The idea might not sound like much, but it’s indicative of an added focus in the SDN industry. For the last five years or so, the main focus has […]
This is post 1 of 6 in the series “ONUG Spring 2015 Tech Talks” One of the fascinating things to me about the open networking movement is that it hasn’t happened sooner. The drivers have certainly been there for many years: vendor lock-in (at least in certain circumstances) as well as significant capex and opex […]
Some of you know I took on a new job earlier this year, where the challenge was (and is) to transform a globally distributed network for a growing company into an enterprise class operation. A major focus area has been eliminating single points of failure (SPOFs): single links, single routers, single firewalls, etc. If it can break and consequently interrupt traffic flow, part of my job is to design around the SPOF within the constraints of a finite budget.
I have been working on a project to migrate our remote office connectivity into a private WAN. Today, many of those sites are connected via a manual mesh of site-to-site IPsec VPN tunnels. In the process of this conversion, I have been re-working the WAN cloud itself to leverage the vendor's ability to peer with me via BGP.
Caches can be guilty of storing bad data. When a cache first learns its data, it learns the truth. But as a cache's data ages, the odds increase that the cached data has gone stale: out of sync with reality. When a cache gives you stale data, it's lying to you: a stiff penalty we sometimes pay for performance.
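One common defense against a lying cache is a time-to-live: accept that cached data may drift from reality, but bound how long the lie can live. A minimal Python sketch of the idea (the `TTLCache` class and its names are illustrative, not from any particular library):

```python
import time

class TTLCache:
    """Minimal sketch of a time-bounded cache: entries expire after ttl
    seconds, limiting how long the cache can 'lie' with stale data."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, time the value was cached)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # never cached
        value, cached_at = entry
        if time.monotonic() - cached_at > self.ttl:
            del self._store[key]  # too old: evict rather than lie
            return None
        return value
```

The trade-off is the usual one: a short TTL keeps data fresher at the cost of more misses, and a long TTL buys performance at the cost of staler answers.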
TRILL, proposed in RFC 5556 (a problem statement with no technical implementation details), can be summed up thusly: shove the logic of a layer 3 routing protocol down into layer 2. Why? So that switches can bridge traffic via the most efficient path while still avoiding topology loops.
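The "layer 3 logic" TRILL borrows is link-state shortest-path computation (TRILL uses IS-IS). To illustrate the core idea, here is a Dijkstra sketch over a hypothetical switch topology; the graph and switch names are made up for the example:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm: the link-state shortest-path logic TRILL
    borrows from layer 3 to forward frames along efficient, loop-free
    paths instead of a single spanning tree."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]  # (cost so far, node)
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for neighbor, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(pq, (nd, neighbor))
    # Walk predecessors back from dst to recover the path.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))
```

With a three-switch triangle where the direct sw1–sw3 link costs 3 but the sw1–sw2–sw3 path costs 2, the function picks the two-hop path, which classic spanning tree could never do on a per-destination basis.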
"Convergence" is a buzzword seen in the IT press constantly these days. All convergence means is placing communications that used to ride on their own networks onto one unified network; Ethernet's cheapness, ubiquity, and ever-growing link speeds make it the network everything is moving toward. The first big convergence move was to combine voice networks with data networks, using IP telephony. The challenges of a converged voice/data network include prioritizing voice traffic over pretty much anything else during times of link congestion, and keeping call quality high by delivering datagrams in a predictable time with a predictable gap in between those datagrams.
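The "prioritize voice over everything else" piece is typically strict-priority queueing: whenever the link can send, the voice queue drains first. A toy Python sketch of that scheduling decision (class and queue names are illustrative, not any vendor's API):

```python
from collections import deque

class PriorityScheduler:
    """Strict-priority queueing sketch: voice frames always drain before
    data frames, which is the core of protecting voice on a congested
    converged link."""

    def __init__(self):
        self.voice = deque()  # high-priority queue
        self.data = deque()   # best-effort queue

    def enqueue(self, frame, is_voice):
        (self.voice if is_voice else self.data).append(frame)

    def dequeue(self):
        # Voice wins every transmit opportunity; data only goes
        # when the voice queue is empty.
        if self.voice:
            return self.voice.popleft()
        if self.data:
            return self.data.popleft()
        return None
```

Real QoS designs also police the priority queue so a flood of "voice-marked" traffic cannot starve everything else, but the dequeue-order logic above is the essence.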
An important element in beating back network chaos is a well-ordered spanning-tree. Spanning-tree was mostly ignored and/or disabled (!) by my predecessors. Much unloved, spanning-tree is one of those protocols that networking folks are prone to turn their backs on, looking at it from a distance with a jaundiced eye. "If I leave it alone, it can't hurt me," seems to be the mantra, right up there with, "Don't ask, don't tell," and "Let sleeping dogs lie."
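A "well-ordered" spanning tree starts with deciding who the root bridge is instead of letting it fall wherever the lowest MAC address lands. In 802.1D terms, the bridge with the lowest bridge ID (configured priority, then MAC address as the tiebreaker) wins the election; a tiny Python sketch of that comparison (the dict layout is invented for illustration):

```python
def elect_root(bridges):
    """802.1D-style root election sketch: lowest bridge ID wins, where
    bridge ID compares priority first and MAC address as tiebreaker."""
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))
```

This is why a tidy design explicitly lowers the priority (e.g. to 4096) on the switch you *want* to be root: left at the default 32768 everywhere, the tree roots itself on whichever box happens to have the oldest, lowest MAC.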