The future of Ethernet is getting faster every day. Andy Bechtolsheim of Arista knows that as well as anyone. That’s why he takes a look at where 400Gbps Ethernet is today and how soon it will be arriving. Tom Hollingsworth looks at his presentation and discusses why it’s a very telling look at the future of networking.
In response to a reader question on his look at Liqid’s composable infrastructure, Russ White frames an interesting question: is it easier to extend PCIe to support switching and longer runs, or is it easier to design an entire protocol to (effectively) run PCIe over Ethernet? Liqid built its solution on the former, but other composable infrastructure projects prefer an Ethernet-based approach. It’s an interesting look into the benefits and drawbacks of both.
In the enterprise, it’s been interesting to follow the debates between the 10/40/100GbE and the alternative 25/50/100GbE roadmaps. As the data center demands more bandwidth, we’ll see this debate shake out in practice. But those kinds of speeds are completely irrelevant to the needs of consumers for any foreseeable future. We’re only just starting to make full use of the now-ubiquitous 1GbE that’s standard on most devices. That’s what makes the announcement that Aquantia is launching a consumer-focused line of chips with NBASE-T support so intriguing.
Last year I went through my own Mac migration. My wife’s ancient 2006 MacBook in lovely white polycarbonate had a good long life, but had just about become unusable. With a maxed-out 2GB of RAM and a Core Duo (not a typo) processor, I was actually impressed by how long it remained relatively functional. This was […]
DriveScale wants to change how storage is considered in your data center. Think about how storage is added to a typical setup. If you need more storage on-prem, you throw a couple of pizza boxes on the rack, adding storage, but also compute, memory, and connectivity. That’s great if you happen to need your storage to scale according to your vendor’s specifications.
Surprise! No amount of networking technology will make Layer-2 networks the correct choice for everything they’re being pitched for.
It’s amazing to watch folks come to understand and appreciate new technology. Even more so when it’s Ivan Pepelnjak learning a new networking technology! Here’s his take on Brocade VCS fabric.
Tony Bourke really stirred up a hornets’ nest with this one! Who would have thought that “storage NAT” would be so controversial? He followed up with a second post on analogies for NPV/NPIV…
In the server space, one of the biggest shifts has been the form factor of the servers: from tower to rack-mount to blades. But what makes a blade server, anyway? Let’s consider this for a moment as we watch another shift in progress.
What elements remain unresolved to make FCoE truly world-class? What should the vendors be prioritizing?