I’ve been keen on rack-scale composable infrastructure for years. Decades even. But it’s only recently that we’ve had the technology to make it happen. You can now create a system or a rack that can flexibly allocate storage and compute using a shared I/O channel. But what if you could add more elements and “decompose” the server further? That’s what Liqid is promising with their latest announcements.
Openness, it seems, is “in the air.” While the Liqid folks were over at OCP, Russ White was over at the Open Networking Summit (ONS), where he attended the Linux Foundation Networking (LFN) board meeting. What was interesting at this year’s summit is just how little of the focus of open networking is on bare-metal routers and switches, and how much of it is on server-based and overlay networking. The last mile, as it were, in the data center is becoming ever more important. This just shows that the question hanging over the network has not changed much in the last twenty years: how much intelligence should be pushed into the network, and how much into the host?
This week in Gestalt News:
– Stephen Foskett looks at how Kasten brings enterprise-class data services into the cloud
– We talk to Packet CEO Zachary Smith for IT Origins
– Russ White looks at GPUs and Composable Computing
In our latest Tech Talk series with Liqid, Russ White looks at why composable infrastructure is ideally suited to answer a prickly question in GPU computing: How many processors is this job going to require?
In this edition of Gestalt News:
– Tom Hollingsworth takes a look at ExtraHop Reveal(x)
– DriveScale’s Tom Lyon sits down for an IT Origins interview
– Russ White considers whether a more flexible, scale-out design could have mitigated the Meltdown vulnerability
The recent Meltdown and Spectre attacks illustrate the problematic nature of modern computing systems. While the earlier Rowhammer attack allowed one process running in a virtual environment to read from, or attack, another process running on the same processor, Meltdown and Spectre are of a completely different class, enabling a process to read large amounts of information from another process’ memory space. This is possible because, inside each host, CPU makers have embraced a scale-up approach. But what would happen with a more scale-out approach to the architecture?
Can the Gen-Z Consortium make blade servers the future of the data center?
Russ White considers the challenges of using GPU clusters in high-performance computing. Aside from the possible lack of software to take advantage of them, the other challenge lies in the interconnect. Ethernet is the default standard here, but it carries additional overhead. Russ sees PCI Express as a much more efficient solution, and looks at how a PCIe switch from Liqid can be used to dynamically compose infrastructure.
In response to a reader question on his look at Liqid’s composable infrastructure, Russ White frames an interesting question: is it easier to extend PCIe to support switching and longer runs, or is it easier to design an entire protocol to (effectively) run PCIe over Ethernet? Liqid developed their solution based on the former, while other composable infrastructure projects prefer an Ethernet-based approach. It’s an interesting look into the benefits and drawbacks of both.
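The overhead argument behind these two pieces is easier to see with rough numbers. Below is a back-of-envelope sketch using generic protocol header sizes (my own illustrative figures, not Liqid’s measurements) comparing what a device transaction carries on a native PCIe fabric versus when it is tunneled over Ethernet/IP/TCP. Byte counts only tell part of the story; the bigger cost of the Ethernet path is usually the added latency of traversing a NIC and a network stack.

```python
# Rough, illustrative comparison of per-chunk encapsulation overhead for a
# device transaction on a native PCIe fabric vs. one tunneled over
# Ethernet/IP/TCP. Header sizes are generic protocol values, not Liqid data.

PCIE_TLP_HEADER = 16     # 4-DW transaction layer header (64-bit addressing)
PCIE_LINK_FRAMING = 8    # sequence number, LCRC, framing symbols (approx.)

ETH_HEADER = 14          # destination/source MAC + EtherType
ETH_FCS = 4              # frame check sequence
IPV4_HEADER = 20         # no options
TCP_HEADER = 20          # no options

def overhead_bytes(transport: str) -> int:
    """Bytes of protocol overhead wrapped around each payload chunk."""
    if transport == "native PCIe":
        return PCIE_TLP_HEADER + PCIE_LINK_FRAMING
    if transport == "PCIe over Ethernet":
        # The transaction semantics still need their own headers; Ethernet,
        # IP, and TCP framing are layered on top of them.
        return (PCIE_TLP_HEADER + PCIE_LINK_FRAMING +
                ETH_HEADER + ETH_FCS + IPV4_HEADER + TCP_HEADER)
    raise ValueError(f"unknown transport: {transport}")

if __name__ == "__main__":
    for t in ("native PCIe", "PCIe over Ethernet"):
        print(f"{t:20s}: ~{overhead_bytes(t)} bytes of headers per chunk")
```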
The idea behind composable infrastructure is so cool, it seems like it has to be made up. The basic concept is to dynamically use pooled resources to build servers that fit your current need, rather than making applications and use cases conform to fixed hardware. If I had to personify composable infrastructure, it would be a transformer made up of grey goo nanobots.
Liqid’s composable infrastructure bridges the gap to this fantastic idea with a PCIe fabric and bare-metal goodness. Sadly, no nanobots.
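To make the pooled-resources idea a little more concrete, here is a toy sketch of composing and decomposing a server from a shared pool. The class and method names are hypothetical illustrations of the concept, not Liqid’s actual API.

```python
# A toy sketch of the composable-infrastructure idea: carve a "server" out of
# pooled resources on demand, then return them to the pool when the job ends.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    cpus: int = 64
    gpus: int = 16
    nvme_drives: int = 32

    def compose(self, cpus: int, gpus: int, nvme: int) -> dict:
        """Allocate a bare-metal 'server' from the shared pool, if possible."""
        if cpus > self.cpus or gpus > self.gpus or nvme > self.nvme_drives:
            raise RuntimeError("not enough free resources in the pool")
        self.cpus -= cpus
        self.gpus -= gpus
        self.nvme_drives -= nvme
        return {"cpus": cpus, "gpus": gpus, "nvme": nvme}

    def decompose(self, server: dict) -> None:
        """Return a composed server's resources to the pool."""
        self.cpus += server["cpus"]
        self.gpus += server["gpus"]
        self.nvme_drives += server["nvme"]

pool = ResourcePool()
training_box = pool.compose(cpus=16, gpus=8, nvme=4)  # GPU-heavy job
print(training_box, "| GPUs left in pool:", pool.gpus)
pool.decompose(training_box)                          # job done, resources freed
```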