Openness, it seems, is “in the air.” While the Liqid folks were over at OCP, Russ White was at the Open Networking Summit (ONS), where he attended the Linux Foundation Networking, or LF(N), board meeting. What was interesting at this year’s summit is how little of the focus of open networking is on bare metal routers and switches, and how much is on server-based and overlay networking. The last mile, as it were, in the data center is becoming ever more important. This shows that the question hanging over the network has not changed much in the last twenty years: how much intelligence should be pushed into the network, and how much into the host?
What Is Liqid?
A Tech Talk conversation brought to you by Liqid.
In our latest Tech Talk series with Liqid, Russ White looks at why composable infrastructure is ideally suited to answer a prickly question in GPU computing: How many processors is this job going to require?
The recent Meltdown and Spectre attacks illustrate the problematic nature of modern computing systems. While the earlier Rowhammer attack allowed one process running in a virtual environment to read from or attack another process running on the same processor, Meltdown and Spectre are a completely different class of attack, enabling a process to read large amounts of information from another process’s memory space. This is possible because, inside each host, CPU makers have embraced a scale-up approach. But what would happen with a more scale-out approach to the architecture?
Russ White considers the challenges of using GPU clusters in high performance computing. Beyond software that may not be able to take advantage of a cluster, the other challenge lies in the interconnect. Ethernet is the default standard here, but it carries additional overhead. Russ sees PCI Express as a much more efficient solution, and considers a PCIe switch from Liqid that can dynamically compose infrastructure.
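To make the overhead point concrete, here is a rough back-of-envelope sketch (not from the talk itself) comparing payload efficiency for a single small transfer over TCP/IP-over-Ethernet versus a PCIe Gen3 transaction-layer packet. All framing constants are textbook assumptions; real numbers vary with configuration, and this ignores the larger latency cost of traversing a software TCP/IP stack, which is where PCIe fabrics gain the most.

```python
# Back-of-envelope payload efficiency for a single small transfer.
# All framing constants below are assumptions from public specs,
# not figures quoted by Liqid or Russ White.

def ethernet_tcp_efficiency(payload: int) -> float:
    """Payload bytes per byte on the wire for one TCP/IPv4 segment
    in one Ethernet frame (preamble and inter-packet gap included;
    minimum-frame padding ignored for simplicity)."""
    overhead = 20 + 20 + 14 + 4 + 8 + 12  # TCP + IPv4 + Eth hdr + CRC + preamble/SFD + IPG
    return payload / (payload + overhead)

def pcie_gen3_efficiency(payload: int) -> float:
    """Payload bytes per byte on the link for one PCIe Gen3 TLP,
    assuming ~24 bytes of framing/header/LCRC per packet."""
    tlp_overhead = 24
    raw = payload / (payload + tlp_overhead)
    return raw * 128 / 130  # Gen3 128b/130b line encoding

for size in (64, 256):
    print(f"{size:4d}B payload: Ethernet/TCP "
          f"{ethernet_tcp_efficiency(size):.1%}, "
          f"PCIe Gen3 {pcie_gen3_efficiency(size):.1%}")
```

The gap is widest for small transfers, the common case for GPU-to-GPU traffic, which is one way to read the efficiency argument for a PCIe fabric.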