Russ White considers the challenges of using GPU clusters in high performance computing. Beyond the possible lack of software to take advantage of them, the other challenge lies in the interconnect. Ethernet is the default standard here, but it carries additional overhead. Russ sees PCI Express as a much more efficient solution, and he considers a PCIe switch from Liqid that can dynamically compose infrastructure.
Do you mine cryptocurrency? Do you have eight AMD and Nvidia cards? Want them all plugged into a single motherboard with PCIe slots to spare? Asus just released the motherboard of your dreams. Behold the grotesque nightmare of expansion that is the Asus B250 Mining Expert!
AMD’s X399 is an unsubtle chipset for its unsubtle Threadripper CPUs. Luckily, AMD’s OEM partners have risen to the challenge with motherboards to match. The initial batch are extremely high end, loaded with every feature imaginable. The one thing they all lack? Tasteful design. We’ve ranked the X399 launch motherboards by sheer tackiness, so you don’t have to.
AMD finally released its initial batch of server CPUs, under the regrettable name EPYC. As promised in their announcement, the chips truly offer some interesting capabilities. No matter which EPYC 7000-series chip you buy, you get some impressive features standard: 8-channel DDR4 memory support (up to 2TB), 64MB of L3 cache, and 128 lanes of sweet PCIe 3.0.
In response to a reader question on his look at Liqid’s composable infrastructure, Russ White frames an interesting question: is it easier to extend PCIe to support switching and longer runs, or is it easier to design an entire protocol to (effectively) run PCIe over Ethernet? Liqid developed their solution based on the former, but other composable infrastructure projects prefer an Ethernet-based approach. It’s an interesting look into the benefits and drawbacks of both.
ARM-based servers in the data center are a lot like free beer: it always seems like you have to wait until tomorrow. Yet, unlike that mythical pint of the latter, we might be getting closer to the day when the former is a common reality. The first of many steps to make that happen is hardware, and we’ve seen a few vendors making serious strides in the space. At the end of 2016, Qualcomm showed off their Centriq 2400-series SoC, with 48 cores on a single-socket server. Now AppliedMicro is ready to sample their X-Gene 3 ARM server SoC.
At the Open Compute Summit, AMD went into some more detail about its high-end server CPU, codenamed “Naples”. At one time, the company’s Opteron processors were used in supercomputers, and while never the dominant force in the data center, AMD had carved out a niche. The last decade has proven more problematic in the enterprise. AMD thinks Naples is not only competitive with the best from Intel, but will serve as a bulwark against what they describe as the problem of server “incrementalism”.
While getting some hands-on time with Iomega’s new 12-drive storage array, I spotted an exciting but unannounced feature: the ix12-300r includes a native Avamar backup client! It also includes two PCI Express slots, opening up intriguing possibilities for future expansion.