Intel did most of what they needed to do with the Xeon Scalable launch. There’s enough of a speed boost to get noticed, some interesting new options for server builds, and some cool low-level features that will matter in HPC and ML. This may be Intel’s biggest datacenter platform launch in a decade, but it’s not a massive advancement overall.
ASICs are a complicated technology, very different from CPUs. They’re also the foundation of many of the tech devices we use every day. But sometimes figuring them out is as simple as solving a Rubik’s Cube.
ASUS just launched the first sub-$100 NBASE-T adapter using Aquantia silicon. This adapter supports 100 Mbps “Fast Ethernet”, Gigabit Ethernet, 2.5 and 5 Gbps NBASE-T, and regular 10GBASE-T. It will negotiate speed based on the port on the other end of the wire, as well as the quality of that wire.
Is Kubernetes simply benefiting from the first mover advantage, or does it have the force to stay the dominant container orchestrator in the enterprise for years to come? The roundtable discusses.
Intel’s been having a tough go of it lately with some of their silicon. First, their Atom SoCs were bricking some Cisco gear back in February. Now comes news of issues with Hyper-Threading on Skylake and Kaby Lake CPUs. The bug seems limited to relatively specific workloads, but it has a wide range of affected processors: most desktop CPUs from the last couple of years, along with recent Xeon E3s, are subject to the error.
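For the curious, here’s a minimal sketch of how you might flag a potentially affected machine. The family-6 model IDs below (Skylake: 0x4E/0x5E, Kaby Lake: 0x8E/0x9E) are my assumption based on Intel’s published CPUID model numbers; check the official errata and your vendor’s microcode updates for the authoritative list.

```python
# Assumed family-6 model IDs for Skylake and Kaby Lake client cores.
# These are illustrative, not an official errata list.
AFFECTED_MODELS = {0x4E, 0x5E, 0x8E, 0x9E}

def is_possibly_affected(family: int, model: int, ht_enabled: bool = True) -> bool:
    """True when the CPU matches a Skylake/Kaby Lake model with HT on."""
    return family == 6 and model in AFFECTED_MODELS and ht_enabled

def check_proc_cpuinfo(path: str = "/proc/cpuinfo") -> bool:
    """Linux-only helper: parse family/model from /proc/cpuinfo and check."""
    fields = {}
    with open(path) as f:
        for line in f:
            if ":" in line:
                key, _, value = line.partition(":")
                fields.setdefault(key.strip(), value.strip())
    return is_possibly_affected(int(fields["cpu family"]), int(fields["model"]))
```

Disabling Hyper-Threading in firmware is the commonly cited workaround until a microcode fix lands, which is why the sketch treats HT state as part of the check.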
AMD Epyc sounds pretty epic, delivering the memory, I/O, and core count of a dual-socket server in a single socket. And that’s something to get excited about, especially considering that the Zen cores inside these chips are nearly at IPC parity with Intel’s latest, and can run two threads per core like Intel, too.
I’ve made no bones about my skepticism about Windows 10 S. It seems to fall into the uncanny valley between a locked-down mobile OS and the full power (and vulnerability) of regular old Windows. But Microsoft thinks the benefits to performance and security outweigh the loss of its enormous legacy software ecosystem.
AMD finally released its initial batch of server CPUs, under the regrettable name EPYC. As promised in their announcement, the chips truly offer some interesting capabilities. No matter which EPYC 7000-series chip you buy, you get some impressive features standard: 8-channel DDR4 memory support (up to 2TB), 64MB of L3 cache, and 128 lanes of sweet PCIe 3.0.
In response to a reader question on his look at Liqid’s composable infrastructure, Russ White frames an interesting question: is it easier to extend PCIe to support switching and longer runs, or is it easier to design an entire protocol to (effectively) run PCIe over Ethernet? Liqid developed their solution based on the former, but other composable infrastructure projects prefer an Ethernet-based approach. It’s an interesting look into the benefits and drawbacks of both.