Intel announced their 3rd generation Xeon Scalable processor line today, code name “Ice Lake,” and it’s much more than a chip. Ice Lake is a long-awaited salvation for Intel’s data center prospects, and it ought to be enough to hold off AMD’s EPYC long enough for the true next-generation server platform to emerge in a few years.
We’ve spent the last few weeks reacting to AMD’s third-generation EPYC server CPU launch (code name Milan) and speculating about Intel’s response, code-named “Ice Lake.” This morning, Intel announced their third-generation Xeon Scalable platform, and it’s pretty much what we expected. Intel’s new Xeon Scalable processors get PCIe 4.0, more memory channels, and an efficiency bump to make them competitive with AMD’s offerings. But more important are the supporting products, notably the Optane Persistent Memory 200 series and Ethernet 800 adapter, and an ecosystem of software technologies.
Diving Into Ice Lake
Intel was embarrassingly late with Ice Lake Xeon, but while AMD simply revved EPYC a bit for “Milan,” Intel brought important new capabilities to their server CPU range. Most important is PCI Express 4.0, which had been a key differentiator for AMD’s server products. Not only did Intel move from PCIe 3.0 to 4.0 in Ice Lake, but they also added more PCIe lanes per socket. Cascade Lake offered 48 lanes of PCIe 3.0, but Ice Lake has 64 lanes of PCIe 4.0, which is more than twice the bandwidth! This finally puts Xeon on a level playing field in terms of I/O with AMD’s previous-generation EPYC processors, though EPYC boasts 128 lanes per socket.
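That “more than twice” claim is easy to sanity-check with back-of-the-envelope math, using the approximate usable per-lane throughput of each PCIe generation (~0.985 GB/s for PCIe 3.0 at 8 GT/s and ~1.969 GB/s for PCIe 4.0 at 16 GT/s, one direction, after 128b/130b encoding overhead):

```python
# Rough one-direction PCIe bandwidth per socket, after encoding overhead.
GBPS_PER_LANE = {"3.0": 0.985, "4.0": 1.969}  # approx GB/s per lane

cascade_lake = 48 * GBPS_PER_LANE["3.0"]  # 48 lanes of PCIe 3.0
ice_lake = 64 * GBPS_PER_LANE["4.0"]      # 64 lanes of PCIe 4.0

print(f"Cascade Lake: {cascade_lake:.0f} GB/s")  # ~47 GB/s
print(f"Ice Lake:     {ice_lake:.0f} GB/s")      # ~126 GB/s
print(f"Ratio:        {ice_lake / cascade_lake:.2f}x")
```

The combination of more lanes and a faster signaling rate works out to roughly 2.7 times the aggregate I/O bandwidth, so the claim holds up.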
Then there’s the memory situation. Intel moved to 6-channel memory with Skylake-SP and stuck with it through the Cascade Lake (1- and 2-socket) and Cooper Lake (4- and 8-socket) processors. But Ice Lake gets 8-channel memory, increasing bandwidth dramatically. Memory speed is also bumped up, from 2933 MT/s to 3200 MT/s, and this isn’t reduced even when an Optane Persistent Memory 200 DIMM is used, a topic we’ll discuss later. Intel claims that memory latency is much better than AMD EPYC’s too, especially when reaching across sockets in dual-socket systems. Overall, this gives Ice Lake a nice advantage over Milan.
As expected, Intel is also offering more cores per socket with Ice Lake. Where AMD Milan tops out at 64 cores, Intel’s previous-generation Cascade Lake Xeon Scalable processor could only offer 28 cores per socket. Now Ice Lake ranges up to 40 cores per socket. Even though this can’t match Milan’s top end, most customers will be buying lower-core-count processors from AMD or Intel because those offer more value for the money. The “sweet spot” for Milan CPUs, in terms of dollars per core, is the 24- or 28-core parts, with anything over 32 cores costing more than twice as much. Although I haven’t had a chance to look through Intel’s list prices in detail, and most hyperscalers and OEMs will have their own pricing, it’s safe to say that very few people are buying 64-core monster chips.
It’s also interesting to consider the low end of the range for a moment. AMD is still selling the previous-generation EPYC at the low end, with Milan starting at 16 cores and an MSRP over $1,000 for the EPYC 7313. Intel also moved away from the low end with Ice Lake, though they do offer an 8-core part. Low-end servers with 8 or 12 cores are pretty common, and Intel can now compete there with PCIe 4.0 and 8-channel memory, coming up against AMD’s Rome EPYC line instead of their latest Milan offerings.
Intel is also pretty proud of some enhancements they’ve made to the Sunny Cove cores shipping in Ice Lake, and these give them some room to stand out against AMD. Notable are AVX-512 instructions that accelerate in-memory databases and compression, public-key cryptography operations, and vector math. And Intel’s deep learning (DL Boost) instructions put them way ahead of AMD in AI processing, though many buyers might opt for an accelerator card for these operations. Combined with optimized cores and caching, this ought to cut into or eliminate AMD’s per-core IPC lead in many cases.
Ice Lake Is A Platform, Not A Chip
But the Intel Ice Lake story isn’t all about the CPUs. We’ve seen new storage and networking product introductions over the last year, many of which seemed to be waiting for Ice Lake. Intel’s Ethernet 800 line needs PCIe 4.0 to reach its potential, and Cooper Lake couldn’t provide it. The same is true of the monster P5800X PCIe 4.0 NVMe SSD we saw in December. And the Optane Persistent Memory 200 series, also announced in December, couldn’t reach its potential on Cooper Lake. All of these seemed like accessories waiting for a platform until Ice Lake was announced, and all of them make an Ice Lake server much better than the competition. Intel is showing full platform benchmarks that combine the E810, PMem 200, and P5800X in world-beating real-world applications.
The real story is simple: Intel has launched a server CPU that’s competitive with most of AMD’s latest EPYC offerings and paired it with additional components AMD can’t match. Ice Lake isn’t about a CPU, it’s about a platform that finally brings Intel into competition with AMD in the vast middle of the market, and this is exactly what the company needed to do. I expect that Ice Lake will be enough to keep Intel competitive in the datacenter and the cloud while both companies work on their next-generation server platforms. And those offerings, completely re-engineered around technologies like PCIe 5.0 and CXL, should really be something different!
We’ll hear more from Intel on Ice Lake at our special Data Center Update with Tech Field Day on April 6 and 7. This includes deep-dive Tech Field Day sessions on AI, Xeon architecture, security, and more! Video will be posted to the Tech Field Day YouTube Channel.