At the Open Compute Summit, AMD went into more detail about its high-end server CPU, codenamed “Naples”. At one time, the company’s Opteron processors were used in supercomputers. While never the dominant force in the data center, AMD had carved out a niche. The last decade has proven more problematic in the enterprise. AMD thinks Naples is not only competitive with the best from Intel, but will serve as a bulwark against what it describes as the problem of server “incrementalism”.
Naples will use the company’s brand-new Zen architecture, which has already put up strong performance in consumer and professional use cases. For the server market, AMD is pouring on the cores: at the high end, Naples will have 32 multithreaded cores per CPU, meaning a dual-socket configuration offers 128 concurrent threads. The other thing AMD is touting is its overall I/O advantage over Intel. With 128 PCIe lanes in a two-socket server, AMD is able to offer roughly 60% more than Intel. Add sixteen memory channels across the two sockets, and Naples should be able to sate a lot of demanding workloads.
AMD outlined a number of use cases where Naples’ I/O advantage would really shine. Ultimately, its sales and marketing team has a big job ahead to really disrupt Intel in the data center. But one interesting application shows that AMD’s data center team is aligning all its pieces. Late in 2016, AMD announced Radeon Instinct, a series of GPUs geared specifically toward deep learning. There’s no doubt the company has ground to make up with Nvidia in this space, but combined with Naples, Instinct might make for a fairly cost-effective and competitive package. Because of the PCIe lanes available to each CPU, a single Naples processor can give full bandwidth to up to four directly attached Instinct GPUs. AMD’s marketing claims this configuration has the computing power of a human brain. That sounds like pure bluster, but it would still make for a powerful and compact package for those applications. I’ve always felt that AMD never fully capitalized on its acquisition of ATI over a decade ago; sure, it put out some low-end Fusion APUs and won designs in the last few generations of gaming consoles, but a combined CPU-and-GPU server platform like this would be a far better payoff.
The other really interesting development around Naples was announced at the Open Compute Project US Summit last week: AMD has been collaborating with Microsoft as part of its Project Olympus initiative, Microsoft’s plan for next-generation open source hyperscale cloud hardware. Naples was designed to integrate features from Project Olympus, making it what AMD calls “cloud delivery” focused. While the announcement was light on specifics, let’s not understate how important Microsoft is to the public cloud. With Azure a solid number two in the market, Microsoft has a very deep understanding of the hardware needed for hyperscale deployments. The fact that AMD has worked to put those features in the forefront of Naples should make the processor appealing to any cloud provider. It’s smart moves like this that make me optimistic about Naples adoption in the data center.
To a certain extent, AMD has a tough road trying to move back into enterprise and cloud computing. I’m sure many CTOs default to Intel, “no one ever got fired for buying IBM” and all that. But AMD is starting its effort strong. Naples seems well poised to take advantage of the inevitable slowdown of Intel’s “tick-tock” processor roadmap. It’s well provisioned, with plentiful cores and lots of I/O. While I haven’t seen anything firm on pricing yet, if AMD’s Ryzen consumer CPUs are any indication, it will pressure Intel on price too. And getting in on the ground floor with Microsoft’s Project Olympus might help it win the huge volume of hyperscale data centers. Of course, that’s almost table stakes when you’re competing with Intel.