In 1945, John von Neumann laid out the foundation of modern computer architecture in the seminal paper, “First Draft of a Report on the EDVAC.” In classic von Neumann architecture, memory stores data and instructions for the central processing unit, while storage is located externally. We have been stuck with this distinction ever since, even as storage has developed from magnetic wire recording to spinning disks to NAND flash.
The Promise of Storage-Class Memory
In the 2010s, the storage industry was buzzing about the development of resistive RAM (ReRAM) and so-called memristor technology, but it was the announcement of 3D XPoint by Intel and Micron that solidified the movement. Here was a technology that was at once similar to RAM (bit-addressable) and to solid-state storage (non-volatile). Intel promised that 3D XPoint would be orders of magnitude faster than flash memory while also being far cheaper per unit of capacity than RAM.
All of this sounded great, but the reality has been more challenging. Intel’s initial 3D XPoint products (now branded Optane) were conventional-seeming SSDs with little performance or capacity benefit over flash. Although later generations of Optane SSDs have established the technology at the front of the pack in terms of performance, we have yet to see a 3D XPoint product that lives up to the initial promise. The technology’s momentum has also been blunted by a public divorce between Intel and Micron, as well as relentless advances on the NAND flash side of the equation.
In 2018, Intel released Optane DC DIMMs, finally delivering the long-promised DIMM form factor for 3D XPoint. Although these products (code-named “Apache Pass”) give modern servers vastly more memory capacity, they remain limited in terms of availability and impact. Pressure from NAND-based competition shows that the Optane DIMM still isn’t the game-changer Intel wanted.
The real challenge posed by 3D XPoint has always been architectural. Is it a fast storage solution? A high-capacity alternative to DRAM? Or something else entirely?
The inevitable next step in the emergence of so-called storage-class memory is a fundamental change to von Neumann’s computing architecture. Bit-addressable storage will be pulled so close to the CPU that it will challenge the fundamental computing architecture we have been living with as long as there have been computers. Instead of a clear delineation between memory and storage, we will have a cascade of performance, capacity, and feature tiers inside and outside the system.
But this leaves us with a real problem. How do applications get from here to there? Although it seems that the state of the art in computing moves quickly, this is not the case. There is a tremendous amount of inertia when it comes to adopting new technologies and concepts, and the industry tends to lose interest just as real implementations happen.
Intel needed to bring 3D XPoint to market, and an SSD form factor was an easy first product. But the lackluster performance of those initial offerings has blunted excitement about the technology, to the extent that a decent Optane DIMM launch was met with an industry-wide sigh. Although there is much development effort currently being focused on leveraging persistent storage-class memory for future applications, it’s much less clear how it will impact existing software.
The industry is holding its breath. Will 2019 be the year that Intel and the software ecosystem release a compelling storage-class memory stack?
Gestalt IT will be on-site on April 2 at Intel’s “Data-Centric Innovation Day” in San Francisco. The company is promising to share its latest platform innovations across processing, memory, storage, and connectivity. Tune in for live blogging here at Gestalt IT all morning, then head over to the Tech Field Day site for deep-dive technical presentations from noon to 3 PM Pacific time.