To be sure, the future of computer storage is solid state. The days of spinning disk, even as bulk storage in arrays, feel numbered. Flash memory addresses performance, persistence, data integrity, rebuild times, and uptime, and its durability now exceeds that of traditional spinning rust. The all-flash array (AFA) is the way of the future. As per-gigabyte costs fall, particularly with the help of compression and deduplication, solid state is proving itself a truly viable, financially responsible alternative.
As the industry has moved from classic SAS and SATA toward NVMe solid state disk, the paradigm has shifted toward a new I/O profile. NVMe rides the PCIe bus rather than a disk controller and cabling at the array level, eliminating a key performance barrier. NVMe was one of the products highlighted at the Flash Memory Summit this year, and we are seeing much greater adoption across the industry. Pure Storage launched the FlashArray//X series this year, which uses NVMe protocol and hardware across the board, making it the first major brand to deliver this paradigm in a fully managed, fully fleshed-out platform. The Purity operating system, used across the entire Pure line, makes managing these devices easy, consistent, and predictable across the portfolio. Purity, outlined here, delivers federation and all key modern functionality at no additional cost.
Scalability has been a problem over time as well. Traditionally, the disk-controller-to-disk model has simply been replicated to scale, with connectivity over Ethernet or Fibre. A brand-new scale-out paradigm, however, has arrived alongside the newer NVMe protocol. The idea is that SCSI (Small Computer Systems Interface) is outdated, and NVMe replaces it with a far more efficient protocol; SNIA outlined a comparison here. The same approach now extends from in-device connectivity to scale-out: NVMe-oF, or NVMe over Fabrics, grants a much more rapid way of binding arrays together, allowing for larger disk targets with far fewer barriers in terms of IOPS. The NVMe and NVMe-oF standards recently arrived at by SNIA ensure rapid, consistent, and highly reliable storage, leveraging a newer approach to disk read/write capabilities. This is a huge shift in storage, and a very important subject at the recent Flash Memory Summit; there is no doubt these key updates to storage modelling will be emphasized at the conference.
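One way to see why NVMe is the more efficient protocol is command queueing. The figures below are the limits commonly cited for each interface (a single 32-command queue for legacy AHCI/SATA versus up to 64K queues of 64K commands each for NVMe); this is an illustrative back-of-the-envelope sketch, not a benchmark.

```python
# Illustrative comparison of outstanding-command capacity, using the
# commonly cited queueing limits for each interface.
AHCI_QUEUES, AHCI_DEPTH = 1, 32                 # legacy AHCI/SATA
NVME_QUEUES, NVME_DEPTH = 64 * 1024, 64 * 1024  # NVMe upper bounds

ahci_outstanding = AHCI_QUEUES * AHCI_DEPTH
nvme_outstanding = NVME_QUEUES * NVME_DEPTH

print(f"AHCI/SATA: {ahci_outstanding:,} outstanding commands")
print(f"NVMe:      {nvme_outstanding:,} outstanding commands")
print(f"Ratio:     {nvme_outstanding // ahci_outstanding:,}x")
```

The deep, parallel queue structure is what lets NVMe, and by extension NVMe-oF, keep many flash dies busy at once instead of serializing I/O behind a single queue.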
Optane and 3D XPoint
A third category very likely to be spoken of in depth is the future of non-volatile persistent memory. The promise of this technology is critical to the future of the storage category, and it will also expand server capacity for random access memory. Looking a little closer at the emerging technology, we see some of that promise. Developed by a pair of silicon giants, Intel and Micron, and more commonly known by the brand name 3D XPoint, it stores data on silicon in a layered, multi-tiered approach, with major implications for larger storage elements: vastly increased speed and density. Initially released as storage, it will allow for even faster, denser storage platforms, with jumps in scalability and performance potentially on par with the leap from spinning disk to solid state. The more exciting piece to me is the ability to add non-volatile RAM to the memory bus of the computer itself. Imagine, for example, that the entire database for your SAP infrastructure could be stored in memory on a single server. As these databases grow, the key barriers have been the calls to and from the storage layer and the fragmentation of code and data across separate compute layers in the name of efficiency; keeping the database as one contiguous, in-memory dataset would eliminate that fracturing and all the customization it requires of the database application.
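The programming model behind that idea is byte-addressable persistence: an application reads and writes durable data with ordinary memory operations instead of storage-layer I/O calls. Real persistent memory is typically exposed through a DAX-mounted filesystem; the sketch below stands in with an ordinary memory-mapped file (the file name is hypothetical) purely to illustrate the load/store-style access pattern.

```python
import mmap
import os

PATH = "demo.pmem"  # hypothetical backing file standing in for a DAX mapping
SIZE = 4096

# Create and size the backing file.
with open(PATH, "wb") as f:
    f.truncate(SIZE)

# Map it into the address space and write with plain memory operations.
with open(PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), SIZE)
    mem[0:5] = b"hello"  # an ordinary in-memory store...
    mem.flush()          # ...made durable with an explicit flush
    mem.close()

# The data survives outside the mapping; no block-I/O API was involved.
with open(PATH, "rb") as f:
    data = f.read(5)
print(data)

os.remove(PATH)  # clean up the demo file
```

With true NVRAM on the memory bus the flush becomes a cache-line writeback rather than a page write, but the shape of the code, and the absence of a storage stack in the hot path, is the point.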
All in all, the Flash Memory Summit continues to be the most important industry-wide conference of the year, as silicon becomes more and more the primary mode of data storage in the industry. It also keeps data storage focused firmly on the future, with new technologies, and the growth of existing ones, carrying the world of solid state across current boundaries and into the next generation.