Since vendors first began adding Solid State Disk (SSD) to storage arrays, the industry has worked to drive down costs and extract as much performance as possible from each SSD dollar spent; SSD is considerably more expensive on a cost-per-GB basis than spinning disk, but much faster. Initially, the high cost of SSD restricted it to a caching role. Some file systems took this approach to separate the read cache from the write cache, pushing all colder data out to spinning disk.
As SSD prices dropped, full-scale adoption followed in both consumer products and enterprise-grade storage systems. This led to the concept of the all-flash array (AFA). Manufacturers took differing approaches to leverage the raw horsepower of the solid-state format, and other barriers to performance and I/O became significant. The distance between the CPU and the SSDs themselves, as represented by SAS or SATA interfaces that required a disk controller and cabling between the motherboard and the disk, became a bandwidth choke point. Technology advances delivered faster controllers, and SAS multi-channel, long used in traditional spinning-disk architectures, improved the data integrity and scalability of the SSD profile. Still, the distance between storage and processor introduced latency that slowed performance and kept users from taking full advantage of their solid-state investment.
Within the last few years, Non-Volatile Memory Express (NVMe) became available and addressed the latency associated with that distance. NVMe is a logical interface specification and protocol for accessing drives built on a PCIe bus rather than a SAS or SATA bus. By eliminating the drive controller and relying on a highly optimized software protocol to access and manage data, rather than an additional controller, cabling, and older disk-access protocols, NVMe makes those latency issues essentially disappear.
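Beyond eliminating the controller hop, NVMe also removes the command-queue bottleneck of the older SATA/AHCI interface. A minimal sketch of the difference, using the queue limits defined in the published AHCI and NVMe specifications (the variable names are illustrative):

```python
# Queue limits from the AHCI (SATA) and NVMe specifications, illustrating
# how much more parallelism NVMe exposes to a multi-core host.

AHCI_QUEUES = 1            # AHCI exposes a single command queue per port
AHCI_QUEUE_DEPTH = 32      # ...with at most 32 outstanding commands

NVME_MAX_IO_QUEUES = 65_535   # NVMe allows up to 64K-1 I/O queue pairs
NVME_QUEUE_DEPTH = 65_536     # ...each up to 64K commands deep

ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH
nvme_outstanding = NVME_MAX_IO_QUEUES * NVME_QUEUE_DEPTH

print(f"AHCI/SATA max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands:      {nvme_outstanding}")
```

In practice a single drive supports far fewer queues than the spec maximum, but the per-core queue-pair model is what lets NVMe drives saturate PCIe bandwidth where AHCI serialized everything through one shallow queue.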
Enter Pure Storage, the first storage vendor to fully leverage NVMe in its storage arrays. As its original product, the M-series AFA, matured into the X-series array, the latency introduced by the distance between processors and disk disappeared entirely. The X-series, something of a revolution in the storage world, delivers NVMe SSD in a complete storage appliance, with the disks themselves accessed via NVMe. The specs on the array are impressive: with up to six petabytes possible in a single six Rack Unit (RU) array, there is virtually no barrier to performance, and the solid-state investment can (finally) be fully realized. This shared accelerated storage array has essentially no competition in the larger-scale SSD AFA market.
Since the largest workloads today are those hosted on VMware, integration with the full stack of VMware APIs is critical. The VASA (vSphere APIs for Storage Awareness) set of functionality is mission-critical, and perhaps none more so today than the VVols approach. Virtual Volumes allow the storage profile, particularly location and I/O parameters, to be isolated per virtual machine. This gives the administrator the ability to isolate a workload and allocate exactly as much resource as it requires; I envision it as a Quality of Service (QoS) model for the storage side. Traditionally, the I/O profile has been set per physical storage unit, or LUN, but with these APIs the I/O profile becomes far more granular and is itself virtualized. Some of the most significant development at Pure Storage has been around this series of protocols. It has by no means been an easy accomplishment, but Pure has made it a priority.
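The shift from per-LUN to per-VM policy can be sketched in a few lines. This is a hypothetical model of the VVols idea, not VMware's or Pure's actual API; all class and field names are illustrative:

```python
from dataclasses import dataclass

# Toy model of VVols granularity: each VM's volume carries its own storage
# policy (here, just an IOPS limit and a replication flag) instead of
# inheriting whatever the shared LUN was configured with.

@dataclass
class StoragePolicy:
    iops_limit: int
    replicated: bool = False

@dataclass
class VirtualVolume:
    vm_name: str
    policy: StoragePolicy   # the policy travels with the VM, not the LUN

vols = [
    VirtualVolume("sql-prod", StoragePolicy(iops_limit=50_000, replicated=True)),
    VirtualVolume("dev-box", StoragePolicy(iops_limit=2_000)),
]

for v in vols:
    print(f"{v.vm_name}: {v.policy.iops_limit} IOPS, replicated={v.policy.replicated}")
```

The point of the sketch is the data model: because policy is attached per volume, a busy production VM and a throwaway dev VM on the same array no longer have to share one LUN-wide service level.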
Purity takes the complexity out of managing the storage, an inherent problem on many other platforms. The Purity operating system comes complete, with no additional software components or licensing required. Pure has truly created a storage appliance with all the performance, all the power, and none of the complexity traditionally associated with large-scale environments. Compression, deduplication, and replication, which are typically offered only at additional cost beyond the initial software license, come as part and parcel of the array at no extra charge.
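To make the data-reduction features concrete, here is a minimal sketch of the two ideas involved, deduplication and compression, using fixed 4 KiB blocks. This is a toy model of the general technique, not Pure's implementation (Purity's reduction is inline, variable-block, and far more sophisticated):

```python
import hashlib
import zlib

BLOCK = 4096  # fixed 4 KiB blocks for this toy example

def reduce_data(data: bytes) -> dict:
    """Store each unique block once (dedup), compressed (compression)."""
    store = {}
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:            # dedup: skip blocks already seen
            store[digest] = zlib.compress(block)
    return store

# Highly redundant sample data: 8 identical "A" blocks + 8 identical "B" blocks
data = b"A" * (BLOCK * 8) + b"B" * (BLOCK * 8)
store = reduce_data(data)

raw = len(data)
stored = sum(len(v) for v in store.values())
print(f"logical {raw} bytes -> {stored} bytes in {len(store)} unique blocks")
```

Bundling these services into the base OS matters because on many competing platforms each one is a separately licensed feature, which complicates both purchasing and capacity planning.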
Support includes the Evergreen model: with each major processor revision (roughly a three-year cadence), Pure will come in and replace the servers that act as HA controllers in the environment, as a fully non-disruptive, no-downtime upgrade. They've truly thought of everything here; it's all about the best conceivable way to deliver this kind of performance consistently over time. By effectively extending the life of the array and eliminating the hassle of migrating off the older array and onto a newer platform, Evergreen ensures that customers continue to get value from the array for as long as it stays in place.
With full support for all VMware APIs, including VASA and VVols, the platform has all the features necessary to futureproof a customer's VMware environment. There is also full container support for environments exploring that hot, innovative technology. With ActiveCluster, high availability can be delivered across geolocations. The result is an entirely flexible environment that can support new protocols and new architectures, keeping the customer's environment viable regardless of the technologies that customer base will require moving forward.