Pure Storage Sponsored Tech Note

Pure Storage: How Do They Do This?

It seems to me that Pure Storage keeps uncovering solutions to problems I had not even thought of before. I honestly don't understand how they anticipate complex, as-yet-undiscovered problems and solve them ahead of time. Some of the announcements from Storage Field Day Exclusive at Pure Accelerate 2019 have been astounding: well beyond what I expected and, categorically, game-changers.

The FlashArray//C Series

To me, the barrier in moving to Pure (as a reseller's architect) has not been the branding or even the brand, but validating the move to the customer: from spinning disk or hybrid array to all-flash. The perception is that flash is more expensive, and that the potential benefits of an AFA are not enough to make all-flash truly significant and therefore compelling. Well, Pure has made it easier with the FlashArray//C. Leveraging an architecture similar to the FlashArray//X, but built on different flash media and able to reach densities of up to 5.2PB in 9 rack units of space, the FlashArray//C makes that barrier a far less daunting one.

Ultimately, the goal is to create a non-Tier-1 array. Pure calls it Tier 2, but in my opinion it makes for a high-quality Tier 1 for the newcomer as well. While FlashArray//X and FlashBlade have consistently achieved the fastest performance numbers, both in terms of the media itself and the latencies from processor to media (more on that later), the FlashArray//C, already in GA, is a fast, dense platform that supports both block and file, and could carry the bulk of the work for a newer adopter. Part of the key here is the move to QLC flash, though the original release ships with TLC solid-state media. To me, the Tier 1 versus Tier 2 framing matters less than the //C's value as a launching pad into an all-flash environment.
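To put that density claim in perspective, a bit of my own back-of-the-envelope math (assuming the quoted 5.2PB is effective capacity): 5.2PB spread across 9 rack units works out to roughly 0.58PB, or about 578TB, of effective capacity per rack unit.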

The Analytics Backend to the Pure Family of Arrays

I've talked about this before, and I find the idea of predictive analytics captivating, but being able to run it across all platforms, even from a phone, while leveraging the PureMeta level of telemetry makes a great deal of sense to me. Knowing what is coming before it happens, and being able to mitigate it before it causes an outage, is hugely important. Pure is by no means the only vendor running an analytics engine against the storage environment, but what Pure1 has been providing shows a truly important level of data and a compelling use case. The telemetry is pushed into an anonymized dataset so your environment can be compared against other customers with similar architectures, in the hundreds of thousands, giving you the ability to size and prepare for changes based on those other workloads. Pure has dubbed this Workload DNA, backed by a machine learning engine, and given the model the ability to predict potential problems.
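To make the Workload DNA idea a bit more concrete, here is a minimal sketch of the kind of comparison such an engine might start from: one array's workload fingerprint checked against an anonymized fleet of similar workloads. This is purely my own illustration in Python; the metric names, numbers, and logic are hypothetical and have nothing to do with Pure1's actual implementation.

    # Minimal sketch: compare one array's workload "fingerprint" against an
    # anonymized fleet of similar workloads and flag metrics that look unusual.
    # All names and numbers here are illustrative, not Pure1 telemetry.
    from statistics import mean, stdev

    fleet = [  # anonymized telemetry from "similar" customer workloads
        {"read_latency_ms": 0.45, "write_latency_ms": 0.60, "data_reduction": 4.8},
        {"read_latency_ms": 0.50, "write_latency_ms": 0.58, "data_reduction": 5.1},
        {"read_latency_ms": 0.42, "write_latency_ms": 0.65, "data_reduction": 4.6},
    ]

    my_array = {"read_latency_ms": 0.95, "write_latency_ms": 0.61, "data_reduction": 4.9}

    def flag_outliers(sample, fleet, threshold=2.0):
        """Return metrics where this array sits more than `threshold` standard
        deviations away from the fleet average, a crude stand-in for the kind
        of comparison a predictive engine might start from."""
        flagged = {}
        for metric, value in sample.items():
            values = [workload[metric] for workload in fleet]
            mu, sigma = mean(values), stdev(values)
            if sigma and abs(value - mu) / sigma > threshold:
                flagged[metric] = {"value": value, "fleet_mean": round(mu, 3)}
        return flagged

    print(flag_outliers(my_array, fleet))
    # e.g. {'read_latency_ms': {'value': 0.95, 'fleet_mean': 0.457}}

The real system obviously goes far beyond a standard-deviation check, but the shape of the problem, compare me against everyone who looks like me and flag what stands out, is the same.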

Modern Data Experience

Pure has created a consumption, or "rental," model under the banner of the Modern Data Experience: essentially, Pure running as a service, scaling up or down as required, and running either on-prem or extended into AWS under the same Pure1 management. You can stand up Pure's storage layer in AWS quite simply, extend your on-prem data storage environment into the cloud, and control the storage you've deployed there, backed by EC2 and S3, from the same management pane, potentially improving storage performance along the way, and manage all the goodness in your cloud environment the same way you manage your storage on-prem. I love this idea, because it lets you extend your applications to the cloud. Whether those apps are traditional, serverless, or container-based, you can add portability to them as appropriate. Letting an application reside wherever it needs to be seems to be the brass ring in this space. I like it a lot.

I wrote last year about DirectMemory, and what was then a promise has now become a reality. I like that Pure continues to take a best-in-breed approach, and that extends to shortening latencies as well. So how do they accomplish this? How does Pure address this hardware-based latency, particularly on FlashBlade, where the hardware seems bound by bus-based limitations? Placing Optane memory in the array as a read cache addresses a very important latency issue, and it improves what is already a rock-solid approach.
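For readers who want a mental model of why a read cache in faster media helps, here is a toy read-through LRU cache in Python. It is illustrative only and bears no relation to how DirectMemory Cache is actually implemented; the backend_read function and the capacity are stand-ins I made up.

    # Toy read-through cache: serve repeat reads from a small, faster tier (think
    # Optane-class memory) and fall back to the slower backend (think NAND flash)
    # only on a miss. Purely illustrative, not Pure's DirectMemory Cache.
    from collections import OrderedDict

    class ReadCache:
        def __init__(self, backend_read, capacity=1024):
            self.backend_read = backend_read      # function: block_id -> data
            self.capacity = capacity
            self.cache = OrderedDict()            # LRU order: oldest first

        def read(self, block_id):
            if block_id in self.cache:            # hit: fast-tier latency
                self.cache.move_to_end(block_id)
                return self.cache[block_id]
            data = self.backend_read(block_id)    # miss: full flash latency
            self.cache[block_id] = data
            if len(self.cache) > self.capacity:   # evict least recently used
                self.cache.popitem(last=False)
            return data

    # usage (hypothetical backend): cache = ReadCache(lambda b: slow_flash_read(b), capacity=4096)

The point of the sketch is simply that repeat reads never pay the slower-media round trip twice, which is where the latency win comes from.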

Final Thoughts

I even had the opportunity to meet with Brian Schwarz, a VP of Product Management, in a scheduled one-on-one session. The pleasure was certainly mine: Brian's willingness to share stories, warts and all, mixed with some serious discussion of the roadmap, made for a great conversation. I appreciated the chat.

A vendor that keeps expanding their space, keeps innovating, and keeps solving problems will always be intriguing. Pure has a track record of regularly improving on great design. I believe that the future of this company will be bright for a long time to come.

About the author

Matt Leib

Long-time engineer and presales architect specializing in storage, virtualization, and orchestration in the cloud.
