Pure Storage has taken a novel approach to providing its software products in the cloud with Cloud Block Store.
Rather than simply running a ported version of its software in a virtual machine, Pure Storage has chosen to operate at a different level of abstraction, treating cloud services as analogs of physical infrastructure components. Pure treats EC2 instances as something more like CPUs, rather than collections of compute, storage, and operating system, and uses S3 storage as an analog for physical storage devices like flash.
This approach involves some tradeoffs, but it’s consistent with the Pure Storage approach to building its physical products.
Cloud Block Store is itself an abstract service designed to provide an outcome. Customers should not really need to care what is underneath the service, just as customers of Pure’s physical arrays don’t need to care which brand of CPU or flash is used. Customers rely on Pure’s engineering expertise to make good choices for how to construct their product to meet the design goals of the product.
Pure’s physical arrays couldn’t deliver the same kind of storage performance if they just used generic whitebox x86 servers with SSDs in them and a stock operating system. Instead, Pure takes standard components and augments them with custom software, specialised interconnect hardware, and direct flash access to provide vastly better performance than customers could achieve by assembling their own systems and simply purchasing Pure’s Purity operating system to run on them. It’s a clever combination of mass-produced, generic components and Pure’s special expertise.
By investing in understanding the cloud components available, Pure’s Cloud Block Store team is using the same skill and expertise Pure’s engineers use when they learn the intricacies of the specific flash chipsets Pure uses. This deep understanding means they can make good choices about how to combine components to build the product.
By understanding the cloud components available, and choosing the right combination of them, Pure Storage can deliver a service that is optimized for its intended use, rather than a generic service that anyone can get by obtaining the raw components themselves.
Customers are looking for an outcome, not technologies, and Cloud Block Store is designed to provide that outcome. Pure’s engineers have put in the hard work so that customers don’t have to.
Cloud Block Store in Multi-Cloud
This component approach allows Pure to adapt its product to provide the same customer experience even when the underlying components change. Pure can change the brand of flash storage in its arrays and customers don’t need to care. With Cloud Block Store, Pure can change cloud provider and still deliver the same experience because it invests in optimizing the combination of cloud components.
Bringing Cloud Block Store to Azure means looking at the compute and storage options that Azure makes available, and adapting the combination to suit.
This gives customers confidence that they can use Cloud Block Store in their choice of cloud location and not have to worry about missing out on performance or functionality because Pure provides the experience they’re looking for. Even better, as the various clouds change the components on offer, such as by adding new instance types or changing the durability of backend storage services, Pure can adapt the Cloud Block Store service to take advantage of advances that make sense, while ignoring others. Customers don’t have to worry about the bewildering array of instance types on offer, because Pure takes care of it.
Customers benefit in a couple of key ways: Firstly, customers don’t need to worry about whether they’ve made an optimal choice, because Pure works on the optimization problem on their behalf. Secondly, customers can safely change their minds later if they decide a particular cloud location isn’t working out. While there can be benefits to specializing in a single way of doing things, such as going all in on AWS or Azure, it can be very difficult to know that you’ve made an optimal choice.
With the pace of change in cloud, there’s substantial risk that what was optimal six months ago is no longer optimal today. Operating at a higher level of abstraction helps insulate customers from that risk by reducing the switching cost if customers decide to move from one cloud to another for whatever reason.
There’s a further, often overlooked benefit: when there’s a risk of making an incorrect choice, customers often delay making a decision and spend time and money researching and testing to reassure themselves that they’re not making a poor choice. By removing this anxiety, customers can proceed with projects that deliver business benefits sooner. The cost of delay is substantial when your competitors are forging ahead at speed.