Featured NetApp Tech Talks

The Asset is the Data, Not the Storage

Data is the new currency. By now we’ve all heard that phrase or some variant of it. In an era of transformation, an organization’s differentiation comes not from the product it produces but from how it leverages data to enrich the experiences of its customers.

Still obsessing over your rack full of storage and cables? Stop! It’s worthless unless the data it stores is adding value.

Data is the Asset That Drives a Modern Business

If you think Amazon is just a retailer, you’re wrong. Or at best, you’re only partially right. If you purchase goods on the site even semi-frequently, chances are the website will display just the product you are looking for, or sometimes a product you had no idea you needed until it happened to show up while you were browsing. Nobody believes that happens by accident. It’s understood that Amazon has information about your shopping habits, your likes and dislikes, even what time of day you are likely to make a purchase, and it puts that information to good use.

A company does not have to be a digital native like Amazon for data to drive their business. An insurance company can use data to determine risk, optimize prices, or detect fraud. Professional sports leagues could use data to scout for new players, enhance team performance, and even improve player safety. Even a small local retailer can use data they collect about customer buying patterns to optimize decisions about inventory, marketing, and pricing.

It should take very little to convince an organization that its most valuable asset is its data. After all, the place where all of your servers, your storage, and your networking gear live is called a data center. Many enterprises are now shifting their focus away from where data is stored, or what kind of storage holds it, toward a “centers of data” mindset, meaning they plan their computing operations around the data that will drive value.

The Commoditization of Storage

In the past, when designing data centers or applications, a lot of care went into ensuring that the correct storage system was designed and purchased for the use of an organization or even a single application. The popularization of cloud services that allow storage to be purchased in a self-service model with consumption-based pricing has greatly simplified the procurement process. Decisions still need to be made around availability, durability, and performance, but the underlying infrastructure is of little concern as long as the service meets the requirements of the application.

This has created a challenge for storage companies. They can no longer differentiate on characteristics that are table stakes for a cloud service. Additional value is needed beyond simply being a dumping ground for data with no enhanced functionality. Enterprise IT is transitioning away from being systems-centric and becoming data-centric. While providing a safe and performant system for storing data is necessary, it is not sufficient; enabling a business to leverage its data for multiple high-value purposes is what matters.

Data Has No Value if it’s Not Accessible and Actionable

If data is the asset in our analogy, then what makes that asset liquid is the information that can be derived from it. Dumping all the data you possess on your customers, or on weather patterns, or on police reports for a region, or whatever the case may be, onto a storage platform and doing nothing with it will simply waste money, power, and cooling. For the data to have value, it needs to be accessible to an application that can extract the information and insights that will drive the business forward.

Application developers can easily provision storage from the cloud storage offering of their choice when designing a new application. There are a variety of cloud providers to choose from, and within them block, file, and object offerings are typically all available. The choice may come down to performance, accessibility, or ease of use. Oftentimes decisions about storage are made based on the primary use case of the data being collected, but to unlock the true potential of the data it must be accessible by any application that can extract useful information from it. If the data is not stored in a location or on storage that makes it accessible, or at least portable, its value decreases significantly.

What happens if your data is stored in Amazon S3, for example, and you would like to analyze it with another application that cannot communicate with S3 natively? Are you going to copy that data to EFS or EBS? Additionally, what if the application is in another region, or another public cloud altogether? The data can most likely be moved to another location or type of storage, but only at great expense in money and time. This is the all-too-familiar problem of data gravity.

Overcoming the Challenges of Cloud Storage

The means of overcoming the problem of data gravity don’t often exist within the cloud. Typically a cloud provider will make it possible to clone or replicate a data set within a region, but the customer ends up paying to store the data twice. Moving or copying data from one region to another may be entirely possible, but egress charges make moving large amounts of data prohibitively expensive. This is compounded for data that may need to be copied or moved multiple times. To exacerbate the problem, data that needs to cross the boundary of a cloud provider to be used in another cloud, or even back on premises, introduces a new set of challenges.
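To make the egress problem concrete, a little back-of-the-envelope arithmetic helps. The rate below is an illustrative assumption, not a quoted price from any provider, but it shows how quickly repeated moves of a large data set add up:

```python
def egress_cost(gigabytes: float, rate_per_gb: float = 0.09) -> float:
    """Rough egress charge for moving data out of a cloud region.

    The default rate of $0.09/GB is an illustrative assumption;
    actual pricing varies by provider, region, and volume tier.
    """
    return gigabytes * rate_per_gb


# Moving a 100 TB data set out of a region once at the assumed rate:
one_move = egress_cost(100 * 1024)        # 102,400 GB * $0.09 = $9,216.00

# A workflow that copies the same data set three times (region to
# region, cloud to cloud, cloud to on premises) triples that bill:
three_moves = 3 * one_move                # $27,648.00
```

The exact figures depend on the provider’s price sheet, but the shape of the problem is the same everywhere: the cost scales linearly with both data size and the number of times the data crosses a billing boundary.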

The challenges introduced by storing valuable data on native cloud storage offerings could likely be overcome by a skilled developer or a team of developers. But if the true value of computing is the information that can be extracted from an enterprise’s data, are development cycles well spent overcoming problems with storage? The fact that these challenges are not unique means that solutions built to solve them can be used across an entire industry. If a storage company can assist with managing a customer’s data, then it has made the transition from being a storage-first company to being a data-first company.

Becoming a Data First Company

While I was at AWS re:Invent 2018 I had the opportunity to speak with NetApp and hear how they are helping their customers solve issues that often arise when managing data in the cloud. It became clear that they are focused on moving away from being a traditional storage company to a data first company that can help their customers make the best use of their data.

The clearest example of this is the recent announcement of their Cloud Volumes Service for AWS, Azure, and GCP. The product delivers file-based services running on multi-tenant, bare-metal hardware that is directly connected to each respective cloud. By delivering a familiar storage platform that can serve either NFS or SMB, there is no need for developers to rewrite existing applications. A familiar centralized interface simplifies operations, and the ability to provision everything via API provides the speed of delivery that developers and organizations need to innovate quickly.

Cloud Volumes Service also works with NetApp Cloud Sync to make datasets available outside the confines of a single cloud or on-premises data center while also solving the problem of keeping data sets up to date in multiple locations. By continuously syncing deltas in the background, data transfer costs are minimized and the most recent data set can be made available to applications regardless of location.
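The core idea behind delta syncing is simple: instead of re-transferring an entire data set, compare what you have against a record of what was last synced and move only the pieces that changed. The sketch below is a minimal illustration of that technique using content hashes over local files; it is not NetApp Cloud Sync’s actual implementation, and the function names are hypothetical:

```python
import hashlib
from pathlib import Path


def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def changed_files(source: Path, manifest: dict) -> list:
    """Compare files under `source` against a manifest of previously
    synced digests. Return only the relative paths whose contents are
    new or changed, and update the manifest in place.

    Only these paths would need to be transferred, which is why a
    delta sync moves far less data than a full copy.
    """
    deltas = []
    for path in sorted(source.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(source))
            digest = file_digest(path)
            if manifest.get(rel) != digest:
                deltas.append(rel)
                manifest[rel] = digest
    return deltas
```

On the first run every file is a delta; on subsequent runs only modified files appear, so an unchanged 100 TB data set costs nothing to keep current. Real sync services add change detection (timestamps, block-level diffs), concurrency, and retry logic on top of this basic comparison.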

Not to be confused with Cloud Volumes Service, NetApp also offers Cloud Volumes ONTAP. This also delivers NetApp file services to cloud workloads, but runs on a combination of virtual machine and block storage resources on either AWS or Azure. This is essentially NetApp software running in the cloud, leveraging existing services to provide familiar file services and a management plane to both development and operations teams. This can speed application development and deployment times as well as provide greater availability of data for use with other services and applications.

By offering these services as well as many others, NetApp has shifted its business model to that of a data-first company. As such, it will enable customers to derive from their data the value necessary to transform their businesses for the 21st century. With a wealth of information extracted from their data, these newly transformed businesses will be “rich” with the currency of data, thanks to companies like NetApp that understand how to drive value in the data economy.

About the author

Ken Nalbone

Ken is an IT infrastructure professional with over 15 years’ experience. His areas of specialty are the software-defined data center and cloud technologies. In addition to being a writer for Gestalt IT, Ken is an Event Lead for the Cloud Field Day and Tech Field Day series of events.
