Many associate cloud-native applications with object-based storage. Rightly so, because object storage can store large quantities of data cheaply, and services like AWS S3 were one of the earliest forms of cloud data storage available.
As you might expect, not all object storage services are created equal, even though there are unofficial standards for object storage compatibility (like S3 API compliance). The biggest differences between services show up in performance and the associated cost.
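To make "S3 API compliance" concrete, here is a minimal sketch of how the same client code can target different S3-compatible backends by varying only the endpoint. The on-prem endpoint below is a hypothetical placeholder, and the config-building helper is purely illustrative (real SDKs such as boto3 accept an `endpoint_url` parameter for this purpose).

```python
# Sketch: S3-compatible services expose the same API surface; only the
# endpoint (and credentials) change. The on-prem endpoint below is a
# hypothetical placeholder for illustration.

ENDPOINTS = {
    "aws": "https://s3.amazonaws.com",              # AWS S3
    "on_prem": "https://objects.example.internal",  # hypothetical on-prem array
}

def s3_client_config(service: str, region: str = "us-east-1") -> dict:
    """Return the keyword arguments an S3 SDK client would accept."""
    return {
        "service_name": "s3",
        "endpoint_url": ENDPOINTS[service],
        "region_name": region,
    }

# Application code that uses the client stays the same either way:
aws_cfg = s3_client_config("aws")
prem_cfg = s3_client_config("on_prem")
```

Because the API surface is shared, switching backends is mostly a matter of configuration rather than code changes, which is exactly what makes the performance and cost differences between services the deciding factors.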
And we all know storage performance is a large factor in how well an application scales in production.
Production Applications Need Consistent Performance Regardless of Storage Type
Regardless of whether the storage is file-based or object-based, production applications need consistent performance. While most object-based storage services perform well enough at the outset, it's performance at scale that matters. Consistency is especially hard to find when an application requires both high-performance object-based and high-performance file-based storage to keep up.
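"Consistent performance" is best measured at the tail, not the average. The sketch below, using synthetic numbers, shows two stores with identical mean latency but very different 99th-percentile latency, the kind of inconsistency that only surfaces at scale.

```python
import statistics

def p99(samples):
    """Return the 99th-percentile value of a list of latency samples (ms)."""
    ordered = sorted(samples)
    index = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return ordered[index]

# Synthetic latencies (ms): both stores average 10 ms, but the second
# one has an occasional slow outlier that dominates the tail.
consistent = [10.0] * 100
spiky = [9.0] * 99 + [109.0]  # same mean, much worse tail

assert abs(statistics.mean(consistent) - statistics.mean(spiky)) < 0.01
print(p99(consistent), p99(spiky))  # the tail tells the real story
```

An application issuing thousands of requests per second hits that tail constantly, so a store that looks fine on average can still stall production workloads.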
This is not limited to customer-facing 'production' applications. A growing set of similarly critical applications are the tools that support developers, data scientists, and platform and infrastructure engineers in their day-to-day work.
The performance of the pipelines these teams use to build, run, and operate applications is crucial: it reduces the time wasted waiting for jobs to complete and gets new versions of code, whether customer-facing bug fixes or performance and security improvements, deployed to production faster.
The systems supporting these DevOps teams are therefore becoming more important; while not customer-facing, they are still critical systems for those teams. And the number of teams working this way is growing, with analytics, machine learning and artificial intelligence, and even infrastructure and platform teams adopting 'as code' methodologies.
Unsurprisingly, the infrastructure running these pipelines matters. Certain parts of a pipeline can take advantage of object-based storage, while other application components fare better with more traditional file-based storage. Both, however, share the same requirement: consistent performance at scale.
Let's clarify this with an example. Most CI/CD pipelines test and build an artifact of some sort, like a compiled binary or a container image. Code is pushed through various tools in the pipeline that test it for performance and security issues and verify that it behaves as expected, both on its own and when cooperating with other software components. At the end of the pipeline, the resulting artifact is stored in a repository, where many different artifacts, and multiple versions of each, need to be stored efficiently and kept ready for production deployments.
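The test-build-publish flow above can be sketched as a toy pipeline; the stages and the content-hash versioning scheme here are illustrative stand-ins, not any particular CI tool's API.

```python
import hashlib

def run_pipeline(source: bytes) -> dict:
    """Toy CI/CD pipeline: each stage feeds the next, and the final
    artifact is identified by a content hash, ready to be pushed to a
    repository."""
    stages = {}

    # 1. Test: trivial stand-in check (real pipelines run unit,
    #    security, and performance tests here)
    stages["tests_passed"] = b"bug" not in source

    # 2. Build: stand-in "compilation" producing an artifact
    artifact = b"built:" + source
    stages["artifact"] = artifact

    # 3. Publish: compute the digest that identifies this artifact
    #    version in the repository
    stages["digest"] = hashlib.sha256(artifact).hexdigest()
    return stages

result = run_pipeline(b"print('hello')")
```

Every stage reads and writes storage, so the pipeline as a whole runs only as fast as its slowest storage tier, which is why the infrastructure underneath matters.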
JFrog Artifactory is one such tool used in the pipeline that is designed to manage artifacts built in the CI/CD process, providing version control and other features. Artifactory uses a combination of a relational database for the index and data models and object-based storage for the artifacts.
A one-size-fits-all approach to optimizing this application's performance would not work. The database needs file-based storage, while the artifacts are best stored on scale-out object-based flash storage for performance and scalability. The database is hard to scale out from an application perspective, and its performance is largely dependent on storage performance. The file repositories can benefit from multiple scale-out object buckets running on multiple underlying array nodes, avoiding the additional overhead of managing server sprawl at the application layer.
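A minimal sketch of this split, with a relational index alongside an object store, is shown below. SQLite (in memory) and a plain dict stand in for the real database and object bucket, and the schema is purely illustrative, not Artifactory's actual data model.

```python
import hashlib
import sqlite3

# Relational index: file-based storage in production, in-memory here
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE artifacts (name TEXT, version TEXT, sha256 TEXT)")

# Object store for the blobs themselves (a dict stands in for a bucket)
bucket = {}

def store(name: str, version: str, data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    bucket[digest] = data                       # blob -> object store
    db.execute("INSERT INTO artifacts VALUES (?, ?, ?)",
               (name, version, digest))         # metadata -> database

def retrieve(name: str, version: str) -> bytes:
    row = db.execute(
        "SELECT sha256 FROM artifacts WHERE name=? AND version=?",
        (name, version)).fetchone()
    return bucket[row[0]]

store("myapp", "2.0.0", b"container-image-bytes")
```

The two halves have different scaling profiles: the index benefits from low-latency file storage underneath a single database, while the blobs can spread across many buckets and nodes.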
It’s Not ‘Or’, It’s ‘And’
With many applications requiring a mix of object- and file-based storage, storage management across these different storage types risks becoming scattered, fragmented, and inconsistent, consuming a larger share of the storage admin's time.
With many different storage systems and services across on-prem and cloud, organizations run the risk of having gaps in data protection, compliance monitoring, and other storage management tasks.
But perhaps the most important storage management task is being able to guarantee performance, which is much harder when dealing with these hybrid applications requiring both file and object-based storage.
With FlashBlade, the industry's first unified fast file and object storage platform, supporting NFS and S3-compliant object storage in a single array, managing storage performance becomes much easier: the system delivers scale-out performance for both file-based and object-based storage from the same array.
In addition, mature storage management features include data reduction, array-level snapshots for data protection, and data tiering to optimize storage costs. The object bucket replication feature in FlashBlade allows you to archive old and less frequently used datasets to a slower, cheaper object-based storage service for compliance and better ROI.
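An archive policy like the one described can be sketched as a simple age-based rule. The 90-day threshold and tier names below are assumptions for illustration, not FlashBlade's actual replication configuration.

```python
from datetime import datetime, timedelta, timezone

# Assumed threshold; a real policy would be tuned per dataset
ARCHIVE_AFTER = timedelta(days=90)

def tier_for(last_accessed: datetime, now: datetime) -> str:
    """Decide whether an object stays on the fast tier or is replicated
    to a cheaper archive bucket, based on time since last access."""
    return "archive" if now - last_accessed > ARCHIVE_AFTER else "fast"

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
hot = datetime(2024, 5, 20, tzinfo=timezone.utc)   # accessed 12 days ago
cold = datetime(2024, 1, 1, tzinfo=timezone.utc)   # accessed ~5 months ago
```

Applying a rule like this at the bucket level keeps frequently used datasets on fast flash while older data moves to cheaper storage automatically.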