Public cloud services are undoubtedly the fastest way for any individual or organization to get compute and storage resources in a pinch. There is no need to manage infrastructure capacity: you get what you need, when you need it. However, consuming services instead of being the infrastructure services provider creates new challenges. As the saying goes, with cloud it's cheap to fail, but expensive to succeed.
The Challenges of Data Gravity and Multi-Cloud Strategies
The concept of "data gravity", coined by David McCrory almost 8 years ago, is increasingly relevant in an era of exponential, almost infinite data growth. Data gravity is the idea that data has mass. The heavier the mass, the more gravitational force it exerts. Just as in classical physics, heavier objects are not only harder to move, but they also tend to attract other objects to them. In our case, once data reaches a critical mass, it attracts more and more data.
If we move more data and workloads to a public cloud provider, cost escalation can quickly set in. Data mass, standardization, vendor lock-in, and ingress/egress charges can make the move quite costly. And don't forget bandwidth requirements: each major public cloud provider offers direct WAN connectivity, but it comes at a hefty price and further increases an organization's dependence on a given provider.
Another option would be to pursue a multi-cloud strategy and try to leverage cost efficiencies where possible. But doing so often overlooks the soft costs of having to train personnel to operate two or more services that use entirely different terminology, technical implementations, and configuration and documentation procedures. These disparate environments also need to be monitored. There is also an expectation that data will move fluidly between clouds, which is not as seamless as it appears. The small cost savings you might eventually achieve are offset by considerable roadblocks. Now factor in the infrastructure costs of establishing direct WAN connectivity with multiple cloud providers, and it becomes questionable whether a multi-cloud strategy makes sense at all.
We still have the data gravity issue looming. Where is the center of gravity? Where does the data get stored? Should we go all in on public cloud or keep most of it on-premises? Each option has its own challenges, and a balance must be found between staying in control (i.e., running everything on-premises, with the CAPEX costs that entails) and opting for convenience (relinquishing control, with many extra costs that may not be immediately visible).
Increasingly relevant today is not just data gravity, but data sovereignty. Not all data is meant to be directly stored at a public cloud provider, for a variety of reasons. It might simply be because of data policies outlined by an organization. It may also be due to regulatory or legal requirements that impose specific rules as to where the data must be physically located. Many organizations want to leverage the efficiencies of public cloud providers but are not yet ready to fully let go of the level of service or support that they have been used to with on-premises infrastructure solutions.
Neutrix Cloud: A Sovereign Multi-Cloud Storage-as-a-Service Solution
What if there was a cost-effective, data center adjacent cloud storage solution which delivered enterprise-class data availability and durability, allows customers to stay in control of their data, all while allowing the data to be served to cloud workloads without the penalty of data ingress and egress?
INFINIDAT has developed Neutrix Cloud, a Storage-as-a-Service offering that addresses all of these needs. The Neutrix Cloud architecture leverages InfiniBox as its storage backend and thus benefits from the many innovations at the core of INFINIDAT's primary storage platform. These include seven nines of data availability, rock-solid data integrity, instant snapshots without adverse performance impact, and Neural Cache. With Neutrix Cloud, organizations get the benefits of a public cloud consumption model with the level of performance, quality, and support they have come to expect from a Tier 1 storage solution.
The great strength of Neutrix Cloud resides not only in its technical prowess, but also in its versatility. INFINIDAT makes no secret that Neutrix Cloud doesn’t just serve as an adjacent cloud storage solution (which can be used as a DRaaS solution for organizations operating InfiniBox) but also as a cloud-enabled primary and secondary storage platform that can support many use cases, such as workloads that can be spread across various public cloud providers.
Neutrix Cloud is directly connected to major cloud providers such as Amazon Web Services (via AWS Direct Connect) and Azure (via Azure ExpressRoute). These direct connections, which would ordinarily be very costly for a standalone organization to operate and maintain, have the added advantage of eliminating ingress and egress data transfer costs.
With this Storage-as-a-Service model, organizations can move applications between cloud providers or leverage compute bursting by taking advantage of the best available pricing and timing, eliminating the hassle of data movement hurdles. The primary data source resides not on the public cloud platforms, but directly on Neutrix Cloud. And because Neutrix Cloud uses a single storage management API for different clouds, there is no need to refactor any code.
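To illustrate the idea, the sketch below shows how a single file share could be mounted identically from compute instances in different clouds, so the application's data path never changes. The endpoint address, export name, and mount point are invented for this illustration; Neutrix Cloud's actual provisioning steps and protocols may differ.

```shell
# Hypothetical example: the same Neutrix Cloud NFS export mounted the same
# way from instances in two different public clouds.

# On an AWS EC2 instance (reached over AWS Direct Connect):
sudo mount -t nfs neutrix.example.com:/exports/app-data /mnt/app-data

# On an Azure VM (reached over Azure ExpressRoute), the exact same command:
sudo mount -t nfs neutrix.example.com:/exports/app-data /mnt/app-data

# In both clouds the application reads and writes /mnt/app-data, so moving
# a workload between providers requires no changes to the data access code.
```

Because the storage endpoint and path are identical regardless of which cloud the compute runs in, bursting or migrating a workload becomes a matter of redeploying compute, not relocating data.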
Getting started with workloads on the public cloud is easy. But when it comes to data storage, the struggle is real. Enterprise customers operating at petabyte scale quickly find that no cloud provider has a solution that satisfies the wide gamut of customer expectations. Organizations are expected either to adapt to the framework set forth by their cloud provider or to go find another solution.
This is where the beauty of Neutrix Cloud shines: it delivers all of the capabilities of an enterprise-grade InfiniBox storage system for Tier 1 applications, bundled in a cloud consumption model that helps organizations achieve cost efficiencies while maintaining sovereignty on their data.
With Neutrix Cloud, organizations can consume public cloud compute resources without sacrificing agility and cost on the altar of data gravity, which ineluctably captures data (and hope) in the space-time fabric of vendor lock-in.