
Can a Backup Appliance Really Help your Enterprise Cloud Strategy? – Part 2

  1. Rubrik’s Doing All the Boring Enterprise Backup Stuff
  2. Can a Backup Appliance Really Help your Enterprise Cloud Strategy? Part 1 – Foundations
  3. Is Rubrik Really a Cloud Solution???
  4. Startup Funding – An Example with Rubrik
  5. Can a Backup Appliance Really Help your Enterprise Cloud Strategy? – Part 2
  6. Driving your Public Cloud Strategy with Data Management
  7. Automation Empowered Backups with Rubrik
  8. Cake and Developer Gold, with a Side of Data Management

Your Cloud Adoption Strategy

In my day job, I’m a consultant who specializes in taking organizations from more traditional models of IT service consumption to cloud-based models. Shockingly, despite what a lot of the tech marketing out there might suggest, my typical customer doesn’t just move everything to the cloud. In general, they all follow similar paths to the adoption of cloud, the enablement of cloud consumption, and the management of these new areas.

The more that I work with different companies, the more I realize that nearly all of them have similar desires: to leverage cloud components to help deliver these services, or at least to offload some of the more trivial daily tasks from the most advanced technical folks on staff. Be it leveraging public components or simply enabling more cloud-like models of delivery, every journey starts somewhere. This is why it’s important to set a baseline as an organization of what cloud will look like for your company.

When working with a customer, I have a belief that cloud is simply two things: Cloud is where your data lives. Cloud is where your applications run. From there, it’s your technical team’s job to collaborate and deliver services and a service delivery model that aligns with your business’s needs.

Typical Cloud Adoption Patterns for Enterprise

In any organization, cloud adoption journeys typically align to multiple phases. While each journey is different, there are commonalities that I’ve found across many of these projects. For the purposes of this post, let’s assume that we’re going to enable basic service consumption of cloud resources, with perhaps a sprinkle of lightweight application refactoring. The graphic below illustrates what this might look like for a company that’s new to public cloud consumption:

The first phase generally deals with how to start consuming very basic public cloud services. This is usually focused on establishing base offerings that help to simplify and augment already deployed applications. Businesses typically use this phase to select their first public cloud provider and gain internal knowledge of how that cloud platform works. This first cycle of service selection and onboarding serves as a guide for how the following phases are likely to be executed.

Phase two builds on phase one and typically involves building out IaaS-based environments similar to what is leveraged today on-prem. It’s important to note that in this phase, and all future phases, many groups are leveraging lessons learned around security and compliance to begin auditing the new public cloud services that are being offered.

In phase three, organizations are typically ready to refactor and to adopt public cloud as the hosting ground for new development and test environments as well as low-level production workloads. This phase commonly includes lightweight application refactoring, where existing applications are rebuilt to take advantage of native cloud offerings where appropriate. Moving a database from a virtual machine in an on-premises vSphere cluster to a managed SQL service like Amazon RDS as part of an application refactoring is a very common use case.
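Just to give a feel for what that refactoring target looks like, here’s a minimal sketch of provisioning a managed MySQL instance with boto3. Everything in it (instance identifier, sizing, credentials) is a hypothetical example for illustration, not a value prescribed by Rubrik or AWS:

```python
# Illustrative only: provisioning a managed MySQL instance in Amazon RDS.
# Identifiers, sizing, and credentials below are hypothetical examples.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

response = rds.create_db_instance(
    DBInstanceIdentifier="app-refactor-db",    # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.t3.medium",            # pick a class to match on-prem sizing
    AllocatedStorage=100,                      # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",     # use a secrets manager in practice
    MultiAZ=True,                              # high availability across AZs
    BackupRetentionPeriod=7,                   # days of automated backups
)

print(response["DBInstance"]["DBInstanceStatus"])
```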

Now that we’ve discussed a very common pattern for cloud adoption, let’s talk about how something as simple and seamless as a backup appliance fits into this strategy.

Phase one

Phase one often aligns with an organization’s desire to leverage cloud services without the overhead of completely re-learning how to manage and operate existing applications. Opportunities for quick wins in this space are often associated with moving messaging and productivity services to SaaS platforms like Microsoft O365. The key here is that businesses offload some of the more commoditized technical offerings to providers who are focused solely on delivering these services.

From an application perspective, the business notices little change, and the technical staff’s workload on trivial platform support services is minimized. The other area where we see this movement is directory services. From a pure infrastructure perspective, optimizing existing practices without breaking the technical bank is the win here. What’s a simple win that helps align with phase one?

Leverage a cloud strategy to get rid of tape!

Tapes are among the cheapest backup media available today, but they require human intervention, they require a place where they can be physically cataloged and archived, and they have a shelf life. Sure, manufacturers say they’re good for 30 years, but in reality I’ve witnessed bad tapes that were under five years old and maintained by a very well-known archival service.

Imagine that we were asked to recover a database or a virtual machine from tape that’s been archived offsite. First, we’d have to find out which tape, or set of tapes, we need to bring back online, and then perhaps order them from our offsite tape storage facility. The lead time needed to retrieve tapes for an archival recovery immediately eats into your recovery time SLA. This is not a new challenge in the enterprise space, and vendors have already fought and won battles here to remove this archival dependency on tape. For years, secondary storage has existed just to circumvent these sorts of issues.
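As a back-of-the-envelope illustration (all numbers invented), here’s how that retrieval lead time chews through a recovery window before the restore itself even starts:

```python
# Invented numbers: offsite tape retrieval lead time eats into a recovery
# time SLA before the actual restore can even begin.
rto_hours = 24.0            # hypothetical recovery time SLA
tape_retrieval_hours = 8.0  # courier / offsite vault turnaround
catalog_and_load_hours = 2.0

time_left_for_restore = rto_hours - tape_retrieval_hours - catalog_and_load_hours
print(f"Window remaining for the actual restore: {time_left_for_restore:.1f} h")
# -> Window remaining for the actual restore: 14.0 h
```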

What happens when you’re sick of paying for, or maintaining, a stack of secondary storage in your datacenter? Even better, what happens if you’re completely out of space in that closet that runs your remote site’s infrastructure? How can you archive your backups somewhere while still retaining the ability to restore? Rubrik has answered this question with an archival solution capable of pushing out to any S3-compatible endpoint.

With an archival policy pointed at S3, Rubrik tiers any data older than the number of days you specify out to the S3-compatible endpoint of your choice. Many organizations will look at this capability and leverage it in two ways: first, long-term retention of archive data without the overhead of dealing with tape; second, an offsite copy of data to have on hand in the event that a total datacenter disaster happens.
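The archival target itself is configured inside Rubrik, so the sketch below isn’t Rubrik’s API. It just shows, with an invented endpoint URL, bucket name, and credentials, what “any S3-compatible endpoint” means from a client’s point of view:

```python
# Minimal sketch of an "S3-compatible endpoint": a custom endpoint URL,
# credentials, and a bucket you can reach. Endpoint, bucket, and keys are
# hypothetical; the real archival target is configured inside Rubrik.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",  # any S3-compatible target
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET",
)

try:
    s3.head_bucket(Bucket="rubrik-archive")
    print("Archive bucket is reachable.")
except ClientError as err:
    print(f"Bucket check failed: {err}")
```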

In my opinion, the former is a great use case. Recovery of a file or a database from an archive dataset would be easy with this structure in place. The latter, which in many cases seems to make sense, is somewhat troubling. I’m not sure many organizations are willing to take the time to completely recover a dataset that was offloaded to a cloud provider back to on-prem infrastructure. In many cases, the data copy time makes this a non-starter.
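A quick estimate (again with invented numbers) shows why: transfer time scales directly with dataset size over whatever link you have back to the datacenter.

```python
# Invented numbers: rough transfer time to pull an archived dataset back
# on-prem over a single internet link.
dataset_tb = 50        # hypothetical archived dataset size
link_gbps = 1.0        # hypothetical effective internet bandwidth
efficiency = 0.7       # protocol overhead, contention, etc.

dataset_bits = dataset_tb * 8 * 10**12
seconds = dataset_bits / (link_gbps * 10**9 * efficiency)
print(f"Estimated copy time: {seconds / 3600:.0f} hours (~{seconds / 86400:.1f} days)")
# -> Estimated copy time: 159 hours (~6.6 days)
```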

Onwards to Phase 2, BUT FIRST!

So far, we’ve touched on common patterns typically seen in organizations that are beginning to adopt public cloud services. We covered how easy it is to leverage an S3 account for storing your backup archives in the public cloud. In my next post, we’ll dive into how you might leverage a great platform like Rubrik to begin making use of public cloud compute services. We’ll also discuss how enabling new services in the public cloud requires a second look at your organization’s security, governance and compliance requirements.

This post is part of a Rubrik Tech Talk series. For more information on this topic, please see the rest of the series HERE. To learn more about Rubrik, please visit https://www.Rubrik.com/.

About the author

Tim Carr

RHCE – Red Hat Certified Engineer

VCP5-DV – VMware Professional Datacenter Virtualization

VCAP5-DCA – VMware Advanced Professional Data Center Administration

VCAP5-DCD – VMware Advanced Professional Data Center Design

VCDX-DCV – VMware Certified Design Expert – Data Center Virtualization

Specialties: Infrastructure Architecture, Storage, Virtualization, Automation, Configuration Management
