Is your data on-premises or in the cloud?
Enterprise architecture is continually changing: once reliant on physical data centers, it has already adapted to the demands of the cloud. Enterprises increasingly have data both in their physical data centers and in the cloud, and they need a standard way to move data between on-premises and cloud environments. The current global crisis threw a wrench into the management of that data.
So, what do you do to best manage your data in the current environment? Well, one step is to adopt a disaggregated architecture.
I got a chance to sit (virtually) with fellow blogger Max Mortillaro, Vaughn Stewart of Pure Storage, and Lee Dilworth of VMware to talk about disaggregated architecture for enterprise hybrid cloud. We talked about the need for the architecture, who it benefits, and why companies should look into this architecture.
The merits of a physical data center versus the cloud can be argued heavily on both sides. But the biggest point of debate should always be the data – where it’s been, where it’s going, and what condition it is in when it gets back.
I have long felt that the best solution for storing data is to keep it in direct-attached storage (DAS). That gives you a landing spot where local IT administrators can analyze and report on any irregularities.
But am I wrong? Is there a better way that keeps costs low, has efficient resources, and keeps the information pristine?
Shared storage area networks (SANs) offer enterprises many advantages – including enterprise data management features, efficiencies, and a lower total cost of ownership spread across many applications in the data center.
Having been involved in many data center builds over my career, I know that architecting the right data center is crucial for any company. Of course, like anything else, it’s also ever-changing.
What is Disaggregation?
Simply put, it’s the ability to utilize and optimize resources as you need them. Separate compute from storage, wherever it’s located, and make your infrastructure run more smoothly.
At the end of the day, all information can ultimately end up in your data center, on your DAS, giving you a lower total cost of ownership (and warm fuzzies). Your hybrid cloud can be set up and used in a converged, disaggregated environment or in a hyperconverged environment.
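To make that contrast concrete, here is a minimal sketch – plain Python with made-up numbers and class names, not any vendor’s API – of the difference between a hyperconverged node, where compute and storage scale together, and a disaggregated design, where each pool grows on its own:

```python
from dataclasses import dataclass

# Hyperconverged: every node bundles compute and storage, so adding
# capacity of one kind always drags the other along with it.
@dataclass
class HCINode:
    vcpus: int = 64
    storage_tb: int = 20

# Disaggregated: compute hosts and the storage pool are separate
# resources that can each be sized (and refreshed) independently.
@dataclass
class ComputeHost:
    vcpus: int = 64

@dataclass
class StoragePool:
    capacity_tb: int

    def expand(self, extra_tb: int) -> None:
        # Grow the pool without touching a single compute host.
        self.capacity_tb += extra_tb


if __name__ == "__main__":
    # HCI cluster: need more storage? Add whole nodes, CPUs included.
    hci = [HCINode() for _ in range(4)]

    # Disaggregated: the same four hosts, plus a pool that grows alone.
    hosts = [ComputeHost() for _ in range(4)]
    pool = StoragePool(capacity_tb=200)
    pool.expand(100)   # +100 TB, zero new hypervisors or CPUs
    print(len(hosts), "hosts,", pool.capacity_tb, "TB")
```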
But is this the right move?
“There is no right or wrong way,” says Vaughn. “Everyone needs to apply it to their own deployment scenario and management framework, and make a decision.”
Pure has been working with VMware Cloud Foundation (VCF), bringing converged infrastructure together with disaggregated, best-of-breed storage, server, and network components. This helps customers extend VCF onto the fabric of their choice – including Fibre Channel, iSCSI, and now, with vSphere 7, NVMe-oF – for high performance and performance density. The advantage is that compute and storage resources can be optimized independently. Not only that: you can optimize for performance and availability as well as provide dynamic, resizable software-defined storage pools.
“Cloud Foundation has a simple concept called ‘workload domains’,” Lee notes. “Underneath that, there has to be some kind of resource layer – a concept called ‘principal storage’ (currently based around HCI and vSAN). As the product has started to take off, and we looked at a wide variety of use cases, we found customers started to ask for choice. That’s the kickstart to external storage support.”
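As a rough mental model – my own illustration in Python, not VCF’s actual object model or API, and with hypothetical domain names – a workload domain is a bundle of compute with a pointer to whatever principal storage backs it: vSAN in the classic case, or an external array reached over FC, iSCSI, or NVMe-oF once that choice opens up.

```python
from dataclasses import dataclass
from typing import Literal

# Illustrative only: names and fields are my own, not VCF's API.
FabricProtocol = Literal["vSAN", "FC", "iSCSI", "NVMe-oF"]

@dataclass
class PrincipalStorage:
    backend: str              # e.g. "vSAN cluster" or "external array"
    protocol: FabricProtocol

@dataclass
class WorkloadDomain:
    name: str
    hosts: int
    storage: PrincipalStorage

# A classic HCI-backed domain next to one pointed at external storage.
domains = [
    WorkloadDomain("mgmt", hosts=4,
                   storage=PrincipalStorage("vSAN cluster", "vSAN")),
    WorkloadDomain("oracle", hosts=8,
                   storage=PrincipalStorage("external array", "NVMe-oF")),
]
```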
This new framework, with its customization, is very much a “whatever works for you” approach. In some cases, it might just be a repackaged idea with a smattering of cloud. In others, the whole data center is demolished to make room for more socially distanced cubicles.
One plus of disaggregation is that if companies merge, their clouds can be linked together like LEGO® bricks.
Another advantage is simply cost. Vaughn pointed out that there is no lower-cost architecture than a disaggregated environment.
And there is some truth to that.
By separating the storage and compute nodes, you can remove any resources that are not needed: storage nodes don’t need a hypervisor, and compute nodes don’t need the storage stack.
CPUs can focus on compute functions, and storage can be refreshed without bogging down the system. Newer SSD technology can be tested and implemented faster, and the business sees fewer interruptions.
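Back-of-the-envelope math shows why. The numbers below are hypothetical, just to illustrate the shape of the argument: if a workload needs another 100 TB but no more CPU, a coupled design forces you to buy both, while a disaggregated one lets you buy only the flash.

```python
# Hypothetical sizing exercise: grow capacity by 100 TB with no new CPU demand.
EXTRA_TB = 100

# Coupled (HCI-style): each node ships with a fixed ratio of both resources.
NODE_TB, NODE_VCPUS = 20, 64
nodes_added = -(-EXTRA_TB // NODE_TB)       # ceiling division -> 5 nodes
stranded_vcpus = nodes_added * NODE_VCPUS   # 320 vCPUs bought but not needed

# Disaggregated: expand the storage pool alone (say, 50 TB per shelf).
shelves_added = -(-EXTRA_TB // 50)          # -> 2 shelves

print(f"HCI: {nodes_added} nodes, {stranded_vcpus} idle vCPUs; "
      f"disaggregated: {shelves_added} storage shelves, 0 idle vCPUs")
```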
Other advantages include instant flexibility to resolve issues and the ability to keep all hardware and software current without interrupting the workday – wherever you work from.
Keep in mind that even twenty years ago, IT professionals with remote staff knew they needed a solution to keep data safe and employees working. Watching the global network grow over time, there were only ever two questions to ask: “When will it happen?” and “With what infrastructure?”
As Vaughn said: “A decade prior, I led a lot of initiatives around adding support for file services (NFS) into the VMware landscape. At the time of that conversation, there were material differences on what you could do on SAN vs. NAS. Fast forward 14 years later, and I have companies telling me they can only do this on NFS. I am the guy that helped you adopt that, now let me try to reprogram you because everything that was true then is not true now.”
It’ll be interesting to see what we recommend in the next 20 years. Maybe we’ll be in VR workspaces and data will have to travel to Mars or beyond.
As for disaggregation, it’s a simple term that can mean a whole lot. To learn more about optimized hybrid cloud with VCF, please watch the video below, visit the Pure Storage blog, or view this whitepaper.