Hyperconverged infrastructure has been around for a while. We’ve seen companies go public on the strength of the market, and others get acquired for the same reason. It’s a way to simplify the often complex world of provisioning and managing a virtualization infrastructure. But HCI has been around long enough that the limitations of the model have become clear to the enterprise. Any new entrant to the crowded market should have solutions to those problems.
Today, NetApp announced their entry into the HCI market. In their messaging, they hammered home those limitations.
Don’t Hate, Validate
NetApp is keen to illustrate their differences from “first generation” HCI right out of the box. To that end, they’ve done some work to eliminate human error and reduce initial setup complexity. The initial setup requires the creation of a single login, and automatically prompts for the creation of a new vSphere instance or credentials for an existing one. From there, the system actively validates the IP and MAC addresses used for setup, to avoid troubleshooting over a typo. This isn’t just a formatting check, either; NetApp HCI also verifies that IP addresses aren’t already in use.
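NetApp didn’t detail how these checks are implemented, but the idea is familiar from any pre-flight validation script. Here’s a minimal sketch of the same style of checks in Python, assuming a Linux host with ping available; the function names and the sample setup plan are purely illustrative, not NetApp’s code:

```python
import ipaddress
import re
import subprocess

MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$")

def validate_ip(address: str) -> None:
    """Raise ValueError if the string is not a well-formed IPv4/IPv6 address."""
    ipaddress.ip_address(address)

def validate_mac(address: str) -> None:
    """Raise ValueError if the string is not a well-formed MAC address."""
    if not MAC_RE.match(address):
        raise ValueError(f"Malformed MAC address: {address}")

def ip_already_in_use(address: str, timeout_s: int = 1) -> bool:
    """Best-effort liveness check: a single ping (Linux flags) as a stand-in
    for the duplicate-address detection NetApp describes."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), address],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

# Hypothetical setup plan: reject it before any node is touched.
plan = {"mgmt_ip": "10.0.0.50", "node_mac": "b8:59:9f:12:34:56"}
validate_ip(plan["mgmt_ip"])
validate_mac(plan["node_mac"])
if ip_already_in_use(plan["mgmt_ip"]):
    raise RuntimeError(f"{plan['mgmt_ip']} already responds on the network")
```

Catching a fat-fingered or already-claimed address at this stage is cheap; catching it after the cluster is half-deployed is a support ticket.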
Tipping the Scale
One of the other differentiators for NetApp HCI is its focus on scale. Every HCI solution under the sun promises to make it easier to scale storage and compute, but rarely independently of each other. HCI also tends to be expensive, because each additional box brings another round of licensing for data services, which is often priced per core.
NetApp HCI instead offers independent scaling of storage and compute with discrete nodes in a typical 2U chassis. These nodes come in small, medium, and large tiers, giving you a lot of flexibility.
The minimum deployment involves two chassis with two compute and four storage nodes. From there, you can scale as needed. In their announcement, NetApp didn’t go into detail about licensing, but based on their criticism of existing HCI, it doesn’t appear to be per core, but rather usage-based. Combined with the flexibility of provisioning, this should go a long way toward making it more cost effective at scale.
Integrated with Data Fabric
Perhaps more importantly, NetApp HCI ties into the company’s greater Data Fabric ecosystem. This provides a variety of data and file services, but it has other implications as well, chief among them the ability to consolidate workloads. Instead of creating disparate silos that demand more management and resources, NetApp HCI can dynamically allocate and manage resources to guarantee performance independent of capacity. It does this by assigning parameters per workload, allowing a QoS specification with a minimum IOPS floor to prevent performance falloffs.
I don’t know if this completely eliminates the noisy neighbor problem, but NetApp seems pretty confident that they can maintain consistent performance on HCI without the need to over-provision and silo applications.
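Since the storage layer is SolidFire-based, the per-workload guarantees NetApp describes line up with Element-style per-volume QoS settings. The sketch below shows roughly what setting a minimum, maximum, and burst IOPS on one volume could look like; the endpoint, credentials, volume ID, and numbers are illustrative assumptions on my part, not details from the announcement:

```python
import requests

# Hypothetical Element-style JSON-RPC endpoint; the exact path, method name,
# and field layout may differ on a real cluster.
CLUSTER = "https://mvip.example.com/json-rpc/10.0"

payload = {
    "method": "ModifyVolume",
    "params": {
        "volumeID": 42,          # hypothetical volume backing one workload
        "qos": {
            "minIOPS": 1000,     # guaranteed floor, independent of capacity
            "maxIOPS": 15000,    # sustained ceiling
            "burstIOPS": 20000,  # short-term burst allowance
        },
    },
    "id": 1,
}

# Placeholder credentials; verify=False only because this is a sketch.
resp = requests.post(CLUSTER, json=payload, auth=("admin", "password"), verify=False)
resp.raise_for_status()
print(resp.json())
```

The point is less the API call itself than the model: every workload gets a floor it can count on, so one greedy application can’t starve the others.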
Being Data Fabric ready also allows NetApp HCI to integrate easily with the company’s SolidFire and ONTAP offerings. While we’ve recently covered that HCI and hybrid cloud aren’t the same thing, NetApp has recently expanded their tools to make it easier to move workloads and data across clouds, and their HCI solution plays directly into that ability.
NetApp HCI is another move emblematic of the company’s embrace of cloudification. It’s really a hardware extension of the entire Data Fabric concept. On top of that, NetApp looks poised to exploit the second-mover advantage in HCI. The category has gone a long way toward making virtualization easier for organizations to embrace. NetApp HCI keeps all that, but specifically targets three weak spots: setup, scale, and cost at scale. Definitely a well-thought-out launch into a crowded market.