When a category becomes settled, a bit of tedium begins to set in. Room for innovation rapidly shrinks, and the game becomes more about efficiency and refinement than redefinition. That's the state the hyperconverged infrastructure market seemed to be settling into. At the high end of the market, Nutanix led, with SimpliVity a prominent second. For midsized organizations, Scale Computing seemed poised to gain adoption, with a model well suited to that scale. There are still marked differences in price, features, and capability between the players, but the literal configuration of the hardware seemed to have homogenized.
Datrium is trying to change the expectations of hyperconvergence, billing their concept instead as Open Convergence. It is their response to what they see as the traditional issues with HCI. Their basic format is to separate bulk storage from compute, flash, and networking.
To that effect, their DVX Rackscale system provides two separate node types. One is a compute node, which, as its name suggests, hosts the compute for the system. This is the key to everything they are doing. Instead of slavishly holding to a hard definition of hyperconvergence that demands you put everything in the same box, Datrium is being smart about this. Their compute nodes contain everything that requires high-end computation, including a healthy dose of flash storage so that data services and management can run locally. Separate storage nodes provide durable storage, without the need to needlessly scale compute or networking when all you need is more capacity.
So with the basics of the convergence story down, where does the "open" component come in? One of the weaknesses Datrium saw in the HCI market was vendor lock-in. If you're an HCI vendor, it's certainly not a problem to ensure that people keep buying your stuff, but Datrium sees it as a market opportunity and a space in which to distinguish themselves. To that effect, while Datrium is more than happy to fill up your data center rack with their own compute and data nodes, DVX can also support a mixture of third-party servers. This allows organizations to keep utilizing existing resources while still getting much of the flexibility of HCI. All you need to get started is a single Datrium compute node and data node, which is impressively minimal.
The idea behind this approach was to let servers be stateless again. Traditional HCI introduced simplified provisioning and the ability to put your entire virtualization footprint in a rack. But with that consolidation, Datrium sees unfortunate side effects. The main issue is that it makes your servers stateful, creating management and reliability challenges and removing a degree of operational independence. It also creates a lot of chatter between nodes, with the end result that reads often come in remotely, hurting performance. Datrium thinks that by keeping data services in the compute node, but putting durable storage in separate nodes, you can have the best of both worlds.
Beyond this, Datrium also designed DVX to reduce operational complexity. This is partly inherent to making the servers stateless, which simply means fewer management headaches when it comes to replacing hardware. But using dedicated compute nodes also means the data services can be a lot more efficient. DVX ships with a suite of data services that are on by default the second you get it up and running: deduplication, compression, erasure coding, encryption, snapshotting, and replication. These all run on the compute node, which is equipped with a sizable flash cache so that performance doesn't suffer. Before a write is acknowledged to the VM, it is saved in NVRAM located on the data node; from there it is destaged to slower media without compromising performance.
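The write path described above can be sketched in a few lines. This is a hypothetical illustration of the flow, not Datrium's actual software: the class and method names (`ComputeNode`, `DataNode`, `persist_to_nvram`, `destage`) are all invented for the sketch, and the data services step is a stand-in.

```python
class DataNode:
    """Durable storage node: persists writes to NVRAM first, slower media later."""
    def __init__(self):
        self.nvram = []   # fast, persistent write log
        self.disk = []    # slower durable media

    def persist_to_nvram(self, record):
        self.nvram.append(record)
        return True       # persistence confirmed, so the ack can be sent

    def destage(self):
        # Background step: flush NVRAM records down to slower media.
        self.disk.extend(self.nvram)
        self.nvram.clear()


class ComputeNode:
    """Runs VMs and data services, with a local flash cache for reads."""
    def __init__(self, data_node):
        self.data_node = data_node
        self.flash_cache = {}

    def write(self, key, block):
        reduced = self._data_services(block)
        acked = self.data_node.persist_to_nvram((key, reduced))
        self.flash_cache[key] = block  # hot copy stays local, so reads don't go remote
        return acked                   # the VM sees the ack only after NVRAM persistence

    def _data_services(self, block):
        # Stand-in for dedupe / compression / erasure coding / encryption.
        return block


dn = DataNode()
cn = ComputeNode(dn)
assert cn.write("blk0", b"hello")  # ack implies the write survived to NVRAM
dn.destage()                       # later, data lands on slower media
```

The point of the ordering is that the VM's write latency is bounded by the NVRAM persist, not by the slower media behind it, while local flash keeps subsequent reads off the network.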
Encryption is done after all other data services, and the data then remains encrypted in transit and at rest for its lifecycle within DVX. To reduce the performance cost of encryption, Datrium utilizes the AES engines on the compute node Xeons. Since that work is spread across hosts, this provides a fairly linear scaling path and keeps encryption from ever bottlenecking performance.
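The ordering matters: ciphertext is effectively random, so compression has to run before encryption or it gains nothing. A minimal sketch of that pipeline ordering, using `zlib` for compression and a toy keystream XOR as a self-contained placeholder for the hardware-accelerated AES the article describes:

```python
import hashlib
import zlib

def toy_encrypt(data, key):
    # Placeholder for AES (in DVX, offloaded to the Xeons' AES engines).
    # A sha256-derived keystream XOR keeps this sketch dependency-free;
    # XOR is its own inverse, so the same call also decrypts.
    stream = hashlib.sha256(key).digest()
    keystream = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))

def write_pipeline(block, key):
    reduced = zlib.compress(block)   # compression first: plaintext is compressible,
    return toy_encrypt(reduced, key) # ciphertext would not be

block = b"aaaa" * 256
out = write_pipeline(block, b"secret")

# Round trip: decrypt, then decompress, recovers the original block.
assert zlib.decompress(toy_encrypt(out, b"secret")) == block
```

Running encryption last also means the data leaves the compute node already encrypted, which is what lets it stay encrypted both in transit and at rest.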
DVX provides the same convenience of provisioning that traditional HCI is known for, but takes away a lot of the headaches around flexibility and lock-in. Data services are included right out of the box, and you can get started with DVX while utilizing much of your existing hardware. In a market where differentiation is in short supply, Datrium's DVX Rackscale system stands out.