The data center is a post-apocalyptic wasteland. Ravaged by an invasive new species called “The Cloud”, previous enterprise technology has been decimated, destroyed, or completely mitigated. As fickle CTOs furtively abandon their previously hallowed IT principles, only a select few true believers keep the hope alive against an all-out victory by The Cloud.
Even these courageous few are forever changed by the experience, for one cannot encounter The Cloud and remain unchanged. No, to survive this powerful force, one must emulate it. From the ashes of the old data center comes a new, lean, agile force. This is one of their stories. This is Permabit.
Permabit is not a new player in the data reduction segment; the company was founded in 2000. But recently they've put a new spin on their offering. Instead of focusing solely on supplying their solution to the OEM market, they are now marketing directly to large enterprises. Specifically, they're integrating with Red Hat Enterprise Linux and Ubuntu to bring their well-known data reduction chops directly to the Linux kernel.
This is a pretty big shift for the company. But they felt there was a market need to serve enterprises directly, alongside their traditional OEM customers. Seeing the benefits that hyperscale efficiency brought to cloud providers, many enterprises want and need to make their own data centers efficiency-focused. Permabit sees its Virtual Data Optimizer (VDO) solution as ideally suited to deliver this.
VDO is currently supported on RHEL 6 & 7, as well as Ubuntu, so there's definitely a wide Linux-based audience for it. Impressively, Permabit isn't trying to create any feature disparity between their OEM and enterprise products; what you can put in your data center has the exact same capabilities. It's a bold step, and it's nice to see that Permabit isn't willing to offer a worse customer experience to the enterprise, even at the risk of cannibalizing their existing product.
In terms of capabilities, what is Permabit offering? High-speed inline dedupe on Linux-based systems with a minimal memory footprint. Permabit offers some impressive numbers to back this up: they claim dedupe throughput of 8GB per second per node, achieved using about 268MB of RAM per physical terabyte of data and one CPU core. Compare that to the dedupe capabilities of ZFS, which needs over 1GB of RAM per TB. They conservatively estimate this can yield a 2.5:1 data reduction ratio, cutting storage costs by over 50%, even after factoring in licensing. Speaking of licensing, it's reasonable on the whole: $199 annually per 16TB.
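To see how a 2.5:1 reduction translates into "over 50%" savings even with licensing included, here's a back-of-the-envelope sketch. The reduction ratio and the $199-per-16TB annual license come from Permabit's figures; the raw storage price per terabyte is a hypothetical assumption chosen purely for illustration.

```python
# Rough savings estimate from Permabit's claimed numbers.
# RAW_COST_PER_TB is a hypothetical assumption; the 2.5:1 ratio and
# $199/yr per 16TB license fee are the figures cited above.

RAW_COST_PER_TB = 100.0            # hypothetical raw storage cost, $/TB/yr
REDUCTION_RATIO = 2.5              # Permabit's conservative dedupe estimate
LICENSE_PER_TB_YEAR = 199.0 / 16   # license fee per physical TB per year

def effective_cost_per_logical_tb(raw_cost_per_tb: float) -> float:
    """Cost to store 1 logical TB after 2.5:1 reduction, license included."""
    physical_tb = 1.0 / REDUCTION_RATIO           # 0.4 physical TB per logical TB
    storage = physical_tb * raw_cost_per_tb       # raw media cost
    license_fee = physical_tb * LICENSE_PER_TB_YEAR  # VDO license on physical TB
    return storage + license_fee

cost = effective_cost_per_logical_tb(RAW_COST_PER_TB)
savings = 1.0 - cost / RAW_COST_PER_TB
print(f"effective cost: ${cost:.2f}/logical TB, savings: {savings:.1%}")
```

At the assumed $100/TB, the effective cost works out to roughly $45 per logical terabyte, a savings of about 55%, which lines up with the "over 50%" claim.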
I love to see an established player like Permabit aggressively change their market strategy like this. While dedupe might not grab the headlines, it matters. At Tech Field Day last month, one of the first questions asked about anything storage-related was if and how it handled deduplication. The problem is that storage is hard. Technology has a pretty strong recency bias, so it's easy to forget a player like Permabit. However, with VDO, they're bringing proven dedupe capabilities directly to the enterprise.