What do you get when you start a company with someone who invented IP Switching, employee #8 at Sun Microsystems, an engineer who helped develop HDMI, and a former CEO of Zero Motorcycles? DriveScale, of course!
I was at Tech Field Day last week and got to see DriveScale up close and personal. Googling the company beforehand didn’t exactly reveal a surfeit of information. There was a basic Bloomberg profile, some details on funding rounds, and of course a snarky blurb from The Register. I was able to get a lot of the basics from a few TFD delegate previews, but these seemed just as hungry for detail as I was. Basically, I knew they were offering some kind of storage solution involving an adapter sitting in a server rack, which is to say not much. If nothing else, I knew their company typeface was pretty cool.
They presented with their CEO Gene Banman, founder Satya Nishtala, and several other staff members. The company started in March of 2013 and officially came out of stealth this May, so the dearth of detail on the company isn’t too surprising. DriveScale’s DNA leans heavily on a legacy with Sun Microsystems, with most of the founders and staff having worked there at one time or another. There seem to be a lot of interesting and diverse minds at the company; this definitely isn’t a fly-by-night startup. I’ll be honest: seeing the breadth of the company’s experience in other ventures, I felt a little intimidated.
So now with the origin story out of the way, what exactly is DriveScale all about?
DriveScale wants to change how storage is considered in your datacenter. Think about how storage is added to a typical setup. If you need more storage, you throw a couple of pizza boxes on the rack, adding storage, but also compute, memory, and connectivity. That’s great, if you just happen to need your storage to scale according to your vendor’s specification.
DriveScale is more interested in investing in architecture rather than simply a rack appliance. Don’t get me wrong, they have a rack appliance, and they will license it out to you, but they are designing scale-out hardware management. In a world where each storage device is just a server with storage, they’re trying to bring in a little innovation. What does innovation look like in the storage world? Adding some Ethernet to your JBOD! Yeah, it might be a 40-year-old connection standard, but it lacks a lot of the compromises of other options. PCIe doesn’t scale, isn’t symmetrical, and the driver support just isn’t there. Ethernet at 100Gbit is roughly the same speed as a 16-lane PCIe connection, and it’s symmetrical. RDMA? Well, the DriveScale team just doesn’t like it. So Ethernet it is!
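That bandwidth claim is easy to sanity-check with some back-of-the-envelope arithmetic (my own, not DriveScale’s), assuming PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding:

```python
# Back-of-the-envelope comparison of 100GbE vs. a 16-lane PCIe 3.0 link.
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding,
# so the usable bandwidth per lane is 8 * (128/130) Gbit/s.
pcie3_lane_gbps = 8 * 128 / 130          # ~7.88 Gbit/s per lane
pcie3_x16_gbps = pcie3_lane_gbps * 16    # ~126 Gbit/s across 16 lanes

ethernet_gbps = 100                      # 100 Gigabit Ethernet line rate

print(f"PCIe 3.0 x16: {pcie3_x16_gbps:.1f} Gbit/s")
print(f"100GbE:       {ethernet_gbps} Gbit/s")
# The two are in the same ballpark, which is the point: Ethernet can keep
# pace with a wide PCIe link, and it's symmetrical and switchable across
# a whole rack rather than confined to a single chassis.
```
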
DriveScale sees the server itself becoming an abstraction, simply a set of hardware resources composed, within domain constraints, of various compute, memory, and storage resources. Their Ethernet-attached drives allow them to disaggregate storage from the rest of those abstracted resources. Because Ethernet requires a processor on the endpoint, you can then move some computation to that point, which could have a big impact on analytics. DriveScale isn’t touting that as a major benefit of their solution now, but it could be expanded upon in future iterations. DriveScale is using a 64-bit dual-core ARM-based hardware interposer on the endpoint for this compute power. They aren’t the only company to think this way, but I still love seeing Ethernet extended like this. Throwing a bunch of bandwidth at a JBOD or JBOF lets you bring different management solutions to the table, and DriveScale seems to have a fairly complete vision of how to implement it.
Here’s what DriveScale wants to offer: Simplicity with hardware infrastructure, scale out with appropriately disaggregated hardware, and optimization of those resources. They largely succeed.
Users can set up server racks in a heterogeneous configuration, with stacks of dumb JBOD, so that you’re not wasting CPU on drives. The DriveScale Adapter is an Ethernet bridge to iSCSI that controls both directions on the adapter, so it’s obscured from the user (thankfully for iSCSI haters). All multipathing is handled by DriveScale’s solution; it doesn’t need to be managed or configured by the customer. The customer provides servers and the JBOD, and DriveScale provides the appliance to make them talk. Each rack constitutes its own storage array. The value prop is the destruction of silos between clusters, or if you don’t want buzzwords, everything is in one big storage pool that you can choose to subdivide however you want. Their ultimate long-term goal is to derive insights from configurations to advise what would happen if more CPU or storage were added to the hardware pool.
Today, they’re offering their adapter as part of their architectural storage solution. Their 1U box is “dumb as a rock” (quoting the company here), with no RAID or any other gimmicks; it’s just a data path. It has redundant power supplies and four Ethernet-to-SAS adapters, with two 12Gb four-lane SAS interfaces and two 10Gbit Ethernet interfaces per adapter. Licensing is per node, per drive, per year. DriveScale isn’t pitching this as a cost-saving solution, aiming instead to be cost-neutral with the initial purchase. The benefit comes on the upgrade cycle, when you’d otherwise be forced to buy redundant servers on top of storage.
On the DriveScale Adapter itself, everything is hot swappable, so any hardware failure doesn’t mean a total replacement of the adapter. In an upcoming feature, the company will offer 10x SSD storage on the adapter itself to let you portion out that storage for fast caching purposes. The drive slots are present on the existing device, but the caching functionality is in the works, and no date for it was given at the time. They also plan to allow booting from the flash drives, to enable diskless servers at the top of the rack, although again this is still a work in progress.
So once these JBODs are set up, how do you manage it all? Their management center dashboard gives a topology of the JBOD configuration, with a view of health for each drive. DriveScale defines any carved-out group of storage as a cluster, essentially just a group of hard drives for a task. From there, you can configure how much CPU, memory, and how many drives to give the cluster. A cluster is a recipe, and the defined hardware are the ingredients. The management center handles everything needed to create the cluster from the requirements. The adapter and software work together to do this; the process is hidden from the user, who only sees the results. Once the cluster is created, the user can see what the actual drives are for maintenance and expansion purposes.
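To make the recipe/ingredients analogy concrete, here’s a minimal sketch of what a cluster definition might look like. The field names and structure are my own invention for illustration, not DriveScale’s actual interface: a recipe states requirements, and the management layer picks concrete drives from the shared pool to satisfy them.

```python
# Hypothetical sketch of the "cluster as recipe" idea. These names are
# illustrative only -- not DriveScale's real API. The recipe declares
# what the cluster needs; the management center selects the actual
# servers and JBOD drives (the "ingredients") behind the scenes.

recipe = {
    "name": "hadoop-analytics",
    "nodes": 8,                 # compute nodes in the cluster
    "cpu_cores_per_node": 16,   # CPU allocated per node
    "memory_gb_per_node": 128,  # memory allocated per node
    "drives_per_node": 6,       # JBOD drives attached per node
}

def drives_required(recipe):
    """Total drives the management layer must carve out of the pool."""
    return recipe["nodes"] * recipe["drives_per_node"]

print(drives_required(recipe))  # 48 drives pulled from the shared pool
```

The point of the model is that none of these numbers are fixed at purchase time; the same pool of hardware can be re-carved into a different recipe later.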
The biggest advantage I see is what DriveScale is heavily pitching: flexibility. In your typical server scenario, you’re pretty bound by your initial purchase decision; your storage has only so much CPU and memory per drive. Yeah, you can do some upgrades or swap out units, but often that just changes the inflexibility rather than introducing a flexible solution. DriveScale offers a really intriguing opportunity: the ability to dynamically change what your cluster is when you need it, to easily throw more CPU at your cluster to meet whatever your performance and QoS needs are, without having to buy any additional servers. As those needs change over time, you can reconfigure the cluster again within the same hardware.
DriveScale has an interesting solution. They’ve got an essentially complete, if not exactly mature, product, with an interesting roadmap of features, especially when their adapter can use onboard flash storage. DriveScale should be a fun company to watch, especially if they leverage the compute they’re adding to their drives, and get enough adoption to derive meaningful insights for future setups.
I said at the top that the experience on display at DriveScale initially felt intimidating. Turns out I was wrong. Not that I shouldn’t have felt intimidated; everyone who spoke was clearly smarter than I’ll ever be. Even in a multiverse, I think in 99.999% of all given realities that doesn’t change. But really, their product strategy is pretty straightforward:
- Ethernet-connected hard drives in a JBOD.
- Connect them to an adapter as a data path, with some software secret sauce to make it easy to manage.