Block Storage Virtualization

For my first posting I really want to talk about block storage virtualization. I think that 2008 will be the year that people start to roll this out in production in a serious way. Why? It’s the money, stupid!

Yes, that’s right: with the economy getting tight, I suspect that IT budgets, even those for storage, are going to get slashed. So how are storage managers going to do more with less? You don’t think that along with the budget cuts there will also be a reduction in the growth of storage and data, do you? Of course not! The business will simply expect the storage team to do more with less, that’s all. Simple, really, don’t you think?

What this means is that storage managers are going to be looking for a way to drive the per-GB cost of storage down even further. For many, I think the answer will be block storage virtualization.

Why? Well, I think there are a couple of answers to that. First off, one direct way to reduce CAPEX will be to drive down the cost of the arrays themselves. How? Easy: more competition. If I virtualize the storage, then the array becomes even more of a commodity than it is today, thus driving down the price. It’s basic economics, really. The more vendors I allow to bid on my next 100TB storage purchase, the lower the price per GB should be, right?
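To put some toy numbers behind that bidding argument, here’s a quick sketch of the per-GB math. The dollar figures are entirely made up for illustration, not real 2008 street prices:

```python
# Illustrative per-GB cost math for a storage purchase.
# All bid amounts below are hypothetical, not actual vendor pricing.

def price_per_gb(total_price_usd, capacity_tb):
    """Return the effective cost per GB for a storage purchase."""
    return total_price_usd / (capacity_tb * 1024)  # 1 TB = 1024 GB

# Two hypothetical bids on the same 100 TB purchase:
single_vendor_bid = 900_000   # locked-in incumbent, modest discount
competitive_bid = 650_000     # commodity array once virtualization opens the field

print(f"Single vendor: ${price_per_gb(single_vendor_bid, 100):.2f}/GB")
print(f"Competitive:   ${price_per_gb(competitive_bid, 100):.2f}/GB")
```

The point isn’t the specific numbers; it’s that once the array is a commodity behind the virtualization layer, every bid gets measured on the same per-GB yardstick.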

Also, if the real “smarts” are in the virtualization controller, then I don’t need them in the disk array, so I can save money on licensing the software in the array. I no longer need to buy replication software from each storage vendor; I have a single replication mechanism, probably in the virtualization controller itself. More on this in a later post; I think it’s going to have a huge impact on the storage vendors going forward.

I also think that I can achieve some OPEX savings by having more efficient operations and fewer outages. Think about it: if all of my storage admins work with a single tool for provisioning, replication, etc., then I have more people with the same skill set, all working in the same interface. That’s got to be more efficient and less error-prone than having a couple of folks who know the HDS stuff well, a couple more who know the EMC stuff well, and so on.

I had this option before by just buying all of my storage from a single vendor; the trouble with that approach was that I also had vendor “lock-in”. The vendor knew that they had me by the short hairs. Where this really showed up was not in the per-GB price of my storage, or my storage software. I mean, anyone with two brain cells to rub together knows that if you are going to get everything from a single vendor, you had better lock in your discount up front, and it had better be big. But trust me, the vendors made up for those big discounts via the things you didn’t have them locked in on: Professional Services, for example. At any rate, virtualization gets me out from under all of that, and makes provisioning something that anyone on the team can do at any time, following the exact same processes and procedures. You have to believe that will have a positive effect on your OPEX costs.

So, if 2008 is the year of block storage virtualization, what about file virtualization? We all still do NAS, right? More on that next time.

–joerg

About the author

Joerg Hallbauer

I am a long-time data center denizen who currently focuses on storage and storage-related issues. I have worked on both sides of the fence, for vendors as well as being the guy who had to implement what they sold me. Finally, I've managed teams of UNIX, Windows, and Storage admins.
