Earlier this month, Texas Memory Systems announced they had acquired the intellectual assets of Incipient, a company that produced SAN virtualisation hardware and software. With Incipient gone and EMC hardly bothering to mention Invista, what is the future of SAN LUN virtualisation?
I talked about Incipient last year, here and here, when discussing the costs of performing migrations. As I said at the time, I couldn’t see how much of a saving deploying their iNSP would bring to the burdensome migration work we all have to manage on an ongoing basis. So there has to be a more compelling benefit out there for using virtualisation products. If there is, then what is it?
Excluding the defunct Invista, that leaves Hitachi with Universal Volume Manager (UVM) and IBM with SAN Volume Controller (SVC) still in the marketplace. From experience, I know UVM is a great product and, surprise, I’ve commented on that recently too, especially here, where I reference the fact that Hitachi are offering UVM for free. Clearly, the drawback to UVM is that it is integrated into the array itself. When the NSC55 first came out, I heard rumours that it might be a diskless virtualisation “head” and, although it can be deployed in that way, it isn’t sold as such. If Hitachi decided to offer the USP VM or its successor as a diskless virtualisation controller, it would put them squarely in competition with SVC from IBM.
Earlier this year I was fortunate to have an invitation to meet Barry Whyte, “Master Inventor” and Performance Architect on the SVC product. You can find Barry’s blog here if you’re not already subscribed to it. I highly recommend it, especially for understanding the ins and outs of the SVC itself. During my trip I got to see some of the hardware used to do interoperability testing of SVC – with the storage it virtualises as well as the servers it connects to. It’s by no means a trivial task; there are 80 people in Hursley alone working on development and testing of the product, as well as a further 64 scattered around the globe. Obviously, virtualising storage is a complex business and requires huge amounts of testing. I’d go as far as to suggest that the testing takes far more cycles than writing the code itself.
What’s all this got to do with the future of virtualisation? Well, I think it highlights what a complex process it is. Even though standards for interoperability exist, IBM (and presumably Hitachi, EMC and, at one time, Incipient) have to deal with complex interoperability issues and interleave that work with additional features and functionality, whilst guaranteeing data integrity. The slide below, taken from an SVC presentation deck, gives you an idea of what’s involved. Thanks to Barry for permission to reproduce it.
Both Hitachi and IBM have been successful with virtualisation products that don’t sit within the SAN fabric itself. This seems counter-intuitive to me, as I’ve always thought the fabric was the right place for virtualisation. After all, every I/O leaving a host hits the fabric first, which naturally makes it the best place to route the I/O to its final destination, whether that is a “real” LUN or one created by a virtualisation product.
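To make that routing idea concrete, here is a minimal, purely illustrative sketch of the mapping job any virtualisation layer performs, wherever it happens to live. It is not based on any vendor’s implementation; the class, array and LUN names are invented for the example. The point is simply that a virtual LUN is a map from virtual extents to back-end array LUNs, and each incoming I/O gets redirected through that map.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Extent:
    """A contiguous slice of a back-end LUN that backs part of a virtual LUN."""
    array: str      # back-end array name (illustrative only)
    lun: int        # LUN number on that array
    base_lba: int   # starting block address on the back-end LUN
    blocks: int     # length of the extent in blocks

class VirtualLun:
    """Toy virtual LUN built by concatenating extents from one or more arrays."""

    def __init__(self, extents: List[Extent]):
        self.extents = extents

    def route(self, virtual_lba: int) -> Tuple[str, int, int]:
        """Translate a virtual block address into (array, lun, physical LBA)."""
        offset = virtual_lba
        for ext in self.extents:
            if offset < ext.blocks:
                return ext.array, ext.lun, ext.base_lba + offset
            offset -= ext.blocks
        raise ValueError("virtual LBA is beyond the end of the virtual LUN")

# A hypothetical virtual LUN spanning two back-end arrays.
vlun = VirtualLun([
    Extent(array="array-A", lun=12, base_lba=0,       blocks=1_000_000),
    Extent(array="array-B", lun=7,  base_lba=500_000, blocks=2_000_000),
])

# An I/O to virtual block 1,500,000 lands on array-B, LUN 7, block 1,000,000.
print(vlun.route(1_500_000))
```

Of course, the table lookup is the trivial part. As the testing effort at Hursley suggests, the real work is doing this in-band, at wire speed, with caching, copy services and failure handling layered on top, across every combination of host, switch and array firmware.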
Perhaps SAN fabric virtualisation was simply too complex and costly to deploy. After all, recent history has told us that paying for a fabric-based virtualisation product is a non-starter, otherwise we’d have seen more Invista and iNSP deployments. Perhaps fabric-based virtualisation didn’t provide the feature set that mature IT organisations required from the technology. Either way, virtualisation in the fabric needs a rethink. Maybe FCoE provides, or provided, that opportunity?