Investment Strategies and Virtualisation

I sat in a meeting today where the subject of how often you refresh your storage infrastructure came up. I know that many companies work on a three-to-five-year model, but we were discussing whether this should be extended to seven years and what needs to happen to make that possible.

There are a few reasons why we were coming to this conclusion. Firstly, spinning rust in the Enterprise is probably at its peak, and anything over the current maximum spindle size has limited use; that is, anything over a 1-2 terabyte drive is not especially useful for shared storage infrastructure. Please note, I say shared storage infrastructure!

Larger drives may still have a part to play in your archive tier, but even that is debatable. And if you look at most Enterprise end-user desktops, they often have rather small local drives. It is the home user, with their insatiable demand for storage, who really drives spindle sizes now.

We also know that the performance of spinning rust is probably not going to improve dramatically. So what does change? Well, we have the introduction of SSDs; flash wears out and the technology is still improving rapidly, so a four-to-five-year refresh cycle for that technology is probably sensible. And then there are the storage controllers themselves; these don't especially wear out, but technology does move on.

But current array designs mean that when we refresh, we are forced to refresh the lot. We are also pushed into refreshing by inflated maintenance costs. Let's be honest: most refreshes are justified by cost savings on the OpEx, i.e. maintenance. Even if I move to a virtualised infrastructure as espoused by HDS or IBM, these maintenance costs still mean it is often more attractive to refresh than to sweat the asset.
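To make that trade-off concrete, here's a minimal back-of-the-envelope sketch. All figures are hypothetical and purely illustrative, not vendor pricing; the point is simply how rising post-warranty maintenance can tip the sums in favour of a refresh:

```python
# Hypothetical figures, purely for illustration -- not real vendor pricing.
YEARS = 3  # horizon over which we compare the two options

# Option 1: sweat the asset. Post-warranty maintenance is steep and
# typically rises each year as the kit ages.
old_maintenance = 120_000  # annual maintenance on the legacy array
maintenance_uplift = 0.10  # assumed 10% year-on-year increase

sweat_cost = sum(old_maintenance * (1 + maintenance_uplift) ** y
                 for y in range(YEARS))

# Option 2: refresh. The capital cost lands up front, but the new
# array carries much cheaper (often bundled) maintenance.
new_capex = 250_000       # purchase price of the replacement array
new_maintenance = 20_000  # annual maintenance on the new kit

refresh_cost = new_capex + new_maintenance * YEARS

print(f"Sweat the asset: £{sweat_cost:,.0f}")   # £397,200
print(f"Refresh:         £{refresh_cost:,.0f}")  # £310,000
# With numbers like these, the inflated maintenance bill alone makes
# the refresh look like the cheaper option on paper.
```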

However, the current economic climate means that we are beginning to look more closely at the model of keeping things for longer and to examine our maintenance budgets very carefully: dropping maintenance for software which is stable and at a terminal release, and talking to third-party maintenance organisations which are much more willing to support legacy kit at a reasonable cost.

And we are considering strategies which enable us to continue to make use of kit for longer. Take VMware's announcements today, for example, which bring replication and thin provisioning up into the hypervisor layer. So, funnily enough, EMC have come round to external storage virtualisation; you just buy it from VMware as a software product.

It'll be interesting to see what other traditional storage functionality makes its way into the hypervisor, and at what point EMC realise that they are actually selling 'traditional' storage virtualisation as a software product, and at what point they become a software company.

Funny old world: as EMC slowly metamorphoses into a software butterfly selling storage virtualisation, Oracle becomes a hardware grub. And in the space of a week, EMC 'kill' DMX with V-MAX, then they kill V-MAX with vSphere. Now that's what I call progress!

About the author

Martin Glassborow