Despite the inevitable EMC spin, I found myself nodding in agreement with this blog entry from Barry Burke. Wide-striping is now just another feature; a very important one, but just another feature nonetheless.
3Par took wide-striping and made it usable; EMC's historic implementation using metas and hypers was painful, and with today's large arrays it becomes a full-time job to performance-manage an array. 3Par made it easy, and much kudos to them for doing so. I think 3Par's legacy will be the ease of management that they have brought to the Enterprise array (and thin provisioning).
I think it is worth pointing out to Barry that you can simply use wide-striping without thin provisioning on a 3Par box as well; LUNs do not need to be thin-provisioned and can be entirely pre-allocated.
Automated wide-striping simply makes the storage admin's job easier; it de-skills it somewhat, and hopefully it will bring an end to the endless poring over spreadsheets trying to balance workloads.
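To make the idea concrete, here is a minimal sketch of what wide-striping does under the covers: every fixed-size extent of a LUN is spread round-robin across every disk in the array, so the load balances itself. The names (`stripe_lun`, the extent sizes) are hypothetical illustrations, not any vendor's actual implementation.

```python
from collections import Counter

def stripe_lun(lun_size_mb, extent_mb, disk_count):
    """Map each fixed-size extent of a LUN to a disk, round-robin.

    Returns a dict of extent index -> disk index. Because extents are
    dealt out like cards, every disk carries a near-equal share of the
    LUN, and no admin has to place anything by hand.
    """
    extents = lun_size_mb // extent_mb
    return {e: e % disk_count for e in range(extents)}

# A 1 GB LUN in 64 MB extents over 8 disks: 16 extents, 2 per disk.
layout = stripe_lun(lun_size_mb=1024, extent_mb=64, disk_count=8)
per_disk = Counter(layout.values())
```

The point of the sketch is the contrast with the old meta/hyper approach: the balancing falls out of the mapping itself, rather than out of a spreadsheet.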
SSDs will become just another feature with time as well; Barry wants this to be the case, as it validates the decision to put SSDs into the DMX. Any good idea will eventually become just another feature if it is good enough; even if it is patented, people will find their way round it eventually.
As Barry points out, SSDs deliver massively increased IOPS and massively decreased response times; we need this, we desperately need this, for some applications. Even if the magnetic-disk manufacturers could get their disks to spin faster, the increase in power and cooling required would boil the oceans and hasten our demise as a race.
But until SSDs achieve per-gigabyte price parity with spinning disk, we need to find ways to use efficiently what is still a relatively expensive resource, and SSDs are probably not the best fit for your file-serving and bulk-storage requirements. The venerable AnandTech actually demonstrates with their benchmarks that using SSDs for log files may not gain you much. It's an interesting, if slightly flawed, look at SSDs, SATA and SAS; it would have been more interesting if they'd done more work and gone more granular into the database tables.
SSDs need to be used for appropriate workloads, and ideally we need something like this from Compellent. Unfortunately for Compellent, I have a horrid suspicion that this level of automated tiered storage will simply become another feature; you can't keep a good idea down! Once we have block-level migration, both automatic and rules-based, we can work out quickly and easily how much SSD we need.
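A rules-based tiering engine of the kind described above can be sketched very simply: rank blocks by observed IOPS and promote the hottest ones to flash until the SSD tier is full. This is a toy illustration with made-up names and heat counters, not Compellent's algorithm; but it shows why, once you have per-block heat data, sizing the SSD tier becomes a straightforward calculation.

```python
def place_blocks(block_iops, ssd_capacity_blocks):
    """Partition blocks between SSD and HDD tiers by heat.

    block_iops: dict of block id -> observed IOPS (the heat map).
    The hottest blocks, up to the SSD tier's capacity, go to flash;
    everything else stays on spinning disk.
    """
    ranked = sorted(block_iops, key=block_iops.get, reverse=True)
    ssd_tier = set(ranked[:ssd_capacity_blocks])
    hdd_tier = set(ranked[ssd_capacity_blocks:])
    return ssd_tier, hdd_tier

# Hypothetical heat map: two hot blocks, two cold ones, room for
# only two blocks on flash.
heat = {"b0": 5000, "b1": 12, "b2": 8700, "b3": 40}
ssd_tier, hdd_tier = place_blocks(heat, ssd_capacity_blocks=2)
```

The same ranked heat map also answers the sizing question: walk down it until you have covered, say, 95% of total IOPS, and that is how much SSD you need to buy.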
SSDs with automated tiering will save you money, probably in terms of both TCA and TCO. SSDs without automated tiering will save you money in terms of TCA for appropriate workloads, but may end up costing you in terms of TCO because of the work needed to identify, balance and move data around.
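The TCA-versus-TCO distinction can be shown with a back-of-the-envelope calculation. All the figures below are invented placeholders, not real vendor prices; the shape of the arithmetic is the point, not the numbers.

```python
def total_cost(capacity_gb, price_per_gb, admin_hours, hourly_rate):
    """TCO = acquisition cost (TCA) plus ongoing admin labour."""
    tca = capacity_gb * price_per_gb
    labour = admin_hours * hourly_rate
    return tca + labour

# Same SSD purchase either way (placeholder prices); what differs is
# the labour to identify, balance and move data by hand.
with_tiering = total_cost(500, 10.0, admin_hours=20, hourly_rate=80)
without_tiering = total_cost(500, 10.0, admin_hours=200, hourly_rate=80)
```

The identical TCA term in both lines is the trap: the array looks equally cheap on the purchase order, and the difference only shows up in the labour term over time.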
Of course, it all comes down to the cost of the software needed to manage and automate the process. If the software is too expensive and the vendors simply try to milk it as a new cash cow, we'll not realise the savings of this brave new world.