Steve Duplessie is both right and wrong in his post on SSDs here!
He is right that simply sticking SSDs into an array and treating them as just Super Speedy Disk can cause yet more work and heartache! Concepts such as Tier 0 are just a nightmare to manage!
He is also right that the problem should be defined at a high level as the interaction between users and their data: getting them access to that data as quickly as possible.
He is also right that just fixing one part of the infrastructure and making it faster does not fix the whole problem. It just moves the problem around!
Unfortunately, whilst every other component in the infrastructure has got faster and faster, storage has arguably been getting slower! At a recent SNIA Academy event, it was suggested that if storage speeds had kept pace with the rest of the infrastructure, disks would now spin at 192,000 RPM. The ratio of capacity to IOPS gets less and less favourable every year; wide striping has helped mitigate the issue, but as disks get bigger we face a choice. Either we waste more and more capacity, because the falling IOPS density of a spindle means most of its capacity is only really suitable for data at rest, or we need a faster storage medium.
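To put some rough numbers on that: a spindle's random IOPS are bounded by its mechanics (seek time plus rotational latency), not by how much data it holds, so the IOPS per GB falls away as capacities grow. Here is a back-of-envelope sketch in Python; the drive figures are ballpark illustrations of the trend, not measured numbers:

```python
# Back-of-envelope only: a spindle's random IOPS are set by seek and
# rotational latency, not capacity, so IOPS density shrinks as drives grow.
drives = [
    # (label, capacity in GB, approximate random IOPS) -- illustrative figures
    ("146 GB 15K FC",    146, 180),
    ("300 GB 15K FC",    300, 180),
    ("1 TB 7.2K SATA",  1000,  75),
    ("2 TB 7.2K SATA",  2000,  75),
]

for label, capacity_gb, iops in drives:
    print(f"{label}: {iops / capacity_gb:.3f} IOPS per GB")
```

The 15K drive's IOPS density is already thin; double the capacity at the same rotational speed and it halves again, which is exactly why most of a big spindle ends up suited only to data at rest.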
But we probably don’t need a huge amount of this faster medium; a small sprinkling will go a long way. That’s why we need dynamic optimisation tools which move hot chunks of data about. SSDs will be good, but just treating them as old-fashioned LUNs might not be the best use of them.
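To make "moving hot chunks about" concrete, here is a minimal sketch of the sub-LUN tiering idea: count accesses per fixed-size chunk over a window, then keep only the hottest chunks on the small flash tier. It is purely illustrative; the chunk size, the FLASH_CHUNK_BUDGET and the function names are my own inventions, not any vendor's implementation:

```python
from collections import Counter

# Minimal sketch of sub-LUN "hot chunk" tiering. All names and sizes
# here are hypothetical, chosen only to illustrate the idea.
CHUNK_SIZE = 1 << 20          # 1 MiB chunks (illustrative granularity)
FLASH_CHUNK_BUDGET = 4        # how many chunks the small flash tier holds

access_counts = Counter()

def record_io(byte_offset: int) -> None:
    """Note one I/O against the chunk containing byte_offset."""
    access_counts[byte_offset // CHUNK_SIZE] += 1

def rebalance(current_flash: set) -> set:
    """Return the chunk ids that should now live on flash: the
    FLASH_CHUNK_BUDGET most-accessed chunks in this window."""
    hottest = {chunk for chunk, _ in access_counts.most_common(FLASH_CHUNK_BUDGET)}
    for chunk in hottest - current_flash:
        print(f"promote chunk {chunk} to flash")
    for chunk in current_flash - hottest:
        print(f"demote chunk {chunk} to disk")
    access_counts.clear()     # start a fresh observation window
    return hottest

# Example window: chunk 0 is hot, chunks 5 and 9 less so.
flash = set()
for offset in [0, 0, 0, 5 << 20, 5 << 20, 9 << 20]:
    record_io(offset)
flash = rebalance(flash)
```

The point of the sketch is that the flash tier stays small and the data placement does the work: nobody has to carve out a "Tier 0" LUN and decide by hand what lives on it.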
Automation is the answer, but I think Steve knows that! Dynamic optimisation of the infrastructure end-to-end is the Holy Grail; we are some way off that, I suspect! I’d just settle for reliable and efficient automation tools for Storage Management at this point.