So hopefully we all agree that EFDs have a place in the storage infrastructure of the future, but we also have to ask ourselves what that infrastructure is going to look like. If we read some of the press releases and commentary around Fusion-IO, we would probably believe that the SAN was on the way out, or indeed that shared storage in general was going to die.
Some of the figures are impressive; an unnamed company believed that it was losing 15% of its potential web business due to storage timeouts and slow array response.
That's a huge amount of business to be losing to a slow array, but I wonder how true it is. Was it really due to the end-to-end slowness of the system? Was it due to non-optimised SQL? I've seen SQL queries tuned down from 300 accesses to half a dozen with a couple of hours' work. Did they blame the storage because the storage team was the one team who couldn't give a transparent view of their environment?
Storage is often a great diagnostic tool; just looking at the I/O profile can lead to interesting questions. If you see weird I/O ratios which step way outside the normal profile for an OLTP application, it can be an indicator of sub-optimal code. But to do that, you need tools which present the information in a quick and easily digestible manner.
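To make that concrete, here is a minimal sketch of the kind of check such a tool might run: compare each host's read/write ratio against an expected OLTP baseline and flag the ones that step outside it. The hostnames, sample counters, baseline, and tolerance below are all illustrative assumptions, not data from any real array.

```python
# Hypothetical sketch: flag hosts whose read/write ratio steps far
# outside the baseline profile we would expect for an OLTP workload.
# All figures and thresholds here are invented for illustration.

def io_ratio(reads, writes):
    """Read/write ratio; treat a write-free interval as all reads."""
    return reads / writes if writes else float("inf")

def flag_outliers(samples, baseline=2.0, tolerance=1.5):
    """Return (host, ratio) pairs whose ratio deviates from the
    baseline by more than a multiplicative tolerance either way."""
    outliers = []
    for host, reads, writes in samples:
        ratio = io_ratio(reads, writes)
        if ratio > baseline * tolerance or ratio < baseline / tolerance:
            outliers.append((host, round(ratio, 2)))
    return outliers

samples = [
    ("db01", 8_000, 4_000),   # ratio 2.0  -- within the OLTP baseline
    ("db02", 45_000, 3_000),  # ratio 15.0 -- suspiciously read-heavy
    ("app01", 1_000, 5_000),  # ratio 0.2  -- suspiciously write-heavy
]

print(flag_outliers(samples))  # → [('db02', 15.0), ('app01', 0.2)]
```

A read-heavy spike like db02's is exactly the sort of thing that might point at an unindexed query doing table scans rather than at the array itself, which is the conversation the right tooling lets you start.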
At the moment, it is all too easy to blame the storage because the management tools are not great and the estate becomes very opaque to the outside viewer. If we had the right tools, we could become a crack team of dysfunctional diagnosticians like House and his team, and people would come to us saying, "We know it's not a storage problem, but perhaps you can help us identify what is going on in our infrastructure."
That’d be a great step forward, don’t you think?