Storage quality of service (QoS) is a very hot topic in today’s market. As the performance of storage arrays increases rapidly, ensuring consistent performance becomes increasingly important. In multitenant environments, guaranteeing a minimum level of service for customers is crucial to the survival of the business.
Why then is QoS so hard with storage? We’ve solved the service-level problem in other areas before, so what makes storage so special? First, storage has always had one speed – fast. With traditional protocols and spinning media, the only option was to make it go as quickly as possible. Performance from rotational media has been difficult to predict because of variables like seek times and cache hit rates, and if you can’t reliably predict your performance, you can’t even begin to restrict it.
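To make that concrete, here is a toy Monte Carlo sketch of rotational-disk read latency. Every number in it – the cache hit rate, the seek times, the rotational speed – is an illustrative assumption, not a measurement of any real drive:

```python
# Toy Monte Carlo of rotational-disk read latency. Every number here is an
# illustrative assumption, not a measurement of any real drive.
import random

def disk_read_latency_ms():
    """One simulated read: a cache hit is fast, a miss pays seek + rotation."""
    if random.random() < 0.30:           # assume a 30% cache hit rate
        return 0.1                       # served from the drive's DRAM cache
    seek = random.uniform(1.0, 15.0)     # seek time depends on head travel
    rotation = random.uniform(0.0, 8.3)  # up to one full turn at 7200 RPM
    return seek + rotation

samples = sorted(disk_read_latency_ms() for _ in range(100_000))
print(f"median: {samples[50_000]:.1f} ms, "
      f"99.9th percentile: {samples[99_900]:.1f} ms")
# Typical run: cache hits at 0.1 ms, a median near 9-10 ms, and a tail
# past 20 ms. With a spread that wide there is no stable baseline from
# which to carve out a guarantee.
```

That spread, not raw speed, is what made QoS intractable on spinning media.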
Flash disks have solved the predictability problem. With high-speed PCIe SSDs integrated into servers and arrays, we can now see the performance curve flatten at the high end. We can now be certain that a given read or write operation will return within a predictable number of microseconds 99.999% of the time. Does that mean flash has eliminated the need for QoS?
Flash is still built with the same idea in mind – as fast as possible. Now, with incredible IOPS available from a given storage device, it becomes entirely possible to saturate the uplinks connecting the PCIe device to the rest of the system. Instead of removing the need for QoS, flash has exposed the shortcomings that make it a necessity. As more and more PCIe flash devices are brought online, we need a way to restrict performance as needed, and to predict it, so that the link between the storage device and the rest of the system is used efficiently.
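Some rough arithmetic shows how little headroom there is. The IOPS figure, block size, and link speed below are assumed round numbers, not the specs of any particular device:

```python
# Back-of-the-envelope uplink math with assumed round numbers, not the
# specs of any particular product.
iops = 1_000_000                # small random reads from one PCIe flash device
block_size = 4096               # 4 KiB per I/O
device_bps = iops * block_size  # bytes/sec the device can generate
link_bps = 3.94e9               # usable bandwidth of a PCIe Gen3 x4 link

print(f"device: {device_bps / 1e9:.2f} GB/s, link: {link_bps / 1e9:.2f} GB/s")
# 4.10 GB/s of demand against a 3.94 GB/s link: one device can oversubscribe
# its own uplink, so some I/O has to queue, and QoS decides whose.
```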
What’s the answer? Software controls? Hardware restrictions? Bigger caches? Or something else entirely? Join Gestalt IT and Coho Data as we dive into the technology behind storage QoS and why it is so crucial to today’s high-performance compute environments. Our tech talk will be held Wednesday, March 26 at 10 a.m. PST on Google Hangouts. Feel free to join us and listen to our special guests Howard Marks, Ray Lucchesi, Bob McCouch, and Andy Warfield as we discuss the difficulty of storage QoS and how best to address the needs of this growing area of technology.
@HPStorageGuy Calvin Zito here. It boils down to two things that are hard to get right: software controls (which can’t be overly complex) and hardware that can handle the mixed-workload demands of today’s virtualized environments. Software controls alone are useless if your storage infrastructure can’t handle the load – and often when customers want QoS, they want multitenancy. The Register has a good summary of Gartner’s “Critical capabilities for general purpose midrange storage arrays” – it doesn’t address QoS, but I think it does hit some of the other things that are important. http://regmedia.co.uk/2014/03/25/gartner_cc_rating_900.jpg
If you want to read the full story that Chris Mellor wrote, find it here: http://www.theregister.co.uk/2014/03/25/gartner_midrange_array_juice/
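For a sense of what the “software controls” half of that comment can look like, here is a minimal token-bucket IOPS limiter. It is a generic sketch of the technique only – the class name, limits, and per-tenant setup are invented for illustration, and this is not how any vendor’s QoS (3PAR Priority Optimization included) is actually implemented:

```python
# Minimal token-bucket IOPS limiter: a generic sketch of a per-tenant
# software control, not any vendor's actual implementation.
import time

class IopsLimiter:
    def __init__(self, iops_limit, burst):
        self.rate = iops_limit          # tokens (I/Os) replenished per second
        self.capacity = burst           # most tokens that can accumulate
        self.tokens = float(burst)
        self.last = time.monotonic()

    def admit(self):
        """Return True if one I/O may proceed now, False to defer it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per tenant: A is capped at 10,000 IOPS, B at 2,000.
limits = {"tenant_a": IopsLimiter(10_000, burst=500),
          "tenant_b": IopsLimiter(2_000, burst=100)}

# Demo: hammer tenant B's bucket for one second and count admissions.
admitted, start = 0, time.monotonic()
while time.monotonic() - start < 1.0:
    if limits["tenant_b"].admit():
        admitted += 1
print(f"tenant_b admitted ~{admitted} I/Os in 1s")  # ~2,100: the cap plus burst
```

Even in a toy like this, the hard parts show up quickly: choosing burst sizes, deciding what to do with deferred I/O, and keeping the accounting cheap enough that the control doesn’t become its own bottleneck.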
I didn’t crash the party today because I’m sure you focused on the sponsor, but I dare say that HP 3PAR has figured this out. We just announced an update to HP 3PAR Priority Optimization (our QoS software), and I did a ChalkTalk covering what’s new: http://youtu.be/NwFI5bt-V00
I’ll have a blog post diving deeper soon, since there’s a nearly finished white paper that goes deep into Priority Optimization.