
Innocence, Fairness, and Technology Benchmarks

HP recently commissioned the Tolly Group to benchmark its BladeSystem c7000 against the Cisco UCS 5100. The short report highlights two results and reads like so many competitive benchmarks in the IT industry: Tolly focuses on metrics that showcase the strengths of HP’s solution and the weaknesses of Cisco’s. I do not dispute the accuracy of these results, and HP and Tolly are doing exactly what tech companies do. But what’s the real value of narrowly focused, maximum-performance benchmarks like this?

0-100-0

Automotive media like Car and Driver and Top Gear frequently test the maximum performance of cars, racing to 100 mph or beyond, sliding around a skidpad, and slamming on the brakes. These tests can be enlightening when it comes to high-performance cars, and the punishing 0-100-0 test is especially impressive. But what’s the point of hammering an economy car or pickup truck like this? Maximal acceleration and cornering are entirely irrelevant to buyers of commuter cars and work vehicles.

Just because a test can be conducted does not mean it is enlightening. The Tolly report presents two key findings:

  1. Although 4-blade configurations perform the same under maximum stress, Cisco UCS performance declines with 6 blades while HP’s remains steady.
  2. When blades share an uplink, Cisco UCS performance falls by half.

These are not startling results. Cisco blades sometimes need to share one I/O channel, and this can’t match the performance of an HP blade with dedicated I/O. Would it shock you to learn that a one-gallon bucket requires twice as many trips to the well as one that holds two gallons? Does it shock anyone to learn that a V6-powered Toyota RAV4 accelerates more quickly than a four-cylinder Honda CR-V? HP’s c7000 is bigger than Cisco’s UCS and offers more I/O channels, so HP beats Cisco whenever larger configurations with more I/O are tested.
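To make the arithmetic behind that intuition concrete, here is a minimal sketch. The uplink speed and blade counts are assumptions for illustration, not figures from the Tolly report: at saturation, blades that share an uplink split its bandwidth, while blades with dedicated I/O each keep the full channel.

```python
# Back-of-the-envelope oversubscription math with assumed numbers (illustrative only).

def per_blade_throughput(uplink_gbps: float, blades_sharing: int) -> float:
    """Effective bandwidth per blade when several blades saturate one shared uplink."""
    return uplink_gbps / blades_sharing

UPLINK_GBPS = 10.0  # hypothetical 10 GbE uplink

print(per_blade_throughput(UPLINK_GBPS, 1))  # dedicated I/O: 10.0 Gb/s per blade
print(per_blade_throughput(UPLINK_GBPS, 2))  # two blades sharing: 5.0 Gb/s each --
                                             # the unsurprising "falls by half" result
```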

Innocent Benchmarks

Greta examines the marks on an 18th-century cooper’s bench

I’ll leave the deeper commentary on blade performance to experts like Kevin Houston and Martin Macleod, but these maximum-utilization benchmarks are only half the story. I’m much more interested in how the different approaches to I/O impact everyday (20%-40% load) performance and how oversubscription impacts performance as more blades are installed and workloads are moved around. In automotive terms, I’d like to know how well a car handles in the snow or how economical it is with three or four passengers. These real-world scenarios are much more telling than a test of a few blades under 100% load!
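For a rough sense of why everyday load matters, here is a hedged sketch of the scenario I have in mind. All of the numbers (line rate, uplink speed, load levels) are assumptions, not measurements: if each blade offers only 20%-40% of its line rate, a shared uplink may have plenty of headroom, and oversubscription only starts to bite once aggregate demand exceeds the uplink.

```python
# Illustrative model only: assumed line rate, uplink speed, and load levels.
# Aggregate demand is capped by the shared uplink; the cap is split evenly per blade.

def delivered_per_blade(line_rate_gbps, offered_load, blades, uplink_gbps):
    demand = line_rate_gbps * offered_load * blades   # total traffic the blades try to push
    delivered = min(demand, uplink_gbps)              # shared uplink is the bottleneck
    return delivered / blades                         # assume a fair split per blade

for blades in (2, 4, 6, 8):
    everyday = delivered_per_blade(10.0, 0.3, blades, uplink_gbps=20.0)  # ~30% load
    flat_out = delivered_per_blade(10.0, 1.0, blades, uplink_gbps=20.0)  # 100% load
    print(f"{blades} blades: {everyday:.1f} Gb/s each at 30% load, "
          f"{flat_out:.1f} Gb/s each at 100% load")
```

In this toy model the everyday numbers stay flat until the chassis is heavily populated, while the 100% figures drop as soon as more blades share the uplink; that gap is exactly what a maximum-stress benchmark hides.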

Clearly, HP wanted to call attention to specific shortcomings of a competitor’s product, and it was wise to do so with objective numbers instead of mudslinging and name-calling. I hope that future tests and releases include real-world workloads and logical configurations, not the extreme situation used in this report. The same lesson applies to all tech companies: Simple, objective tests of maximum performance are welcome, but customers need many more metrics!

Note: Along with 9 other independent bloggers, I attended HP’s Blades Tech Day in Houston on February 25 and 26. Most of my travel and living expenses were paid for by HP, and the company provided a small gift bag (pictured here).

About the author

Stephen Foskett

Stephen Foskett is an active participant in the world of enterprise information technology, currently focusing on enterprise storage, server virtualization, networking, and cloud computing. He organizes the popular Tech Field Day event series for Gestalt IT and runs Foskett Services. A long-time voice in the storage industry, Stephen has authored numerous articles for industry publications, and is a popular presenter at industry events. He can be found online at TechFieldDay.com, blog.FoskettS.net, and on Twitter at @SFoskett.
