Pete Koehler of vmPete.com comments:
I love a good benchmark as much as the next guy. But success in the datacenter is not solely predicated on the results of a synthetic benchmark, especially one that does not reflect a real workload. This was the primary motivation for upgrading my production environment to FVP 2.0 as quickly as possible. After plenty of testing in the lab, I wanted to see how the new and improved features of FVP 2.0 affected a production workload. The easiest way to do this is to sit back and watch, then share some screenshots.
All of the images below are from my production code-compiling machines, captured at random points of the day. The workloads always vary somewhat, so take them as "observational differences" rather than benchmark results. Also note that these are far busier than the typical VM: the code-compiling VMs often hit the triple crown in the "difficult to design for" department.
Pete shows how much real-world workloads matter when testing storage. Iometer alone can't give you the truth of the matter. You need to test with real data in an environment as close to production as you can get.