Visualization is becoming increasingly important in enterprise IT. The rise of countless virtual machines, and even more innumerable containers, requires tools beyond raw numbers. What was perhaps initially thought of as a convenience has quickly become a necessity. This is because while the capacity of a given organization's architecture can often be easily calculated, it is frequently impossible to intuit at that scale given human limitations. So effective visualization is a vital need.
But visualization requires choices, and those choices influence how the end result is interpreted. A visualization is, after all, a metaphor for your data. If you choose a big red octagon as the metaphor for “everything is running well,” you'll probably end up confusing a lot more than helping. So simply visualizing isn't enough.
Generally, the metaphorical currency used in products isn't spelled out. They appeal to cultural norms (red/yellow/green signaling), or are simply reductive. But Turbonomic (the company formerly known as VMTurbo) wears its central framing metaphor on its sleeve.
Looking at Turbonomic’s dashboard, you might be fooled into thinking they offer an infrastructure monitoring solution. It’s replete with graphs, timelines, and diagrams that would not look out of place in such a product. That’s because it is doing some monitoring, if begrudgingly. But that’s not the point. Instead, their approach is economic: their solution is built around supply and demand.
Let’s get into the specifics. Turbonomic’s solution is an application assurance platform. Their focus is always on how the overall system will impact application performance. Each application is defined by a desired state. It’s a conscious choice of phrase, and one I much prefer over the ambiguity of “health.” Because the actual state of a given application will always fluctuate with changing conditions, reactive human intervention is inadequate to maintain a desired state. Instead, Turbonomic intervenes autonomously and proactively to achieve it.
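To make “desired state” concrete, here’s a minimal sketch of the idea in Python. The names and thresholds (DesiredState, within_desired_state, the specific metrics) are my own illustration, not Turbonomic’s API: an application is in its desired state only while every observed metric satisfies an explicit target, which is what makes the condition something automation can check continuously.

```python
# Hypothetical sketch: "desired state" as a concrete target, not a vague
# "health" score. All names and thresholds here are illustrative.
from dataclasses import dataclass

@dataclass
class DesiredState:
    max_response_ms: float   # latency ceiling the app must stay under
    max_cpu_util: float      # utilization ceiling, 0.0-1.0
    min_free_mem_gb: float   # memory headroom floor

def within_desired_state(measured_ms: float, cpu_util: float,
                         free_mem_gb: float, target: DesiredState) -> bool:
    """True only while every observed metric satisfies its target."""
    return (measured_ms <= target.max_response_ms
            and cpu_util <= target.max_cpu_util
            and free_mem_gb >= target.min_free_mem_gb)

# The actual state fluctuates constantly, so a control loop (not a human)
# compares observations against the target and triggers remediation.
target = DesiredState(max_response_ms=200, max_cpu_util=0.7, min_free_mem_gb=4)
print(within_desired_state(150, 0.65, 8, target))   # True: in desired state
print(within_desired_state(150, 0.92, 8, target))   # False: CPU over target
```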
The metaphor for their automation is a virtual supply chain, which combines all the resources that could impact application performance. Both on-premises and available cloud resources can be taken into account, covering the full spectrum from entirely on-premises to all-cloud. This isn’t just a simple flow chart listing how each element is doing, either. It can be changed dynamically, either by an administrator or through automation.
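As a rough mental model (entirely my own, with hypothetical entity names), a virtual supply chain can be pictured as a dependency graph: each entity consumes resources from the layer beneath it, from application down to physical infrastructure.

```python
# Toy dependency graph standing in for a virtual supply chain.
# Entity names are invented for illustration.
supply_chain = {
    "app-checkout":    {"consumes_from": "vm-web-01"},
    "vm-web-01":       {"consumes_from": "host-a"},
    "host-a":          {"consumes_from": "datacenter-east"},
    "datacenter-east": {"consumes_from": None},
}

def chain_for(entity: str) -> list[str]:
    """Walk the chain from an application down to its physical supply."""
    path = []
    while entity is not None:
        path.append(entity)
        entity = supply_chain[entity]["consumes_from"]
    return path

print(" -> ".join(chain_for("app-checkout")))
# app-checkout -> vm-web-01 -> host-a -> datacenter-east
```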
To call back to the supply and demand metaphor, the virtual supply chain speaks for itself. The real innovation, at least in framing, is quantifying what demand means in enterprise infrastructure. This ties back to the concept of desired state, which is what drives demand for resources from the supply chain. Generally speaking, quantifying supply is the easier side to wrap your head around. But framing application performance as an economic model forces you to define the demand side just as carefully.
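To illustrate the economics, here’s a toy market-style model that assumes nothing about Turbonomic’s actual engine: each resource is a commodity whose price climbs steeply as it nears capacity, so demand (a workload needing more CPU) naturally flows to the cheapest adequate supplier.

```python
# Illustrative supply-and-demand sketch, not Turbonomic's actual engine.
def price(utilization: float) -> float:
    """Toy price curve: cheap when idle, skyrocketing near 100% capacity."""
    utilization = min(utilization, 0.999)  # guard against division by zero
    return 1.0 / (1.0 - utilization) ** 2

hosts = {"host-a": 0.85, "host-b": 0.40, "host-c": 0.60}  # CPU utilization

# Demand side: a workload needing 10% more CPU "buys" from whichever
# host would be cheapest after absorbing that new demand.
cheapest = min(hosts, key=lambda h: price(hosts[h] + 0.10))
print(cheapest)  # host-b: the lowest-priced supplier
```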
Clicking on any point in the supply chain gives you detailed information about each element: the number of threads in use, whether you’re meeting an SLA, and so on. From there, you can drill down into historical information, and even view suggested action items ordered by severity. Of course, if you really trust the system, you can opt to never see any of this. Thanks to autonomous application assurance, the system can be set up to remediate right away. I’m not sure how many organizations will opt for a completely hands-off approach, at least at first. But from what I’ve seen, the system is robust enough to make this possible down the line. To be clear, I believe this is more of a human trust issue than any technical hurdle.
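For flavor, here’s a hedged sketch of severity-ordered suggestions versus hands-off remediation; the action names, severity scale, and AUTO_REMEDIATE flag are all invented for illustration.

```python
# Invented action items; in spirit, suggestions sorted by severity.
actions = [
    {"action": "move VM web-02 to host-b",   "severity": 2},
    {"action": "resize db-01 vCPU 4 -> 8",   "severity": 3},
    {"action": "reclaim idle VM batch-07",   "severity": 1},
]

# Flip the flag and the same list becomes automatic remediation
# rather than a to-do list for an administrator.
AUTO_REMEDIATE = False
for item in sorted(actions, key=lambda a: a["severity"], reverse=True):
    if AUTO_REMEDIATE:
        print(f"executing: {item['action']}")
    else:
        print(f"suggested: {item['action']} (severity {item['severity']})")
```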
The automation available is also multifaceted. While the focus is always on application assurance in terms of performance, there is some wiggle room within that criterion. Say, for example, your application performance is consistently fine. Using the policy tools, you can make the supply chain license- and compliance-aware. The implication is that the system can consolidate workloads onto fewer software licenses while still meeting any compliance requirements. Since this extends to cloud resources, Turbonomic also has built-in functions to show potential cost savings, weighing the one-time capital expenditure of on-premises resources against the monthly cost of bringing additional cloud infrastructure online.
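As a back-of-the-envelope illustration of that comparison (the figures and the simple amortization model are my assumptions, not Turbonomic’s calculator), amortizing a one-time capital expenditure over a hardware lifetime puts it on the same monthly footing as cloud pricing.

```python
# Toy break-even comparison; all numbers are made up for illustration.
def on_prem_monthly(capex: float, lifetime_months: int,
                    monthly_opex: float) -> float:
    """Amortize a one-time capital expenditure over hardware lifetime."""
    return capex / lifetime_months + monthly_opex

def cloud_monthly(instance_rate: float, instances: int) -> float:
    """Straight monthly cost of additional cloud infrastructure."""
    return instance_rate * instances

on_prem = on_prem_monthly(capex=50_000, lifetime_months=36, monthly_opex=400)
cloud = cloud_monthly(instance_rate=250, instances=8)
print(f"on-prem: ${on_prem:,.0f}/mo  cloud: ${cloud:,.0f}/mo")
print("cheaper:", "on-prem" if on_prem < cloud else "cloud")
```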
That’s the benefit of Turbonomic’s focus on the application: it forces the entire solution to be holistic. Simple monitoring can get bogged down in being comprehensive, even for items that aren’t actionable. Turbonomic instead gathers only what can be acted upon to affect the application. This gives the solution a broader scope across the infrastructure and a narrower focus at the same time.
It’s an interesting idea that a framing metaphor can end up being a product differentiator, but I think the implementation of a supply and demand model for application performance is exactly that. It turns monitoring from a service into a byproduct of automation. It forces organizations to quantify not just available resources, but also desired outcomes. As more and more automation creeps into the data center, that may be the most important part of Turbonomic’s solution.
To see some demo time with Turbonomic, check out their presentation from Tech Field Day.