No matter whether your applications run on-prem or in the cloud; no matter whether they are microservices or monoliths; no matter whether they run on physical, virtual or containerized infrastructure: customer satisfaction is key for businesses.
Digital Experience Creates Higher Demand on IT
With many organizations moving towards digital goods and services, and maintaining customer contact through digital channels, IT systems have become paramount to the customer experience. The link between the applications IT runs and the customer has never been stronger.
While Service Level Agreements (SLAs) may be in place, actually making sure they are met is the harder problem to solve. A customer doesn't know or care about high latency, an oversaturated network link, a failed host or a Distributed Denial-of-Service attack; they only notice that the service is slow or unavailable.
Applications are becoming increasingly complex as features and functionality expand continuously. Applications no longer run in isolation; more often than not, they are interconnected with third-party services for payment, messaging (SMS, e-mail) and other external tooling. This makes for a lot of noise across the local network, datacenter interconnects, the WAN and the internet.
Add in the layers of the software-defined datacenter, such as the hypervisor and the network virtualization layer, and maintaining full visibility into what is running where, in what state, and what is causing any issues is no longer simple.
Running more of the business and the customer experience through digital channels, more complicated applications due to added features and functionality, and a layered approach in the datacenter all mean that triaging and finding the root cause of any breach of the Service Level Agreement has become increasingly difficult.
Humans need technology to maintain visibility
It has become so difficult that humans, both individually and especially across teams, need to be augmented with technology to keep track of the complexity: correlating the many objects, spread across the datacenter and beyond, that make up a single piece of business functionality with all of their observed behavior, performance, errors and status.
Humans need technology that creates order, interprets data, infers root causes and recommends actions to resolve them.
Previous generations of monitoring technology were aimed at a subset of the full stack, like server monitoring or network monitoring. While correlating data across everything that makes up an application is not new, the scope at which this needs to be done keeps expanding with the vast supply of third-party services, public cloud vendors and their offerings.
This means that choosing the right technology is crucial for keeping the customer experience up. The wrong choice creates unexpected and unwanted blind spots. Choosing the right technology for the here and now is comparatively easy; choosing a solution that extends into new application architectures, new computing environments and the public cloud is not.
Begin at the Foundation
But let’s begin at the foundation. For many organizations, this means starting out with a solution that gains visibility into where most of their current assets run: their datacenter. Modern software-defined datacenters, like a VMware SDDC, luckily have the tools and pluggability to grab telemetry data from the (virtualized) network and hypervisor layers and send it to the visibility solution for processing and correlation.
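To make that a little more concrete, here is a minimal sketch of reading basic per-VM telemetry from the hypervisor layer via the vCenter API using the open-source pyVmomi library. The hostname, credentials and the print statement are placeholders for illustration; in practice the visibility vendor's own collector, not a hand-rolled script, would ship these numbers off for correlation.

```python
# Minimal sketch: read per-VM CPU/memory telemetry from vCenter with pyVmomi.
# Hostname and credentials are placeholders; a production setup would rely on
# the visibility vendor's collector instead of a hand-rolled poller.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only; use proper certs in production
si = SmartConnect(host="vcenter.example.local", user="readonly@vsphere.local",
                  pwd="********", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        stats = vm.summary.quickStats
        # In a real pipeline these values would be forwarded to the visibility
        # solution for correlation instead of printed.
        print(vm.name, "cpu MHz:", stats.overallCpuUsage,
              "guest mem MB:", stats.guestMemoryUsage)
    view.Destroy()
finally:
    Disconnect(si)
```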
VMware NSX and vCenter are a great example of how simple this can be. With just a few clicks, a solution like NETSCOUT’s vSTREAM NSX SVM (service virtual machine) can be added to the environment and start collecting data based on the extensive NSX policy framework. This gives administrators unparalleled control over where to gain visibility first, without any overhead on the existing virtual machines, enabling seamless application visibility instantly.
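The policy-driven scoping can be pictured with the sketch below: a tag-based group defined through the NSX-T Policy API, which monitoring or service-insertion rules can then target so visibility starts where it matters most. The manager address, credentials, tag and group name are illustrative assumptions, and the actual vSTREAM NSX SVM onboarding happens through the NSX/vCenter workflow described above rather than a script like this.

```python
# Illustrative sketch: define a tag-based NSX policy group so that visibility
# (e.g. a service VM capturing traffic) can be scoped to selected workloads first.
# Manager address, credentials, tag and group ID are placeholders.
import requests

NSX = "https://nsx-manager.example.local"
AUTH = ("admin", "********")

group = {
    "display_name": "monitor-web-tier",
    "expression": [{
        "resource_type": "Condition",
        "member_type": "VirtualMachine",
        "key": "Tag",
        "operator": "EQUALS",
        "value": "visibility|web-tier",   # scope|tag of the VMs to watch first
    }],
}

# PATCH the group into the default domain of the policy tree.
resp = requests.patch(
    f"{NSX}/policy/api/v1/infra/domains/default/groups/monitor-web-tier",
    json=group, auth=AUTH, verify=False)  # verify=False for lab certs only
resp.raise_for_status()
print("Group created/updated:", resp.status_code)
```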
And with VMware’s reach across the private datacenter, as well as into the public cloud with VMware Cloud on AWS, which includes NSX, this level of visibility carries over from the on-prem environment into the public cloud. Having consistent metrics across all of these environments is a key enabler for migrating applications to and across clouds, something we’ll dive into in a future blog post.
Dependencies matter
One key blind spot that solutions like vSTREAM NSX SVM eliminate maps to one of the features IT admins miss most in their day-to-day operations: application dependency visualization and mapping. By analyzing and recognizing different types of data flows, vSTREAM NSX SVM can determine which services, like a database, or components, like a webserver, are running in the environment and make up an application.
Knowing what type of data flows between components, which virtual machine hosts which part of the application, and what the underlying dependencies are is crucial for situational awareness and for analyzing what is normal application behavior and what is not.
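In greatly simplified form, the idea behind flow-based dependency mapping looks something like the sketch below: classify observed flows by their server-side port and fold them into a dependency graph. The sample flow records and the port-to-role table are invented for illustration and are nowhere near what a production engine such as vSTREAM does with full packet and protocol analysis.

```python
# Simplified illustration of flow-based dependency mapping: classify flows by
# server port and build an application dependency graph. The flow records and
# port-to-role mapping below are invented sample data.
from collections import defaultdict

PORT_ROLES = {443: "webserver", 8080: "app-server", 3306: "database", 5672: "message-queue"}

flows = [  # (source VM, destination VM, destination port)
    ("lb-01", "web-01", 443),
    ("web-01", "app-01", 8080),
    ("app-01", "db-01", 3306),
    ("app-01", "mq-01", 5672),
]

dependencies = defaultdict(set)
roles = {}
for src, dst, port in flows:
    roles[dst] = PORT_ROLES.get(port, "unknown")
    dependencies[src].add(dst)

# Print the inferred map: which VM depends on which, and what role each plays.
for src, dsts in dependencies.items():
    for dst in sorted(dsts):
        print(f"{src} -> {dst} ({roles[dst]})")
```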
Automatically determining cause and effect is a huge time saver and allows the right team to be involved in additional triage and mitigation more quickly, leading to a better Service Level and a happier customer.
Circling Back
And this, circling back to the opening of this blog post, is why visibility into IT systems is key to the customer experience. The two are becoming intertwined at an increasing rate; having a good view of ever more complex IT systems is the only way to keep customers happy, keep the IT systems healthy, and keep them geared for transformation into the cloud.