In my previous post in this Tech Talk series, I alluded to the difficulties of deploying a traditional n-tier application in a hybrid enterprise model. In the comments on that post, Steve Riley also mentioned that such deployments are rare, or rarely successful, due in part to latency. Now that I’ve written about the outward differences between local and hybrid-cloud infrastructures and provided some examples of common deployments, I’d like to explore in this post exactly what makes deploying “traditional” enterprise applications in a hybrid enterprise model so challenging, and mention some tools and techniques for dealing with those challenges.
To start with, there are many challenges to successful hybrid IT deployment using clouds. Security, cost models, and management tools are just a few of the hurdles that need to be negotiated. But I’m looking to explore more fundamental issues related to hybrid application deployment. Essentially, there are two key issues at the network layer that make hybrid deployments challenging: bandwidth and latency. These challenges are as old as data networking itself, but they are often the forgotten pieces of the performance puzzle, and they can present low-level roadblocks to successful hybrid application deployment. Let’s examine each.
Bandwidth: Applications, particularly custom-built enterprise apps, are often built by developers working on local LAN segments who implicitly assume infinite bandwidth. If WAN conditions are not simulated during development and testing, users may be in for quite a surprise when running a client/server app over a bandwidth-constrained Internet connection. I’ve seen this happen over and over with custom enterprise applications. Beyond client-server bandwidth needs, the bandwidth required to maintain synchronous database state or to replicate rapidly changing files between application instances can also be surprising.
Latency: Network latency is the application’s silent killer, and it comes in several flavors. Serialization delay is the time it takes to encode bits of data onto a physical link. Because it is a direct function of bandwidth, it is easily reduced by provisioning bigger pipes.
Propagation delay is the time it takes for a packet to travel from source to destination. Even accounting for the Internet’s indirect routing, the limiting factor is typically sheer distance: photons in an optical fiber travel at a fixed fraction of the speed of light, and we simply can’t speed them up. Propagation delay seems small, often just a few tens of milliseconds. The problem is that for poorly architected apps this delay is multiplied tens or hundreds of times over by the constant back-and-forth of client requests and server responses, as illustrated below:
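To get a feel for the scale, here is a rough back-of-the-envelope calculation. The fiber speed and the example distance are approximations of my own, not figures from any particular provider:

```python
# Rough estimate of propagation delay over optical fiber.
# Light in fiber travels at roughly two-thirds of c, about 200,000 km/s.
FIBER_SPEED_KM_PER_S = 200_000

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds for a fiber path of the given length."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1000

# A ~5,600 km path (roughly New York to London as the cable runs):
one_way_ms = propagation_delay_ms(5600)  # about 28 ms one way
rtt_ms = 2 * one_way_ms                  # about 56 ms round trip, before queuing or routing detours
```

Note that this is a floor, not an estimate of real-world round-trip time: queuing, routing detours, and the cryptographic delay discussed below all add to it.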
As you can see, each round trip is just 80 ms, but five round trips (often called “turns” in this type of analysis) add up to a 400 ms transaction. Fifty round trips would result in a 4-second transaction, even if the server responded instantly to each request. In home-grown enterprise apps where network conditions were never considered, I’ve seen individual transactions take hundreds or even thousands of turns. When that happens, small changes in round-trip time have a drastic impact on application performance! Some developers and IT engineers think these are only “thick client” problems, but feature-rich web-based applications can be just as chatty in the number of resources they request to load a page or execute a transaction.
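The arithmetic behind this turns analysis is simple enough to sketch. This is just the multiplication from the example above, packaged as a helper:

```python
def transaction_time_ms(turns: int, rtt_ms: float, server_time_ms: float = 0.0) -> float:
    """Minimum transaction time: every turn pays a full round trip,
    plus whatever time the server spends processing each request."""
    return turns * (rtt_ms + server_time_ms)

print(transaction_time_ms(5, 80))    # 400.0 ms, as in the example above
print(transaction_time_ms(50, 80))   # 4000.0 ms: a 4-second transaction
print(transaction_time_ms(500, 80))  # 40000.0 ms: 40 seconds, with zero server processing time
```

The key takeaway is that transaction time scales linearly with turn count, so a chatty protocol that is invisible on a 1 ms LAN becomes unusable over an 80 ms WAN path.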
A final type of delay seen in hybrid deployments is cryptographic delay, incurred when a VPN is used to connect the enterprise to the cloud provider. Encryption and decryption, even in hardware firewalls, can add a few milliseconds per packet, which effectively increases the host-to-host propagation delay.
These challenges of bandwidth and latency are not unique to cloud computing or hybrid IT deployments. However, they tend to be exacerbated by the nature of cloud deployments, where server resources sit “farther” away, at the other end of encrypted, bandwidth-constrained, higher-latency paths. For purely in-house enterprise application deployments, many of these factors haven’t needed careful consideration in years.
Addressing the Challenges
Now that we’ve examined some of the low-level network-based issues that challenge any distributed application deployment, it’s time to discuss ways to address them.
Networking Technology: One approach is network technology that helps overcome the bandwidth and delay issues. WAN acceleration, such as Riverbed’s SteelHead CX line, can help tremendously here by reducing bandwidth needs and mitigating the effects of latency through various WAN optimization techniques. This helps with the client-server interaction of thick clients as well as with data replication and database synchronization processes.
Latency can also be reduced with the help of IP geolocation and global server load balancing, which direct users or offices to the nearest cloud region to minimize propagation delay. WAN optimization and other networking solutions are sometimes the best, or even the only, way to help legacy applications that were poorly architected for operation over WAN-type network paths.
Smarter Application Deployment: By considering application architecture when deciding which components of an application should be colocated with one another, the most severe sources of compounded latency can be reduced.
In this example, locating the application server and database together in the enterprise datacenter allows the many turns required to build an application-layer response to occur with minimal latency and maximum bandwidth. Meanwhile, web front-end servers in the cloud can rapidly scale horizontally to accommodate changing load, take advantage of IP geolocation to give users a localized experience, and so on. Tools such as Riverbed’s SteelCentral Application Performance Management suite can help identify the application behaviors that should drive decisions in this area.
Smarter Application Development: One of the best ways to minimize the impact of bandwidth-constrained, higher-latency paths on application performance is to write the application so that it makes minimal assumptions about available network resources. Developers of applications that may run over hybrid infrastructure shouldn’t do all their development on LANs that offer effectively infinite bandwidth and near-zero latency. DevOps-style testing with WAN simulators during development helps application architects and developers build applications that work better over wide area networks, and therefore in a hybrid deployment. Better yet, when targeting a hybrid cloud deployment, developers should develop with parts of the application infrastructure actually running in the cloud. Either way, it is important to consider the intended deployment architecture when optimizing. As illustrated above, depending on how the application will be deployed, optimization efforts can easily be misdirected or wasted by tuning the wrong stage of the application for network-constrained operation.
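As a minimal sketch of the idea, injecting an artificial round-trip delay into a test client makes turn-heavy designs visible immediately. The function names here are hypothetical, and a real test environment would simulate the WAN at the network layer (for example with a link emulator) rather than sleeping in the client:

```python
import time

def with_simulated_wan(call, rtt_s: float):
    """Wrap a request function so every invocation pays a simulated WAN round trip."""
    def wrapped(*args, **kwargs):
        time.sleep(rtt_s)  # crude stand-in for propagation + serialization delay
        return call(*args, **kwargs)
    return wrapped

def fetch_row(row_id):  # hypothetical chatty data-access call
    return {"id": row_id}

# 80 ms simulated RTT, matching the earlier turns example
slow_fetch = with_simulated_wan(fetch_row, rtt_s=0.08)

start = time.monotonic()
rows = [slow_fetch(i) for i in range(5)]  # 5 turns: at least 400 ms total
elapsed = time.monotonic() - start
```

Running the same test suite with and without the wrapper quickly separates designs that batch their requests from those that pay a round trip per row.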
This article covered some of the basic challenges that make hybrid application deployment between on-premises IT and cloud infrastructure difficult to execute successfully, and offered several methods for dealing with those challenges and increasing the probability of a successful hybrid IT infrastructure. By keeping hybrid deployment models in mind and fostering collaboration between developers and IT operations, organizations can build applications optimized for hybrid deployment.