
Making The Cloud Feel Local

  1. Making The Cloud Feel Local
  2. Shadow IT as a Catalyst For Hybrid Enterprise
  3. Cloud Models for Hybrid Enterprise
  4. Hybrid Enterprise Is More Than Just Hybrid Cloud
  5. Networking Challenges to Hybrid Application Deployment
  6. Three Questions To Ask About Hybrid Enterprise
  7. Hybrid IT – Where To Go From Here
  8. Why Choose Hybrid Enterprise?

Recently, I had the opportunity to speak with Steve Riley, Technical Director in the Office of CTO at Riverbed Technology. Steve does a lot of forward thinking about technology, so it was fascinating listening to him opine about the direction of enterprise infrastructure. During the chat, Steve mentioned an area of technology and networking he’s been thinking a lot about lately, which is hybrid cloud architectures. He posed the question “How do we make the cloud feel local?”

On the surface, it seems like a straightforward question that surely must have a clear technical answer. After stewing on it for a while, I realized it's deeper than it appears at first blush, and the "answers" are themselves some big challenges for the IT industry to tackle.

Before we approach the "how" of making the cloud feel local, perhaps we need to examine the "why". What makes "local" inherently better for our applications? Is it better? After all, most of the in-house data centers I encounter, when compared to large, shared colocation facilities or the data centers of cloud providers, lack many things. The reality is that for most enterprises, maintaining a data center is considered an overhead cost rather than a value-add. As such, the in-house data center usually isn't as well equipped, staffed, or designed with the same level of redundancy, nor is it usually held to the procedural standards of a cloud-scale facility.

Indeed, “local” applications can often be associated with relatively low reliability, occasional data loss, and security hassles. This survey commissioned by Microsoft indicates that many SMBs who’ve gotten to the cloud realize it’s better than local. That’s not to say that we IT practitioners are all terrible at our jobs; it simply highlights that most enterprise IT environments are chronically understaffed, underfunded, and rarely equipped to deal with true catastrophes.

So why, then, do we like our apps to feel “local”? I think a lot of it is psychological. “Local” is a code word for “fast,” or more specifically, for not feeling like the thing we’re trying to do is artificially slower than we think it should be. The funny thing is, how are we to tell whether a transaction takes the amount of time it does because the service is running in a cloud instance 4,000 miles away, or because it’s running on an on-premises, severely overworked server? It’s about setting and meeting expectations, and being able to prove that performance is consistent with predictions.
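In practice, "proving that performance is consistent with predictions" usually means comparing measured latency against an agreed expectation (an SLO) rather than reasoning about where the service physically lives. Here is a minimal, hypothetical sketch of that idea in Python; the nearest-rank percentile method, the sample numbers, and the 150 ms threshold are all illustrative assumptions, not anything from Riverbed:

```python
# Hypothetical sketch: a service "feels local" when its measured response
# times stay inside an agreed expectation (an SLO), regardless of where
# it physically runs.

def percentile(samples, pct):
    """Return the pct-th percentile of latency samples using the
    nearest-rank method (one simple convention among several)."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

def meets_expectation(samples_ms, slo_ms, pct=95):
    """True if the pct-th percentile latency is within the agreed SLO."""
    return percentile(samples_ms, pct) <= slo_ms

# A cloud service 4,000 miles away can still "feel local" if its
# observed latency stays inside the expectation we set for it:
cloud_samples_ms = [120, 130, 118, 125, 140, 122, 119, 127, 133, 121]
print(meets_expectation(cloud_samples_ms, slo_ms=150))  # True
```

The point of the sketch is that the test is location-agnostic: the same check applies to an on-premises server, and an overworked local box can fail it just as easily as a distant cloud instance can pass it.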

Additionally, humans have been accustomed to manipulating and observing our world in a physical sense for, well, forever. We perceive that things we can reach out and touch, things that are close to us, are better. We feel more in control of them. Have you ever walked into the data center when the network, or application, or storage is down, just to stare at it — inspect the cabling and watch the blinking lights — while you consider possible causes or resolutions? I certainly have. But when our services run in a cloud, we feel (and are) physically disconnected from the infrastructure, which makes us feel insecure. Even when we are monitoring things electronically, the metrics used to measure cloud performance, availability, or errors are often hard to equate directly to the metrics or methods we use to monitor and troubleshoot those same things in local systems. Again, we tend to feel less in control because we can’t watch a switch port for error counters or try shutting off other systems to see if that fixes the mysterious behavior we’re battling.

So, how do we make the cloud feel local? It’s easy. We just have to eliminate any performance degradation versus the arbitrary performance expectations of local systems, demonstrably prove that we’ve done so, and reprogram countless generations of human nature with regard to observing, diagnosing, and repairing broken things. Clearly, we have some work ahead of us.

Ed Note: Be sure to check out Riverbed’s take on the Hybrid Enterprise from Ginna Raahague (@GinnaRaahague). Her post can be found on the Riverbed blog at http://www.riverbed.com/blogs/Enterprise-Architecture-for-the-Hybrid-Enterprise.html.


About The Author

Bob McCouch, CCIE #38296, is a networking consultant in Pennsylvania, USA, with over 10 years of industry experience. His blog can be found at http://HerdingPackets.net and followed on Twitter as @BobMcCouch.

This post is part of the Riverbed Hybrid Enterprise Sponsored Conversation series. For more information on this topic, please see the rest of the series HERE. To learn more about Riverbed’s Hybrid Enterprise Architecture, please visit http://Riverbed.com.


4 Comments

  • Agreed Bob – feeling “local” is all about perception. And feeling disconnected from the infrastructure can be very unsettling. Making a service running in a cloud feel local will need to overcome that ‘disconnected’ feeling.

    • One of the challenges is figuring out which tasks or components of a workload need this “local” feeling, and then finding the deployment model that makes sense. In some cases, a “follow-the-sun” approach may work best. In others, always-on global availability may be required — but this has its own challenges, especially with trying to keep possibly multiple back-end databases in sync. A third option might be keeping as many resources as possible in one particular cloud region and “projecting” applications to wherever the users may be. Here, storage feels local but the actual source of truth is the original cloud region. Having multiple options to consider is a good thing — because you can allow the usage characteristics of the workload to help guide the right deployment choice, rather than the other way around.

      • I agree that it’s a big challenge, especially when most enterprises have limited resources devoted to IT architecture and each of the deployment models you mention can require distinct technologies and skill sets. Using the wrong model or trying to do all models when they’re not necessary will result in failed cloud initiatives.

        Are there any standard guidelines or decision trees that can help an enterprise make those decisions? Do transactional web apps suggest one deployment type while fat-client database apps dictate another? How is an enterprise IT architect to make the best decisions amid these new technologies and design options?
