As large organizations begin to look towards cloud computing, many find themselves questioning the suitability of the infrastructure for their business needs. As consumer-focused services like Carbonite lose data and startup-focused systems like Amazon EC2 and Microsoft Azure suffer outages, the image of the cloud has darkened. How are providers protecting the data? What recovery time objective (RTO) and recovery point objective (RPO) are offered? Are these sufficient for the types of applications being considered for the cloud?
Cloud computing providers must address these issues, put the right systems in place, and then price their services properly if they are to succeed in the enterprise. All of the normal systems management disciplines must be included, whether by the public cloud provider or by the internal departments using it. The cloud, being by its nature an amorphous and fuzzy entity, will put greater demand on some of these disciplines: capacity planning and performance management, for example, become a degree more difficult with such an entity. The key challenge for cloud providers is not simply offering enterprise-grade governance but doing so while maintaining the pricing edge they hold over in-house infrastructure.
This is a major limiting factor to the acceptance of public clouds in the enterprise space. Until a public cloud provider's service includes customized SLAs that let customers match the RTO and RPO requirements of the applications running there, enterprise applications will not move to these services. It is likely that many of these issues will be resolved in the private cloud first, since private clouds will be run by IT professionals who are used to dealing with these issues and already have these governance systems in place.
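The matching exercise described above can be sketched in a few lines. This is a hypothetical illustration, not any real provider's published figures: the tier names, minute values, and the one-size-fits-all "generic cloud" SLA are all assumptions.

```python
# Hypothetical sketch: checking whether a provider's SLA can satisfy an
# application's recovery objectives. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class RecoverySLA:
    rto_minutes: float  # max tolerable downtime (Recovery Time Objective)
    rpo_minutes: float  # max tolerable data loss (Recovery Point Objective)

def sla_meets(app: RecoverySLA, provider: RecoverySLA) -> bool:
    """A provider SLA suffices only if it recovers at least as fast and
    loses no more data than the application can tolerate."""
    return (provider.rto_minutes <= app.rto_minutes
            and provider.rpo_minutes <= app.rpo_minutes)

# Illustrative enterprise application tiers (assumed values)
tier1_oltp = RecoverySLA(rto_minutes=15, rpo_minutes=1)        # e.g. order entry
tier3_reporting = RecoverySLA(rto_minutes=24 * 60, rpo_minutes=60)

# A single, non-customized SLA of the kind public clouds tend to offer
generic_cloud = RecoverySLA(rto_minutes=4 * 60, rpo_minutes=30)

print(sla_meets(tier1_oltp, generic_cloud))       # False
print(sla_meets(tier3_reporting, generic_cloud))  # True
```

The one-size-fits-all SLA covers the reporting tier but fails the transactional tier, which is exactly why customized SLAs matter for enterprise adoption.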
Consider the impact of the extreme flexibility promised by cloud computing. If it is possible to quickly set up new instances of an application, there will need to be a more rapid response to increases in capacity demand. The cloud could undo much of the good work currently being done in balancing virtual server performance, since the additional burst capacity required to meet point demands might not be the sort required in the long term.
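The reactive provisioning this paragraph describes can be sketched as a toy scaling loop. The thresholds, the workload trace, and the one-instance-per-step policy are all illustrative assumptions, not any particular platform's behavior:

```python
# Minimal sketch of a reactive scaling rule: add an instance when
# utilization runs hot, shed one when it runs cold. Thresholds and the
# demand trace are assumed values for illustration only.

def scale(instances, demand, capacity_per_instance=100,
          high_water=0.8, low_water=0.3):
    """Return the new instance count after one reactive scaling step."""
    utilization = demand / (instances * capacity_per_instance)
    if utilization > high_water:
        instances += 1
    elif utilization < low_water and instances > 1:
        instances -= 1
    return instances

# A short-lived spike in demand
demand_trace = [100, 400, 900, 900, 400, 100, 100]
instances = 2
history = []
for demand in demand_trace:
    instances = scale(instances, demand)
    history.append(instances)
print(history)  # [2, 3, 4, 5, 5, 4, 3]
```

Traced by hand, the policy lags the sharp burst (a demand of 900 is first met with only 400 units of capacity) and then carries the extra instances after the peak has passed: burst capacity is provisioned on a timescale, and of a kind, that does not match the long-term profile.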
It is all very well for companies to demonstrate the capability to provision on demand from a public cloud, but what guarantees are there that external cloud providers will even have the capacity to meet these demands? Peaks in many industries occur at the same time: consumer retail tends to peak around Christmas, and tax-year and company year-end peaks cluster together. External cloud providers may have to build in a very large amount of contingency capacity indeed!
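A back-of-envelope calculation shows why correlated peaks are so expensive for a shared provider. The customer count, baseline, and 5x seasonal multiplier below are assumed numbers chosen purely for illustration:

```python
# Illustrative contingency-capacity arithmetic: 50 retail customers,
# each with a baseline of 10 capacity units and a 5x seasonal peak.
customers = 50
baseline = 10
peak = 50  # 5x baseline, e.g. at Christmas

# If peaks were spread through the year, the provider could carry little
# more than the sum of baselines plus headroom for one customer peaking:
uncorrelated_capacity = customers * baseline + (peak - baseline)

# If every customer peaks at the same moment, the provider must carry
# the sum of the peaks:
correlated_capacity = customers * peak

print(uncorrelated_capacity)  # 540
print(correlated_capacity)    # 2500
print(correlated_capacity / uncorrelated_capacity)  # ~4.6x more capacity
```

Under these assumptions, simultaneous peaks force the provider to hold several times the capacity that staggered peaks would require, capacity that sits idle for most of the year and must be paid for somehow.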
If we borrow Nick Carr's analogies, the cloud brings with it the risk of computing blackouts and brownouts. What is the equivalent of a computing emergency generator?
- Governance And Peaks In The Cloud - March 31, 2009