As large organizations begin to look towards cloud computing, many find themselves questioning the suitability of the infrastructure for their business needs. As consumer-focused services like Carbonite lose data and startup-focused systems like Amazon EC2 and Microsoft Azure suffer outages, the image of the cloud has darkened. How are providers protecting the data? What RTO and RPO is offered? Are these sufficient for the types of applications being considered for the cloud?
Reuven Cohen of Enomaly has penned an Open Cloud Manifesto. This might not have been news but for a curious backlash when two big cloud vendors, Microsoft and Amazon, refused to sign on, although IBM, Sun, and many others have endorsed it. In my opinion, the Open Cloud Manifesto is interesting, forward-thinking, provocative, and a bit naive.
Virtualization is seen as the technology that makes it possible to do more with less, but there are many pitfalls to consider when virtualizing server infrastructure. This article suggests planning decisions that, if overlooked, could ruin the total cost of ownership (TCO) and return on investment (ROI) expected from a virtual infrastructure.
Although there is no word on a Fibre Channel over Ethernet (FCoE) initiator in the vein of their wildly successful iSCSI offering, Microsoft announced today that they will be creating a logo program with test requirements for the new protocol. This certification program will likely follow the company’s similar work with iSCSI, Fibre Channel, and other products to ensure functionality, and perhaps interoperability, in the Windows market.
Along with many tidbits about storage advances in Windows Server 2008 and 2008 R2, this WinHEC presentation by Microsoft’s Suzanne Morgan demonstrated that the combination of the Windows iSCSI Initiator and NetApp FAS 3070 filer could saturate a 10 Gb Ethernet link. How many other storage arrays can do that?
The mainstream media is still digesting the Oscar awards, but we in storage had our own announcement this week: TechTarget’s (now non-PDF?) Storage magazine announced their Storage Products of the Year award for 2008. Without further ado, the awards and my reaction!
The third rail of enterprise IT is the shockingly opaque and flexible pricing schemes applied to hardware, software, and services. How much does a high-end switch or storage array cost? Are you getting ripped off on your maintenance contracts? Which bundled software modules are required and which are pure profit? You’ll get no help in answering these questions from mainstream sources like technical media, trade shows, or corporate blogs.
I’ve been talking about storage capacity utilization for my entire career, but the storage industry doesn’t seem to be getting anywhere. Every year or so, a new study is performed showing that half of storage capacity in the data center is unused. And every time there is a predictable (and poorly thought through) “networked storage is a waste of time” response. The good news is that this is no longer a technical problem: Modern virtualized and networked servers ought to have decent utilization of storage capacity, and technology is improving all the time.