Storage Shangri-La

Cloud Computing

I don’t know about you, but I’ve spent a lot of time reading about “Cloud Computing” lately. A lot of space has been devoted to the topic in the blogosphere, that’s for sure. Some people think it’s the “next big thing”; others say not on your life. But don’t worry, I’m not going to bore you with another prediction. Personally, I think the truth lies somewhere in the middle. By the end of this year, or the beginning of next, I think we will see some people adopting “Cloud Computing”, mostly in the SMB space. Enterprise customers will pretty much stick to their data centers, with a few exceptions for certain applications.

Ok, so now that I’ve bored you with a prediction after I said I wouldn’t, here’s why I did it. If I’m right, and enterprise customers do stick to their internal data centers, that raises the question: what are those data centers going to look like? How are these companies going to address the challenges that confront them all at once: an uncertain economy, and increasing demands on IT, and on storage in particular? For now, I’ll stick with the storage team, since I think they have a particularly difficult task. Data volumes continue to grow, no matter what is happening with the economy. Maybe those volumes won’t grow quite as fast as they did when things were booming, but they will continue to grow. This means that the challenges of increased capacity will keep confronting the storage team. What will be new is that they will have to address those challenges with fewer dollars. As I indicated in my last blog, entitled Storage Efficiency, that means an ever tighter focus on “storage efficiency” for most companies. But as I said, this can also present an opportunity for forward-thinking leaders to implement changes in IT, and in storage in particular, that will provide not only long-term cost savings but also better service to the business.

Everyone into the pool!

So, what is my vision for the storage team that will do these amazing things? It’s as simple as applying to storage what seems to be working for the server team: virtualization. Actually, it’s a bit more than that. It’s creating a pool of storage which can be managed as a single entity and delivered in different ways (NAS, SAN, FCoE, etc.), easily backed up, and protected with a proper DR solution. I realize that some of you reading this are saying “he’s talking about storage Shangri-La”! Well, maybe I am, but I think it’s something that today’s technology might just allow me to do. It won’t necessarily come from a single vendor, but I think it’s doable. It does mean some changes to the way organizations purchase storage, and to the kind of storage they purchase. It also means that some money will need to be expended in order to create that Shangri-La. It’s because of those expenditures that it’s going to take forward-looking leadership. The fearful and the visionless need not apply.
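To make the “single pool” idea a little more concrete, here is a minimal sketch, in Python, of heterogeneous arrays aggregated behind one management point, with volumes carved from the combined capacity rather than from any one vendor’s box. All of the class names, fields, and numbers are hypothetical and purely for illustration; this is not any vendor’s actual API.

```python
# A minimal sketch of the "single pool" idea: heterogeneous arrays are
# aggregated behind one management point, and volumes are carved from the
# combined capacity rather than from any one vendor's box.
# All names and numbers here are hypothetical, for illustration only.

from dataclasses import dataclass, field


@dataclass
class BackendArray:
    vendor: str
    tier: str          # e.g. "tier-1" (FC/SAS) or "tier-2" (SATA)
    capacity_gb: int
    used_gb: int = 0

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb


@dataclass
class StoragePool:
    arrays: list = field(default_factory=list)

    def add_array(self, array: BackendArray) -> None:
        self.arrays.append(array)

    def provision(self, size_gb: int, tier: str) -> BackendArray:
        """Place a new volume on any backend of the requested tier with room."""
        for array in self.arrays:
            if array.tier == tier and array.free_gb >= size_gb:
                array.used_gb += size_gb
                return array
        raise RuntimeError(f"No {tier} capacity left for a {size_gb} GB volume")


pool = StoragePool()
pool.add_array(BackendArray("VendorA", "tier-1", capacity_gb=10_000))
pool.add_array(BackendArray("VendorB", "tier-2", capacity_gb=50_000))
placed_on = pool.provision(500, "tier-2")
print(f"500 GB volume placed on {placed_on.vendor} ({placed_on.tier})")
```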

If you are going to use heterogeneous storage in your storage pool (and I think you should at least be able to), then you need some way to do things like SNAPs, replication, and DR which is not vendor-dependent. Personally, I think that the virtualization engine itself should provide those features, but you could use a third-party tool to perform those functions as well. The key point here is that you separate these functions from the storage array so that you aren’t dependent on what’s available from a single storage vendor, or a single storage vendor’s array, for this functionality. That is, unless you pick a storage vendor who provides virtualization in the array itself as your virtualization engine. For example, if you use the NetApp V-Series of virtualization engines, or the Hitachi USP or USP-VM, to perform your virtualization, those engines give you the ability to use the vendor’s tools for replication, etc. with many other vendors’ storage. The key is to find a virtualization engine which allows you to perform storage moves in a manner completely transparent to the hosts that consume that storage. This is important not only for reducing the impact of a change in storage vendor, for example, but also when you want to re-tier your data. We often take data for applications we consider borderline and put it on tier-1 storage just to be safe. Now we can put that data on tier-2 storage (SATA), and if the performance turns out not to be what we need, we can move it to tier-1 without any disruption to the application, saving the organization CAPEX as well as OPEX.
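Here is a small sketch of why that transparency matters: the host always addresses the same virtual volume, and only the mapping to the backing array changes when the data moves between tiers. The class and device names are made up for illustration; a real engine does the block copy online while I/O continues.

```python
# A sketch of why a virtualization engine makes re-tiering non-disruptive:
# the host always addresses the virtual volume, and only the mapping to the
# backing array changes. Names are hypothetical, for illustration only.

class VirtualVolume:
    def __init__(self, name: str, backend: str, tier: str):
        self.name = name          # identity the host sees -- never changes
        self.backend = backend    # physical array currently holding the data
        self.tier = tier

    def migrate(self, new_backend: str, new_tier: str) -> None:
        """Copy the data to the new backend, then flip the mapping.

        In a real engine the copy happens online while I/O continues; the
        host-visible identity (name, LUN, export path) stays the same.
        """
        print(f"copying {self.name}: {self.backend} -> {new_backend}")
        self.backend, self.tier = new_backend, new_tier


# Start a "borderline" application on cheaper tier-2 (SATA) storage...
vol = VirtualVolume("app01_data", backend="VendorB-SATA", tier="tier-2")

# ...and promote it to tier-1 later only if performance demands it.
vol.migrate("VendorA-FC", "tier-1")
print(vol.name, "now on", vol.backend, "- the host never saw a change")
```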

This also means that there would be a change over time in the kind of storage I would buy. I would prefer to buy storage arrays that have few of the features I describe above. Basically, just something that lets me configure different protection levels and presents the storage on more than one port so that I can provide some high availability. All this should lower the per-GB cost of the storage, and since I can use any vendor I want, my ability to negotiate price is enhanced. Again, more CAPEX savings.

Storage Delivery

Once we have this pool of disk available, we need to make sure that we can deliver this storage in different ways. We need to make sure that the storage network is flexible enough to deliver the storage using iSCSI, Fibre Channel, and NAS (NFS or CIFS). Again, if you can get this from a single source, like NetApp, that’s one way to go. However, if you go a different route with your virtualization engine, then you need to make sure that your NAS engines are gateways, not appliances, so that you can deliver any vendor’s storage out of the pool. The same is true for any storage consumers other than your application hosts. For example, if you want to do backup to disk using something like a Data Domain box, then, again, make sure that you are using their gateway so that you can utilize any kind of disk from the storage pool with your Data Domain solution.
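One way to picture the delivery layer is as a simple mapping: the same back-end pool is exported over several protocols by different front ends, with block served directly from the virtualization engine, file through NAS gateways, and backup through a dedup gateway. This is only an illustrative sketch; the names and groupings are my assumptions, not any product’s configuration format.

```python
# A sketch of the delivery layer: one back-end pool, several front ends.
# The structure and names are hypothetical, for illustration only.

delivery = {
    "block": {
        "protocols": ["FC", "iSCSI"],
        "front_end": "virtualization engine",
        "consumers": ["database hosts", "virtualized servers"],
    },
    "file": {
        "protocols": ["NFS", "CIFS"],
        "front_end": "NAS gateway (not an appliance with captive disk)",
        "consumers": ["file shares", "home directories"],
    },
    "backup": {
        "protocols": ["VTL", "NFS"],
        "front_end": "deduplication gateway",
        "consumers": ["backup servers"],
    },
}

for service, detail in delivery.items():
    print(f"{service:>6}: {', '.join(detail['protocols'])} via {detail['front_end']}")
```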

Backup and DR

Finally, backups and DR need to be addressed. As I mentioned above, these services need to be available in the pool regardless of the mix of storage vendors used. At some point you may still need to take things to tape, and that’s OK, as long as the tape management system you use plays well with the virtual disk pool you have created. More importantly, though, I recommend that daily backups be done to disk. The cost is within reason when you consider some of the deduplicating devices available today. This relegates tape to just an offsite (DR) role. You can even replicate some of these deduplicating devices, potentially eliminating tape entirely and saving yourself a lot of OPEX.
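As a back-of-the-envelope illustration of why the cost is “within reason”, here is a quick sizing sketch. The retention, change rate, and dedup ratio below are assumptions picked for the example, not measurements; plug in your own numbers.

```python
# Back-of-the-envelope sketch of backup-to-disk sizing with deduplication.
# Every number below is an assumption for illustration, not a measurement.

full_backup_tb = 50          # size of one full backup (assumption)
weekly_fulls_retained = 8    # keep 8 weeks of fulls (assumption)
daily_change_rate = 0.03     # ~3% of data changes per day (assumption)
dedup_ratio = 10             # 10:1 used as a planning figure (assumption)

# Logical data held on the backup target: the retained fulls plus the
# daily incrementals (6 per retained week) at the assumed change rate.
logical_tb = weekly_fulls_retained * full_backup_tb \
    + 6 * weekly_fulls_retained * full_backup_tb * daily_change_rate

# Physical disk actually purchased once deduplication is applied.
physical_tb = logical_tb / dedup_ratio

print(f"Logical data retained: {logical_tb:,.0f} TB")
print(f"Disk actually needed:  {physical_tb:,.0f} TB after {dedup_ratio}:1 dedup")
```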

Wrap-up

So, I really believe that now is the time, under the banner of cost savings, to introduce things like storage virtualization, backup to disk, SNAPs, etc. If you have forward-thinking leadership, they will recognize that the payback on those costs comes reasonably quickly, and when it’s done, the storage team’s ability to manage more storage, provision it more quickly, and reduce the cost of a managed GB will be greatly enhanced. It will also position the storage team to handle the onslaught of storage growth that we are going to see once the economy turns around.

GestaltIT

I just want to mention that this blog is now being syndicated on http://gestaltit.com/. I want to say what an honor it is for me to be associated with GestaltIT. Stephen and all the other authors are much better known and much smarter folks than I am, so I’m hoping to provide some content that doesn’t embarrass. Take a look if you get a chance; there’s some great stuff there.

–joerg

About the author

Joerg Hallbauer

I am a long-time data center denizen who currently focuses on storage and storage-related issues. I have worked on both sides of the fence, for vendors as well as being the guy who had to implement what they sold me. I've also managed teams of UNIX, Windows, and storage admins.