Virtualization of server, network, and storage services eliminates the fixed link between physical resources and functional applications. A running virtual machine can instantly move from one server, network adapter, HBA, or LUN to another. And when that happens, traditional components have no idea how to react.
The time has come to take sides on the core question of storage for virtual servers: Do you want storage intelligence to live in the hypervisor or the array? Most administrators are already lining up on one side or the other, unintentionally casting their vote while the rest flounder. But the storage industry must wake up and embrace the divide.
PCIe SSDs like Micron’s new P320h offer mind-bending performance and enterprise-class reliability. Although expensive, these devices are in an entirely different league from any other storage option. Micron promises to bring the PCIe P320h to market at nearly $15 per gigabyte, a substantial discount compared to other PCIe SSD competitors.
I’ve got a new video podcast up and running: Raising the Floor is a series of discussions about the future of enterprise IT. I kicked the series off talking about one of my favorite topics: Cloud storage. It was a pretty broad discussion, all packed into less than half an hour, but I wanted to share a few excerpts.
What happens in the telephone game is that a little bit of information gets lost at each step along the path, and by the end of the chain you’ve basically lost all of it. The same thing happens all the time in computers, especially in data storage. Thin reclamation is the core technical challenge of thin provisioning, and the telephone game is the reason why.
Last week, after the Exec Event in Palo Alto, I joined my friend W. Curtis Preston for his first Backup Central Live! event. Curtis has spent years educating IT pros about data protection, and this was the first week of a new series of self-produced events. And let me tell you, although I’ve seen him present dozens of times, Curtis was really in his element here. He held the packed room enthralled, and the vendor sponsors I talked to were very pleased with the event!
One of the biggest problems with thin provisioning is not the provisioning part. It’s fairly easy for a storage array to allocate on request: the host says, “I need a block; here’s some data I want you to write,” and the array just keeps allocating, and allocating. But the operating system never goes back and says, “I don’t need that block anymore.”
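To make that reclamation gap concrete, here is a minimal sketch of a thin pool that allocates on first write but never shrinks, because the guest frees blocks only in its own tables. The class and method names are hypothetical, not any vendor’s API.

```python
# Purely illustrative: a toy thin-provisioned pool. Names are hypothetical.

class ThinPool:
    """Allocates a backing block the first time a logical block is written."""
    def __init__(self):
        self.backing = {}                        # logical block -> physical block

    def write(self, lba, data):
        if lba not in self.backing:
            self.backing[lba] = len(self.backing)   # allocate on demand
        # ...store data at self.backing[lba]...

class Guest:
    """The OS view: it frees blocks in its own tables but tells no one below."""
    def __init__(self, pool):
        self.pool = pool
        self.in_use = set()

    def create_file(self, lbas):
        for lba in lbas:
            self.pool.write(lba, b"x")
            self.in_use.add(lba)

    def delete_file(self, lbas):
        self.in_use -= set(lbas)                 # no call down to the pool

pool = ThinPool()
guest = Guest(pool)
guest.create_file(range(4096))
guest.delete_file(range(4096))
print(len(guest.in_use), len(pool.backing))      # 0 in the guest, 4096 still allocated
```

Thin reclamation mechanisms like SCSI UNMAP and ATA TRIM exist precisely to add the missing “I don’t need that block anymore” message, so the pool’s map can finally shrink again.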
Why do we care about thin provisioning? Because storage is not getting cheaper. A disk costs about the same today as it did ten years ago; you just get a lot more capacity for the money – a lot more capacity! The fact that we have terrible utilization of enterprise storage resources really isn’t helping us, and it’s not getting any better. It hasn’t improved because most shops are still “doing storage” the same way they always have.
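A bit of back-of-the-envelope arithmetic shows why that utilization number matters; every figure below is hypothetical, just to illustrate the scale of the waste.

```python
# Back-of-the-envelope only; every number here is hypothetical.
provisioned_tb = 500          # capacity purchased and carved up into LUNs
actually_used_tb = 150        # data the applications have really written
cost_per_tb = 2000            # fully loaded cost per terabyte, in dollars

utilization = actually_used_tb / provisioned_tb
wasted_spend = (provisioned_tb - actually_used_tb) * cost_per_tb

print(f"utilization: {utilization:.0%}")                     # 30%
print(f"capacity bought but never used: ${wasted_spend:,}")  # $700,000
```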
One of the amusing aspects of being self-employed is watching all the giants battle it out. Every company is gunning for someone, but the amazing thing is that they rarely have each other in their sights: NetApp is gunning for EMC, who’s more focused on HP, who wants to knock off Oracle, who’s fixated on IBM. It sounds very “high school romance,” but this is deadly serious business.
Remember what it was like to drive without a GPS? Sure, it’s possible, but a good GPS takes it to a whole new level. Need gas? A Denny’s Grand Slam? A detour around traffic? You’ve got it! And when the kids start asking “how much longer,” you have a precise answer! Old-school server metrics are like the gauges in your car: They show what’s happening now and can be useful to the driver, but a lot of questions are left unanswered. This is where application performance monitoring comes in: Rather than just checking server stats, APM gives credible, actionable, and user-focused answers about the state of your systems.
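As a rough illustration of the difference, here is a small Python sketch contrasting a “car gauge” metric with a user-focused one. The URL, the SLO threshold, and the function names are all hypothetical, and the load-average read is Linux-specific.

```python
# Illustrative contrast between a point-in-time server gauge and an
# APM-style, user-focused measurement. URL and threshold are hypothetical.
import time
import urllib.request

def server_gauge():
    """Old-school metric: what is the system doing right now? (Linux-specific)"""
    with open("/proc/loadavg") as f:
        return float(f.read().split()[0])

def user_experience(url="https://example.com/checkout", slo_seconds=2.0):
    """APM-style question: did the user's request actually feel fast?"""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    elapsed = time.monotonic() - start
    return elapsed, elapsed <= slo_seconds

print("load average:", server_gauge())
latency, ok = user_experience()
print(f"checkout latency: {latency:.2f}s ({'within' if ok else 'missed'} SLO)")
```

The gauge tells the driver something, but only the second number tells you whether the passengers are happy.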