I’ve joked that if you’re ever confused by a new technology in IT but want to look like you know what you’re talking about, just say it looks like it has potential, but reference how someone tried the exact same thing in the mid-90s. More often than not, whoever you’re talking to will fill in the gaps as you laugh nervously at your own ignorance. Gina Rosenthal’s post on the history of virtualization and containers largely bears out this premise.
HP stumbled mightily in 2011, and it had nothing to do with product or people. Even sales remained strong, though the PC business is changing. HP’s mighty stumble was a crisis of confidence due to a chain of shenanigans at the very top. This culminated with the short reign of Léo Apotheker, leaving HP to reassure the market of its strategy.
For a massive IT company, Dell sure doesn’t get the kind of respect given to its competitors. Time and again, I’ll hear the sneers about Dell being little more than a “box shifter” who doesn’t “get” real enterprise IT needs. After a series of acquisitions in storage and networking, Dell is trying to stake a claim as a serious competitor to HP, IBM, Oracle, and the like. But why should anyone take Dell seriously, especially in enterprise storage?
The time has come to take sides on the core question of storage for virtual servers: Do you want storage intelligence to live in the hypervisor or the array? Most administrators are already lining up on one side or the other, casting their vote without realizing it, while the rest flounder. But the storage industry must wake up and embrace the divide.
I’ve got a new video podcast up and running: Raising the Floor is a series of discussions about the future of enterprise IT. I kicked the series off talking about one of my favorite topics: Cloud storage. It was a pretty broad discussion, all packed into less than half an hour, but I wanted to share a few excerpts.
Why do we care about thin provisioning? Because storage is not getting cheaper. If you bought a disk ten years ago, you spent about the same as you would today, but today’s purchase delivers far more capacity – a lot more capacity! The fact that we have terrible utilization of enterprise storage resources is really not helping us, and it’s not getting any better. It hasn’t improved because organizations are still “doing storage” the same way they always have.
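The utilization point is easy to see with a little arithmetic. The sketch below (my own illustration, not from the post; all volume sizes are hypothetical) compares thick provisioning, where every volume is carved out at its full requested size, with thin provisioning, where physical capacity is consumed only as data is actually written:

```python
def utilization(used_tb, allocated_tb):
    """Fraction of allocated capacity that actually holds data."""
    return used_tb / allocated_tb

# Hypothetical volumes: what each application asked for vs. what it wrote.
requested = [2.0, 4.0, 1.0, 3.0]   # TB allocated per volume (thick)
used      = [0.5, 1.2, 0.2, 0.9]   # TB actually written

thick_allocated = sum(requested)   # 10.0 TB carved out up front
thin_allocated  = sum(used)        # ~2.8 TB physically consumed

print(f"thick utilization: {utilization(sum(used), thick_allocated):.0%}")
print(f"thin  utilization: {utilization(sum(used), thin_allocated):.0%}")
```

With these made-up numbers, thick provisioning strands more than 7 TB of purchased capacity behind over-sized allocations (28% utilization), while thin provisioning defers that spend until the data actually arrives.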
One of the amusing aspects of being self-employed is watching all the giants battle it out. Every company is gunning for someone, but the amazing thing is that they rarely have each other in their sights: NetApp is gunning for EMC who’s more focused on HP who wants to knock off Oracle who’s fixated on IBM. It sounds very “high school romance” but this is deadly-serious business.
I’m an IT revolutionary. I talk all the time about the quaint, backwards “state of the art” in enterprise IT, what with its (many) decades-old protocols, paradigms, and practices. What we call modern is really just a charade of faked-out old-fashioned open-systems infrastructure: Pretend servers talking to fake disks over Frankenstein networking technology.
Change is not a word normally associated with storage, and revolution is practically unheard of. Today’s modern enterprise storage systems and networks employ massive resources to do one simple thing: Emulate the basic hard disk drives used over three decades ago. But cracks are appearing in our mausoleum of fake disks: Application developers are discovering the value of object storage, and storage systems are appearing to support this need.
Overland Storage is showing intriguing signs of life. Once relegated to OEM tape library duty, Overland received an injection of cash and (more importantly) talent this year. Now the company is stepping up the technology behind their SnapServer NAS array by acquiring scale-out file storage company MaxiScale. They intend to bring the scalable capacity and performance normally associated with enterprise and high-performance computing systems to the mass market.