In the server space, one of the biggest shifts was the server form factor: from tower to rack-mount to blades. But what makes a blade server, anyway? Let’s consider this for a moment as we watch another shift in progress.
What elements remain unresolved to make FCoE truly world-class? What should the vendors be prioritizing?
The next version of Microsoft Windows Server includes integrated data deduplication technology. Microsoft is positioning this as a boon for server virtualization and claims it has very little performance impact. But how exactly does Microsoft’s deduplication technology work?
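Microsoft hasn’t spelled out the internals here, but chunk-and-hash deduplication in general works by splitting data into chunks, hashing each one, and storing each unique chunk only once. A minimal sketch (the function names and fixed chunk size are my simplifications, not Microsoft’s design, which uses variable-size chunking):

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks for simplicity; real systems chunk on content boundaries

def deduplicate(files):
    """Store each unique chunk once; represent each file as an ordered list of chunk hashes."""
    store = {}      # chunk hash -> the single stored copy of that chunk
    manifests = {}  # filename -> ordered list of chunk hashes
    for name, data in files.items():
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            store.setdefault(h, chunk)  # only store the chunk if it hasn't been seen
            hashes.append(h)
        manifests[name] = hashes
    return store, manifests

def restore(name, store, manifests):
    """Rebuild a file's contents from its manifest."""
    return b"".join(store[h] for h in manifests[name])
```

Two identical 8 KB files collapse to a single stored 4 KB chunk, which is where the capacity savings come from; the performance question is the cost of hashing and the extra indirection on reads.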
What happens in the telephone game is that a little bit of information gets lost at each step along the path, until at the end of the chain you’ve lost basically all of it. The same thing happens all the time in computers, especially in data storage. Thin reclamation is the core technical challenge of thin provisioning, and the telephone game is the reason why.
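The telephone game can be sketched in code. In this toy model (the class and method names are mine, not any vendor’s), the filesystem frees a block in its own metadata on delete, but the thin array only reclaims the space if an explicit unmap message, the analogue of SCSI UNMAP or ATA TRIM, is passed down the chain:

```python
class ThinArray:
    """Toy thin-provisioned array: allocates physical space on first write."""
    def __init__(self):
        self.allocated = set()

    def write(self, block):
        self.allocated.add(block)

    def unmap(self, block):
        # The only way the array ever learns a block is free again
        self.allocated.discard(block)

class Filesystem:
    def __init__(self, array, reclaim=False):
        self.array = array
        self.reclaim = reclaim  # does this FS pass frees down the stack?
        self.in_use = set()

    def create(self, block):
        self.in_use.add(block)
        self.array.write(block)

    def delete(self, block):
        self.in_use.discard(block)   # FS marks the block free in its own metadata...
        if self.reclaim:
            self.array.unmap(block)  # ...but the array only hears about it if told
```

Without the reclaim path, the array goes on treating every deleted block as allocated forever: the message was lost at one step in the chain, exactly like the telephone game.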
One of the amusing aspects of being self-employed is watching all the giants battle it out. Every company is gunning for someone, but the amazing thing is that they rarely have each other in their sights: NetApp is gunning for EMC, who’s more focused on HP, who wants to knock off Oracle, who’s fixated on IBM. It sounds very “high school romance,” but this is deadly serious business.
Remember what it was like to drive without a GPS? Sure, it’s possible, but a good GPS takes it to a whole new level. Need gas? A Denny’s Grand Slam? A detour around traffic? You’ve got it! And when the kids start asking “how much longer” you have a precise answer! Old-school server metrics are like the gauges in your car: they show what’s happening now and can be useful to the driver, but a lot of questions are left unanswered. This is where application performance monitoring comes in: rather than just checking server stats, APM gives credible, actionable, and user-focused answers about the state of your systems.
Now that the hype of “cloud everything” is subsiding, organizations are getting down to work deploying cloud storage to do actual useful tasks. The march from CAS to cloud to object storage has seen high-profile high-end flare-ups (think EMC Centera and Atmos) but the bulk of work is done by more pedestrian (think lower-cost) hardware and software. Through it all, Paul Carpentier has been at the forefront. Now his company, Caringo, is back in the news, delivering much-needed storage service features like multi-tenancy, named objects, dynamic caching, and web services.
Today, IBM alerted the world that it had not fallen asleep at the wheel by kicking out an awfully impressive midrange storage array, the Storwize V7000. This seems like an excellent device, filled with proven engineering borrowed from the successful SAN Volume Controller (SVC) line of storage virtualization products. But closer examination (and IBM’s own Tony Pearson) reveals that it contains exactly nothing from the Storwize acquisition apart from the name.
Although many people are cynical about the whole idea of best practices, I’m a believer. I think such beasts do exist; it’s just that too many companies, analysts, and especially consultants spend too much time applying the label to whatever works in their best interest at the time. To counteract this cesspool of non-best practices, I thought it best to put down a few ideas of my own. Following are four fundamental best practices I have distilled from almost 20 years in enterprise IT. I wonder if you agree with them.
Today is the (a?) day of reckoning in the 3Par saga, with Dell widely expected to make a counter-offer higher than HP’s bid. But this mega deal, like the Data Domain war before it, sends a strong signal to the enterprise IT world: It’s open season on data storage companies! But the rising superpowers are also likely looking at networking as an area of expansion. The game is afoot!