Western Digital is enabling the network attached hoarder in your life. They’ve beefed up their WD Red and Red Pro lines with up to 10TB per helium-filled drive. To hit this capacity, WD is using seven 1.42TB platters per drive, up from six on last year’s capacity-topping 8TB models.
Confused by all the conflated claims around the abilities of “Big Data”? Karen Lopez is here with a little explainer about what it means to use data in the age of cloud computing. More importantly, Karen doesn’t mince words. Click here to find out why “Big Data” isn’t a thing.
When Amazon announced they were opening an AWS region in Sweden, I asked where they were going to expand next. If you look at their map, there’s a continent-shaped hole. Amazon didn’t take the hint, but Microsoft seems to be on board. The company announced they will be opening data centers in Cape Town and Johannesburg, starting in 2018.
When a category becomes settled, a bit of tedium begins to set in. Room for innovation rapidly shrinks, and the category becomes more about efficiency and refinement than redefinition. That’s the kind of rut I felt the hyperconverged infrastructure market was settling into. There are still marked differences in price, features, and capability between the players. But the literal configuration of hardware seemed to have homogenized.
Datrium is trying to change the expectations of hyperconvergence. Instead, they’re billing their concept as Open Convergence, their response to the traditional limitations of HCI. Their basic approach is to separate bulk storage from compute, flash, and networking.
Quantum computing has advanced beyond being purely theoretical or the purview of science fiction. Several companies have built specialized quantum computers as research projects or proofs of concept. IBM put up a publicly available quantum computer for testing with their IBM Q initiative. They’ve now expanded that from an available 5-qubit processor to a 16-qubit one. But it’s still the Wild West for the field.
For example, simply measuring performance gets surprisingly difficult. That’s easy to forget in classical computing, with its bevy of established benchmarks, but on the quantum side even the language for performance isn’t agreed upon. Chris Lee at Ars Technica gives an in-depth look at what IBM is introducing as a measure of quantum computing performance: quantum volume.
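As I understand IBM’s early proposal (the exact definition may have shifted since, so treat this as a rough sketch rather than gospel), quantum volume scores a machine by the largest square circuit it can run reliably, balancing qubit count against usable depth:

```latex
% Quantum volume, roughly as sketched in IBM's early proposal.
% n = number of qubits used, N = qubits available on the device,
% d(n) = achievable circuit depth before errors dominate,
% which shrinks as the effective error rate eps_eff(n) grows.
\tilde{V}_Q = \max_{n \le N} \left[ \min\bigl(n,\, d(n)\bigr) \right]^2,
\qquad
d(n) \approx \frac{1}{n \, \epsilon_{\mathrm{eff}}(n)}
```

The intuition is that more qubits alone don’t help if error rates keep circuits shallow: a 16-qubit machine that can only run a handful of gate layers may score no better than a cleaner 5-qubit one.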
In the last few months, I’ve had to name quite a few things. I’ve named a child, a podcast, and a car (a Honda CR-V dubbed “Cool Runnings”). Coming up with a name can be very difficult. The name needs to be simultaneously catchy, evocative, memorable, and unique. Add in a corporate setting with committees and marketing getting involved, and it’s a wonder that anything gets named at all.
That being said, AMD has had a tough go of it with their new CPU naming conventions.
Want to use a supercomputer but don’t have a spare Scrooge McDuck vault of money available? Venerable supercomputer titan Cray is trying to do something about that, partnering with Markley to bring Supercomputing-as-a-Service to the masses. And by masses, I mean well-funded organizations specializing in life sciences.
Can a framing metaphor be a product differentiator? In Turbonomic’s case, I think it can. They use a supply and demand model for their application assurance platform. This brings some interesting implications into the overall solution.
At Gestalt IT, we’re no strangers to the contentious confusion over “premise” vs “premises” (have you seen our podcast?). Dave Henry wrote up his thoughts on the mini-controversy. They’re well reasoned, and I agree with his ultimate conclusion: there’s simply more precise verbiage available that makes the entire argument moot.
But I have to take some issue with Dave’s process up to that point.