The recent Meltdown and Spectre attacks illustrate the problematic nature of modern computing systems. While the earlier Rowhammer attack could read or attack one process running in a virtual environment from another process running on the same processor, the Meltdown and Spectre attacks are of a completely different class, enabling a process to read large amounts of information from another process’ memory space. This is because inside each host, CPU makers have embraced a scale-up approach. But what would happen with a more scale-out approach to the architecture?
We’ve seen OS and chipmakers respond to the Meltdown and Spectre flaws with patches since the rampant speculation last week. What remains unanswered is how these patches will specifically impact performance on storage arrays and HCI systems. Chris Evans runs down what we do know, and some of the initial company responses.
On today’s show, each of our roundtable panelists chose what was the hot ticket item of 2017. Tune in to hear their arguments why 2017 was the year of SD-WAN, HCI, Net Neutrality, or Data Management!
Can an organization actualize the benefits of hyperconverged infrastructure without changing their IT methodology?
Sometimes a company’s code name for projects in development can give you some insight into how they view it. The one that always sticks in my mind is “Revolution”, Nintendo’s code name for what ultimately became the Wii. It showed how different the console was from anything in the company’s past, and reflected the impact Nintendo expected of it.
In the same way, Cisco’s Project Starship has now been launched as Intersight. The name loses some geek factor, but is probably much better for IP. Much like the codename implies, this is a project that is clearly linked to how Cisco sees the future of their business. Cisco has been working on this for a while, and it’s a natural extension of their Unified Computing Systems that they’ve had for almost a decade.
Hyperconverged infrastructure has been around for a while. We’ve seen companies go public on the strength of the market, and companies get acquired for the same reason. It’s a way to simplify the often complex world of provisioning and managing a virtualization infrastructure. But HCI has been around long enough that the limitations of that model have become clear to the enterprise. Any new entrant to the crowded market should have solutions to those problems.
Today, NetApp announced their entry into the HCI market. In their messaging, they hammered home those limitations.
In this iteration of Gestalt Server News:
– Datrium makes its case for Open Convergence
– We find out what exactly Big Data is
– Disambiguating HCI and Hybrid Cloud
Plus, Cray is partnering to bring supercomputing as a service to the masses
At Tech Field Day, we heard from three different components of Dell EMC’s not inconsiderable family. The first was an update on VxRail, their hyperconverged infrastructure offering. I knew this was going to be a different type of presentation, because in the overview, they were upfront that they’d be going over what’s been working for the merged division, and where they were falling short. Most companies will be honest when asked about their shortcomings, but not every company will put it directly into their slide deck. It’s a frankness that I found refreshing.
It’s time for Gestalt News once again! This week in servers:
– DR Troopers: Quorum onQ 4.0
– AMD: The Last Decade
Plus Sysadmin Chatbots, The “Why” of HCI, APIs, privacy, and patent trolls!