Previously, I wrote about what I’d hoped to see and hear at the Symposium. Here’s an overview of what I actually did uncover, with a focus on what felt most relevant to me. Disclosure: this is another sponsored post; travel, lodging, and food were covered by the organization that brought me to the conference.
The concept of Disaggregated HCI is, to me, maybe the most relevant piece of news. I’ve long been involved in the HyperConverged Infrastructure arena. The historical approach has been the “appliance” model, in which the storage typically lives internal to the servers themselves. Storage redundancy is created by spreading the data amongst all nodes, leveraging an N+1 architecture for higher availability. The one problem I’ve encountered with this approach is scalability: to add storage or CPU to the HCI environment, an additional cluster must be added. Should sizing be accurate from the get-go, that’s not an issue, but the scalability is not incremental.
As a result, HPE has taken the design and turned it on its ear. The change has come in terms of the storage. (Incidentally, I’ve always felt that the storage element is the most important piece of the equation.) To make this change, a Nimble architecture has been added to the formula. The design no longer relies on shared storage internal to the nodes, which works quite well but faces that scalability issue; instead, it relies on the embedded dedupe, replication, compression, and IO capabilities of Nimble. Thus, the scalability issue is resolved. Adding CPU to the converged infrastructure requires the purchase of additional HPE servers, and adding storage is the same as in any architecture that uses Nimble as its storage. The key here is that the customer can now choose what makes sense for them. As far as functionality goes, the management interface is identical: the approach is seamless to the administrator, regardless of which method has been deployed, and federation is all still in place.
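To make the scaling difference concrete, here’s a toy Python model of the two approaches. The node sizes (32 cores, 20 TB per node) are made-up figures for illustration, not HPE sizing — the point is only that the appliance model bundles compute and storage, while the disaggregated model grows each independently:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    cpu_cores: int = 0
    storage_tb: int = 0

# Appliance-model HCI: compute and storage are bundled per node,
# so growing either resource means buying the whole bundle.
def scale_appliance(cluster: Cluster, nodes: int,
                    cores_per_node: int = 32, tb_per_node: int = 20) -> Cluster:
    return Cluster(cluster.cpu_cores + nodes * cores_per_node,
                   cluster.storage_tb + nodes * tb_per_node)

# Disaggregated HCI: compute nodes and the Nimble-backed storage
# pool scale independently of one another.
def scale_compute(cluster: Cluster, nodes: int, cores_per_node: int = 32) -> Cluster:
    return Cluster(cluster.cpu_cores + nodes * cores_per_node, cluster.storage_tb)

def scale_storage(cluster: Cluster, shelves_tb: int) -> Cluster:
    return Cluster(cluster.cpu_cores, cluster.storage_tb + shelves_tb)

# Needing 40 TB more in the appliance model drags in 64 unneeded cores;
# the disaggregated model adds just the storage.
appliance = scale_appliance(Cluster(), nodes=2)   # 64 cores, 40 TB
disagg = scale_storage(Cluster(), shelves_tb=40)  # 0 cores, 40 TB
```

In other words, the appliance model only scales in coarse node-sized steps, which is exactly the incremental-scalability gap the disaggregated design closes.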
I also had my first glimpse of Primera. It feels quite a bit like a next-generation 3PAR to me: imagine all the goodness of 3PAR, but leveraging NVMe protocols and their lower latencies. While this is evolutionary, it really marks a moment for HPE. SSDs are fast, but NVMe over PCIe removes the SAS/SATA controller from the data path, and with it the controller-induced latency. The Primera management software seems to have refined the simplicity and efficiency that have been HPE’s goal with 3PAR from the get-go; a lack of overlap makes all this achievable. The reliance on custom ASICs, and the efficiency with which they are produced, become part of the equation. And, of course, InfoSight’s predictive analytics, and some movement toward self-healing of potential issues, make it really compelling.
I believe that deeper integration of InfoSight across the whole HPE portfolio is, and will continue to be, a hugely compelling piece of the equation. We did learn about a number of new capabilities in InfoSight; I can speak to some, but others are not here yet. The sheer number of reference points collected across such a large userbase hugely influences InfoSight’s ability to provide data on your environment, and gives you some ability to:
- Predict problems before they exist
- Mitigate issues before they occur
- Isolate root cause far more efficiently and rapidly should something take place
- Analyze capacity, and determine growth patterns
- Predict what future purchases will be needed
- Assist in balancing load, and in building clusters at ideal sizes
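InfoSight’s actual models are proprietary, but the capacity-planning and growth-pattern idea above can be sketched with a simple least-squares trend fit over hypothetical usage telemetry (the `(day, used TB)` samples and array capacity here are invented for illustration):

```python
def fit_trend(samples):
    """Least-squares line through (day, used_tb) samples: returns (slope, intercept)."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * u for d, u in samples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def days_until_full(samples, capacity_tb):
    """Project when the trend line crosses the array's capacity."""
    slope, intercept = fit_trend(samples)
    if slope <= 0:
        return None  # usage flat or shrinking; no projected exhaustion
    return (capacity_tb - intercept) / slope

# Hypothetical telemetry: usage growing ~0.5 TB/day from a 10 TB baseline.
usage = [(0, 10.0), (30, 25.0), (60, 40.0), (90, 55.0)]
eta = days_until_full(usage, capacity_tb=100.0)  # -> 180.0 days
```

A fleet-wide analytics platform would of course use far richer models than a straight line, but even this sketch shows how telemetry turns into a purchase-planning signal.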
And the future plans look really significant.
A couple of years ago, I wrote about Cloud Volumes, which to me was part of the magic HPE acquired when it bought Nimble. The idea of “cloud-proximate storage,” managed through the same interface as your on-premises Nimble federated storage, is really compelling, as are the lack of egress charges, full replication, and so on, as I discussed here. The future of this approach, the inclusion of container architectures, and the more dynamic nature of IO from the cloudy Nimble environment promise that this will remain very cool. I still think it’s an amazing and beneficial “as needed” approach.
This year’s Tech Summit and Storage Symposium was very cool. I’m once again grateful to have been able to attend; I find myself very intrigued by HPE’s progress, particularly in the storage arena, and I look forward to the future from this great, innovative organization.