
Pure Storage Takes Aim at Disk Storage | Gestalt IT Rundown: June 14, 2023

We’re at Pure Storage’s Pure Accelerate conference this week, and the buzz is focused on the company’s continuing quest to push hard disk drives out of the datacenter. The new FlashArray R4 products are faster, while the new “E family” of FlashBlade and FlashArray target capacity-focused applications like data protection and logging. Another announcement focuses on ransomware recovery, with Pure offering a temporary replacement array on demand. Let’s talk about these stories in more detail. This and more on this on-site Rundown at Pure Accelerate.


0:46 – GigaOm Evaluates Data Storage Security Infrastructure

Justin Warren recently authored a GigaOm Sonar report focused on data storage security posture (DSSP) for data security infrastructure. It highlights the challenges of ensuring data privacy, security, and integrity in modern information systems, particularly in the face of threats like ransomware and industrial espionage. The report explores how vendors are addressing these challenges and making data security easier to achieve. It emphasizes that leading vendors offer a layered set of options to address different aspects of data security, recognizing that each customer’s threat model is unique. The report also divides the DSSP landscape into three categories: primary storage systems, data protection systems, and data security infrastructure, and provides insights and coverage of vendors in each category.

Read More: GigaOm Sonar Report for Data Storage Security Posture (DSSP) for Data Security Infrastructure v1.0


2:29 – AMD Bergamo Packs 128 Zen 4C Cores

During AMD’s Data Center and AI Technology Premiere event, the company revealed new details about its 5nm EPYC Bergamo processors for cloud native applications. These processors, featuring 128 Zen 4C cores, are designed to compete with Intel’s Sierra Forest chips and Ampere’s AmpereOne processors. The EPYC Bergamo processors offer higher core counts than standard data center solutions while maximizing power efficiency for parallel and latency-tolerant workloads. AMD claims a 2.7X increase in energy efficiency with the Bergamo chips.

Read More: AMD Details EPYC Bergamo CPUs With 128 Zen 4C Cores, Available Now


6:30 – IOfortify Launched by VergeIO

VergeIO introduces IOfortify, a ransomware defense feature that offers attack detection and rapid recovery within seconds. The IOfortify software, integrated into VergeOS, monitors the deduplication process to detect ransomware attacks by identifying significant and detectable unique data writes caused by file encryption. When an attack is detected, immediate alerts are issued, allowing customers to take preventive action and activate rapid restoration services. VergeIO’s global inline deduplication technology enables the creation of space-efficient clones called “IOclones,” which can be used for quick recovery of virtual machines or entire virtual datacenters. IOfortify is available to VergeOS customers at no extra cost.

Read More: VergeIO launches IOfortify ransomware defense


9:11 – AMD Takes On NVIDIA with New AI Chip

AMD has announced its most-advanced GPU for artificial intelligence (AI), the MI300X, which will start shipping to select customers later this year. The introduction of AMD’s AI chip poses a strong challenge to Nvidia, which currently dominates the AI chip market. The MI300X offers features such as high memory capacity, enabling it to accommodate larger AI models, and AMD aims to tap into the growing AI accelerator market, which is expected to reach $150 billion by 2027.

Read More: AMD reveals new A.I. chip to challenge Nvidia’s dominance


13:11 – WASIX Enhances WebAssembly with POSIX Compatibility

The WASIX documentation provides a comprehensive resource for developers to understand and utilize the innovative extension to the WebAssembly System Interface (WASI). Developed by the Wasmer team, WASIX aims to enhance compatibility between WebAssembly (Wasm) and POSIX programs, enabling the seamless execution of complex applications in both browser and server environments. This documentation delves into the design philosophy, capabilities, and differences of WASIX compared to WASI, offering insights into its toolchain, runtime support, and a wide range of features, including multithreading, sockets, subprocess spawning, and TTY support. Whether you’re a seasoned developer or a software enthusiast, this documentation serves as a compass, guiding you to harness the power of WASIX in your applications and paving the way for the future of universal applications powered by WebAssembly.

Read More: WASI Documentation – What is WASI?


16:13 – TSMC’s CoWoS Capacity Mostly Consumed by NVIDIA and AMD

According to a report by DigiTimes, AMD and Nvidia are consuming a significant portion of TSMC's chip on wafer on substrate (CoWoS) advanced packaging capacity. TSMC plans to expand its CoWoS capacity from 8,000 wafers per month to 11,000 by the end of 2023 and around 20,000 by the end of 2024. However, even with this expansion, it is expected that Nvidia will utilize approximately half of TSMC's capacity. The growing demand for technologies like 5G, artificial intelligence (AI), and high-performance computing (HPC) is driving the need for complex multi-chiplet designs, leading to increased demand for advanced packaging solutions like CoWoS. TSMC is facing challenges in meeting the demand due to the limited capacity expansion and lead times for packaging equipment. AMD is also looking to secure additional CoWoS capacity for next year.

Read More: AMD and Nvidia GPUs Consume Lion’s Share of TSMC’s CoWoS Capacity


19:51 – Pure Storage Takes Aim at Disk Storage

We’re at Pure Storage’s Pure Accelerate conference this week, and the buzz is focused on the company’s continuing quest to push hard disk drives out of the datacenter. The new FlashArray R4 products are faster, while the new “E family” of FlashBlade and FlashArray target capacity-focused applications like data protection and logging. Another announcement focuses on ransomware recovery, with Pure offering a temporary replacement array on demand. Let’s talk about these stories in more detail.

Read More: Pure Storage Newsroom – See the latest announcements, analyst reports, awards, and news coverage on Pure and the storage market.


23:37 – FlashArray//E Targets Bulk SAN

FlashArray//E is the latest addition to the "E family" from Pure Storage, following the release of FlashBlade//E. The E family targets high-capacity applications and aims to provide better and more cost-effective solutions than disk or hybrid SAN arrays. The controllers of FlashArray//E are related to the new FlashArray R4, utilizing similar QLC DFMs and components. Pure has been working to displace disk for over a decade, and this announcement primarily emphasizes capacity expansion for applications such as data protection, healthcare imaging, logging, and cold data storage.


25:49 – Evergreen//One Supports Ransomware Recovery

Evergreen//One now encompasses six concurrent service level agreements (SLAs): uptime, buffer capacity, performance, zero planned downtime, energy efficiency, and now ransomware recovery. The new ransomware recovery SLA offers customers a clean loaner array for recovery when the original array must be held for forensics or is seized by law enforcement, with next-business-day availability in the US/EU. This service is provided as an add-on subscription with professional services and involves partnerships with Veeam, Cohesity, Commvault, and others. It is the first add-on to Evergreen//One, offered separately to cater to varying customer needs and preferences.


29:11 – Pure’s FlashArray R4 Runs on Sapphire Rapids

R4 is the next-generation hardware platform for Pure Storage's FlashArray//C and //X models, which are differentiated based on performance and the type of flash used. The X model offers sub-millisecond response times with TLC, while the C model ranges from 2 to 4 milliseconds with QLC. The R4 platform is based on Intel's latest 4th generation Xeon Scalable processors, featuring DDR5 memory and PCIe Gen 4 interconnects.


33:21 – The Weeks Ahead

Pure Accelerate June 14-16, 2023

Security Field Day June 28-29, 2023

Edge Field Day July 12-13, 2023


Full Transcript

But first, Justin, you recently authored a GigaOm Sonar report focused on data storage security posture for data security infrastructure. You highlighted the challenges of security, privacy, integrity, and so on in modern storage systems. The report talks about how vendors are trying to address these challenges and also talks about specifics of different vendors and what they do. Tell us a little bit more.

Yeah, so with these reports we were trying to take a slightly different approach to the way we talk about data security. It tends to be very focused on either the infosec side or the infrastructure side. So what I was trying to do is encourage vendors, and in fact customers as well, to rethink how they deal with security of data center infrastructure specifically. There are a whole bunch of features and functions that exist inside various tools and software you can buy, and they all need to work together to give you a more holistic idea of how we protect the data from multiple kinds of threats. There's lots of talk among vendors, and particularly in the media, about ransomware, but that's not the only way that your data can become damaged or indeed be shared with people who shouldn't have access to it. So what we were trying to do is ask, okay, how do you look at this as infrastructure? How do you build security infrastructure, rather than leaving it sitting over in the InfoSec field and ignoring it when you're talking about data storage? That was the general thrust of it, and then we had to look at how all the different vendors were addressing that and how well they do.

During AMD's Data Center and AI Technology Premiere event, the company revealed new details about its five nanometer EPYC Bergamo processors for cloud native applications. These processors feature 128 Zen 4C cores, and they're designed to compete with Intel's Sierra Forest chips and Ampere's AmpereOne processors. The EPYC Bergamo processors offer higher core counts than standard data center solutions while maximizing power efficiency for parallel, latency-tolerant workloads, and AMD is claiming a 2.7 times increase in energy efficiency with these Bergamo chips.

Yeah, it's interesting. This was a big announcement that a lot of us were really waiting for. I closely watch the data center market, and especially data center CPUs and storage, and Bergamo was an announcement that, well, frankly, most of us saw coming. Most of us knew what was going to happen and we were waiting for it to happen, but that doesn't mean that it wasn't surprising. In fact, to me, the biggest surprise here is that AMD and Intel have really taken two different directions with their high core count processors.

So just to be clear, traditionally processor makers basically had one kind of core and packed it onto the chip, and that was that. ARM and then Intel released sort of a big.LITTLE approach where they had high performance cores and lower performance but higher efficiency cores. So most of the latest Intel CPUs feature some mix of high performance cores and efficiency cores. The Xeons don't, they're just the high performance cores right now, but I think most people are expecting that Intel will have a massive efficiency core processor with tons and tons of processor complexes, and even though each of them is smaller, they can fit a whole bunch more on a die because they're smaller, you know, they can cram more in. This is the same thing that Apple's doing, the same thing you find in ARM CPUs. Everybody, I think, expected that maybe that's what AMD would do as well, but they didn't. Instead, Zen 4C is basically the same as Zen 4 with some of the features kind of crammed in together to increase efficiency in terms of packaging to make the processor smaller. And in order to achieve this, they basically, you know, moved this around and moved that around and changed the tolerances on this and made these things smaller and scrunched it up a little bit. And so they end up with a processor core that is roughly the same on a clock for clock basis as the regular Zen 4, but takes up less real estate. And in this case, they were able to put 128 of them on a single chip package. And that's pretty cool because essentially you've got a processor that supports all the features and functions of the rest of the Zen 4 family, but crams a lot more CPUs into the same package. So that's neat. I'm not sure that we should put too much stock into the specific benchmark numbers, in terms of a 2.7 times increase in energy efficiency under what workloads, under what circumstances, that sort of thing. But I think that it is really interesting to see these two very powerful companies, Intel and AMD, both trying to come out with high core count processors, effectively for cloud scale applications. AMD has already announced wins in that as well, where they're going to be having Bergamo-powered cloud instances in public hyperscaler clouds. And I think that that's great for AMD. I think it's great for consumers. I mean, you get more cores, cheaper; that's always better. And it's really cool for us to watch technology to see how things are evolving in different ways from these two big powerful companies.

Justin, VergeIO, who we've spoken about previously on the Rundown, just introduced what they call IOfortify, which is a ransomware defense feature that offers attack detection and rapid recovery within seconds. It's integrated into the VergeOS hyper-converged platform, and it monitors the deduplication process to detect ransomware, which is something, I think, that we've heard from some other companies, including here this week. What do you think of this type of product and what do you think of the VergeIO approach?

Well, it's not unique to VergeIO, which is both a good thing and not so good. There's a few other companies that do something similar. Basically, what you have is when ransomware happens, it's encrypting the data. So when you encrypt things, it becomes much more randomized, which means it's harder to dedupe. That means that if you've got dedupe in your storage infrastructure, you can detect that, oh, this data is actually harder to dedupe now. So that means that there's probably something weird going on. And that's a pretty high confidence signal that we should do something about it. The thing is, ransomware isn't the only thing that can make your dedupe ratios change. So it's not guaranteed that it definitely is ransomware. So you don't want it to auto recover. You want it to then trigger further action and say, okay, is this ransomware or not? We should check it out. Generally, the approach from the vendors is that when they see a high quality signal that they think could be ransomware, they immediately take a snapshot so that you have a recovery point. And ideally, depending on how you do it, if they have continuous data protection, they'll flag, okay, we noticed this happened, it started happening at this point, we'll flag the start point so that you've got an easy place to come and find the recovery. We heard some things from Pure yesterday where they're basically doing some similar features. There's a whole bunch of other vendors who do this. It's useful. The thing is that being able to recover quickly from ransomware is great if you get hit. You want to try not to get hit, though, and it's also not the only way that data can get damaged, and we covered that in the GigaOm Sonar report. There's a whole bunch of other things that can happen to your data, and one of them, even with ransomware, is I can ransom you without encrypting the data. I can take a copy and then I can ransom you. I can say, well, if you don't give me money, I will release the data, and then the regulators will know about it or your customers will know about it. And if you want to avoid that kind of public damage, you can't protect against that with snapshots. So this sort of thing is useful for one specific use case. Great, but you have to look broader than that. You can't just say, "Oh, we have this feature function and now the problem is solved."
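Neither VergeIO nor Pure publishes the exact heuristic, but the signal Justin describes can be sketched in a few lines. The following is a minimal illustration, not anyone's actual implementation, and the chunk size, threshold, and function names are invented for the example: watch the deduplication ratio of a write window and, when duplicates suddenly vanish the way mass encryption makes them vanish, take a snapshot and raise an alert rather than auto-recovering.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

const CHUNK_SIZE: usize = 4096;   // illustrative fixed-block chunking
const ALERT_THRESHOLD: f64 = 0.2; // illustrative: alert if the ratio drops by 20 points

/// Fraction of incoming chunks that were already seen: a crude stand-in
/// for an inline deduplication ratio measured over a write window.
fn dedupe_ratio(window: &[u8], seen: &mut HashSet<u64>) -> f64 {
    let mut total = 0usize;
    let mut duplicates = 0usize;
    for chunk in window.chunks(CHUNK_SIZE) {
        let mut h = DefaultHasher::new();
        chunk.hash(&mut h);
        if !seen.insert(h.finish()) {
            duplicates += 1; // fingerprint already present: a dedupable write
        }
        total += 1;
    }
    if total == 0 { 0.0 } else { duplicates as f64 / total as f64 }
}

/// Encrypted data is effectively random, so duplicates all but disappear.
/// The response is a snapshot plus an alert, never automatic recovery.
fn check_window(baseline: f64, current: f64) -> Option<&'static str> {
    if baseline - current > ALERT_THRESHOLD {
        Some("dedupe ratio collapsed: snapshot the volume and alert, do not auto-recover")
    } else {
        None
    }
}

fn main() {
    let mut seen = HashSet::new();

    // Highly dedupable "normal" writes: repeated identical blocks.
    let normal = vec![0u8; 64 * CHUNK_SIZE];

    // Stand-in for encrypted writes: a crude LCG yields effectively random
    // bytes, so almost every chunk fingerprint is unique.
    let mut state: u64 = 0x9E37_79B9_7F4A_7C15;
    let encrypted: Vec<u8> = (0..64 * CHUNK_SIZE)
        .map(|_| {
            state = state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            (state >> 56) as u8
        })
        .collect();

    let baseline = dedupe_ratio(&normal, &mut seen);
    let current = dedupe_ratio(&encrypted, &mut seen);
    println!("baseline dedupe ratio {:.2}, current {:.2}", baseline, current);
    if let Some(action) = check_window(baseline, current) {
        println!("{}", action);
    }
}
```

With the all-zero "normal" window the ratio comes out near one, while the pseudo-random stand-in for encrypted writes dedupes to almost nothing, which is exactly the gap a real system would alert on before a human confirms whether it is actually ransomware.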

More announcements in the chip world. AMD has announced its most advanced GPU for artificial intelligence. It’s called the MI300X which will start shipping to select customers later this year. The introduction of AMD’s AI chip poses a strong challenge to Nvidia which currently dominates the AI chip market quite famously so. The MI300X has got some neat features like high memory capacity so it can accommodate some larger AI models. You need a lot of memory to be able to put big models in with lots of features and AMD is basically trying to tap into a really big AI accelerator market. It’s expected to reach about 150 billion by 2027. There’s a huge amount of hype about it at the moment. Everyone is piling into this market to try and make as much money as possible.

It's an interesting space, isn't it? We've got NVIDIA, which is obviously doing very, very well in machine learning training and is powering a lot of machine learning in the clouds, machine learning supercomputers, that sort of thing. AMD, of course, wants a piece of that. So does Intel, so do a lot of companies. Cerebras, for example. And each company, again, is taking their own approach to how to break into this market. NVIDIA's approach is basically relying on people loving their processors, loving CUDA, loving working with NVIDIA processors, and having the NVIDIA GPUs be among the strongest on the market and among the earliest on the market to come out with newer features and better performance. So NVIDIA is in a really strong position. However, the massive growth in interest in artificial intelligence and machine learning models and so on has driven a lot of demand for this. And so it's no surprise that AMD, which is of course the other big GPU company, would be able to leverage their technology both on the GPU and the CPU side to come out with a machine learning accelerator that can compete with NVIDIA. And that's just what they did with the MI300. Now the 300X has some cool features as you mentioned. Essentially this is an all GPU offering. Previously they had kind of mixed and mashed a little bit. They're also mixing in some HBM, high bandwidth memory, in order to create a bit of a, I don't know, a turbocharged AI processor. The interesting thing to me though is not that AMD has come out with a competitive chip, because frankly they're great and of course they did. It's the way that they're going to market with this thing. AMD realizes that in order to break into the market, they need to have customers adopt it. And so AMD is working with companies like Hugging Face, which has gotta be one of the more fun names for a company in the world. That is actually the name of a real company if you're not aware of them. In fact, it's a powerhouse in the AI space. And so AMD is working with them to make sure that their models run really, really well on the AMD processors. Similarly, Intel has announced that they're gonna build, I don't think they've announced it yet, but well, Intel was rumored to be building a massive AI supercomputer in the cloud for people to use their Gaudi Habana processors. Nvidia is trying to seed theirs everywhere as well. It's a big race and it'll be interesting to see how this all works out. Now this last bit about the dollars I think is interesting too, because previously we had looked at the AI market as growing to, you know, 40, $50 billion in a few years. AMD is now saying $150 billion just a couple more years after that. That's big. I don't know if I believe that the market is a $150 billion market, but I guess we'll see, and it's pretty interesting to think that one of the major providers thinks that it could reach that kind of number.

Justin, we've been talking about WebAssembly quite a lot, and you've been quite interested in it as well. And those of us who've been in computing a long time know about POSIX, which is the standard Unix interface. You were telling us about this project called WASIX, which basically provides WebAssembly enhanced with POSIX somehow. Can you tell us a little bit more about WASIX?

Sure, so WASIX is an effort by a particular startup. It looks like they're trying to push forward with the WebAssembly System Interface. So I won't go too deep in the weeds on it, but the system interface is a really important part of WebAssembly outside of the browser. It's what will make WebAssembly really, really cool. You need to have a way for different components to talk to each other. So different programs need to be able to talk, say, over the internet; you need to have sockets, you need to have threads and things like that. That's what's coming with the official standard for the WebAssembly System Interface. WASIX is an effort to say, okay, we need some POSIX interface things, so we'll build one ourselves so that you can write programs that can, for example, use threads and talk over a network socket. That's not possible with the WebAssembly standards today, which means that you can't write really interesting and useful programs. You're kind of limited to things that basically just use functions. It's a little bit more limited. So what they're trying to do is say, well, we want to be able to write really useful programs, and we want to do it now. So rather than waiting for the standard to settle, we'll push this out and then hopefully get people interested in writing programs in the WebAssembly ecosystem while we wait for the standardization to happen. I actually think that maybe the startup is trying to push their approach to things and maybe get that into the standard, because that provides that particular startup company with a certain advantage. If you can get everyone else to agree that, oh yeah, our thing is totally the standard now, you've already got a head start. I don't know that that's actually gonna happen. And there are some limitations to the way this WASIX thing works that I think might actually make it a bit of a dead end, we'll see, but it is needed to actually push the standards process, I think. It is running fairly slow, and we really, really do need this WASI standard thing to come out so that we can have all of the features and functions that we need to get the component model for WebAssembly. And that's where you'll be able to have programs that you write in any language. As long as you write to the WASI standard, I can write a module in Python, and then someone else can use that as a library in a program written in Rust, which is the promise of what Java could do, like write once, run anywhere. Only now I don't have to write it in Java. I can write it in my own favorite language. I just target it at WebAssembly. It compiles to WebAssembly, and the WASI standard means that you can plug all these components together and get programs that run anywhere in any language you like, which would be extremely cool if we can pull it off. Big if.
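For a sense of what "threads and sockets" means in practice, here is an ordinary Rust TCP echo server using nothing but the standard library. This is a hedged illustration rather than anything taken from the WASIX documentation, and the build details for a Wasm target are deliberately left out: the point is simply that a program like this, which depends on POSIX-style sockets and threads, has no way to express itself under today's base WASI interface, and that closing exactly this gap is what WASIX (and eventually the WASI standard itself) is aiming at.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Echo whatever a client sends until the connection closes.
fn handle(mut stream: TcpStream) {
    let mut buf = [0u8; 1024];
    loop {
        match stream.read(&mut buf) {
            Ok(0) => return, // peer closed the connection
            Ok(n) => {
                if stream.write_all(&buf[..n]).is_err() {
                    return;
                }
            }
            Err(_) => return,
        }
    }
}

fn main() -> std::io::Result<()> {
    // Listening sockets and per-connection threads: unremarkable natively,
    // but exactly the POSIX-flavored behavior WASIX adds on top of WASI.
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    println!("echo server listening on 127.0.0.1:8080");
    for stream in listener.incoming() {
        match stream {
            Ok(stream) => {
                thread::spawn(move || handle(stream));
            }
            Err(e) => eprintln!("accept failed: {e}"),
        }
    }
    Ok(())
}
```

The interesting part is not the server itself but the claim that source like this should compile and run unchanged against a WebAssembly target once the system interface grows these capabilities, which is the promise Justin is describing.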

Now according to a report by DigiTimes, AMD and Nvidia are consuming a significant portion of TSMC's chip on wafer on substrate, so "CoWoS", I think, is how we pronounce it, their advanced packaging capacity. So TSMC plans to expand their CoWoS capacity from about 8,000 wafers a month to 11,000 by the end of the year, and around 20,000 by 2024. So even with this expansion, it's expected that Nvidia is gonna utilize approximately half of TSMC's capacity, which is just an enormous amount. The growing demand for these technologies like 5G, all the AI/ML things, high performance computing, this is all driving the need for these really complex multi-chiplet designs, which TSMC is really great at. And that's leading to increased demand for this advanced packaging stuff like CoWoS. So TSMC is facing challenges in meeting this demand just because they don't have the capacity to build it fast enough. And so they're trying to expand and build more stuff, and the lead times for this packaging equipment are also a bit of a challenge. AMD has pretty similar issues, and they're looking to secure additional capacity for next year as well.

Yeah, it's an interesting situation, because as we've embraced, we as an industry, I mean, not me, I haven't built any of these things, but as we as an industry have embraced this chiplet concept, I think that it has a lot of promise. I mean, the idea of chiplets, the idea that there's a standard for interconnecting these things, UCIe, which Intel is promoting, AMD and Nvidia are both using chiplets as well, and many companies are looking at this sort of thing. Apple with their M series are looking at, you know, chiplets on wafers. And essentially what we have is, yeah, little chip components. And instead of having that be the processor, we essentially solder them to another silicon substrate that allows these chiplets to be combined. And that was one of the superpowers that AMD utilized to make the Ryzen processors so great. It's one of the cool things that Nvidia has been doing, and as you mentioned, Intel's leaning into it as well. The problem is that this just exacerbates the supply constraints and the manufacturing constraints that we've already been facing, because essentially these companies now need not just to produce chips, but also to produce the substrates that those chips go on. And not only that, but they need to then have the capacity to take the chiplets and manufacture them on the substrates in a way that is reliable and repeatable and, you know, basically creates the product that they're trying to create. And we've heard that the yields on some of these chip on wafer on substrate products have been fairly poor, and this is another thing that kind of cuts into the supply of processors. So TSMC is working very, very hard to improve and increase capacity, and it's very likely that they'll be able to have this all rolled out fairly soon, but even if they do, they might not be able to meet the demand from the rest of these customers, even if they expand capacity. They'll have to expand more, and other companies will have to get into this market as well.

Justin, we are in Las Vegas with Pure Storage at the Pure Accelerate conference this week, and the buzz is focused on the company’s continuing quest to push hard disk drives out of the data center. The new FlashArray R4 products are faster. The new E-Family of FlashBlade and FlashArray are higher capacity and lower cost. And there’s another new announcement here focused on ransomware recovery, which I know you found interesting. Pure is really trying to lean into this topic, which is basically the same messaging we’ve been hearing from them for 10 years. What’s your take overall on Pure Accelerate?

Yeah, so Pure, I mean it's great to actually be back in person. I haven't been at one of these for several years, but it's nice to actually be in the room and hear from people directly about what the plan is. There's a bit of a shift in how they're talking about what Pure is doing. For a while it sort of got, I suppose, a little bit muddy; now they're being a bit more direct that their purpose in life is to get rid of hard drives, which is really clear. I do actually like that they're not afraid to say we're a storage company. It's in the name, and now, the way flash is going, it's basically inevitable that the hard drive is going to go away. They're going to push that message a lot harder themselves and say, look, we've been a flash company since founding. That's the whole reason this company exists. And they now have a pretty full portfolio of options. Something that came up in the briefings yesterday was that until now, before the FlashArray//E, there wasn't really a whole solution from Pure. You could buy Pure Storage and do a whole bunch of stuff with it, but if you had capacity workloads, you pretty much had to go and buy from somebody else, because Pure didn't really have something that would do that. And now that they've got capacity flash, you can buy everything from Pure, which means that they can start going into a competitor account and sweeping the floor. So if you wanna get rid of an old hard drive system, you can just buy brand new stuff from Pure. Whatever the workload is that you need, you can find something in the Pure portfolio that will work for you. So I think we're gonna see a much clearer message from them: you can buy Pure, you can do whatever you like on it, and they're gonna get a lot more pointed with that for everyone who is trying to get rid of hard drives.

To me, it reminds me of the electric car market, in that essentially, by building these all-flash arrays, Pure has a product that is obviously better than the incumbent hard disk-based systems, mainly in terms of performance and efficiency, just like electric cars. And it's one of those things, I think, where most people thought, these are exotic systems, these are expensive systems, I can't possibly get an electric car, or an all-flash storage system, that replaces my regular, everyday normal use case. We are now at the same point with both of these technologies, where companies are offering products that absolutely can compete on price and absolutely wipe the floor on performance and efficiency. And yet there's still a huge market for these legacy products, hard disk based storage arrays, hybrid arrays, that sort of thing. But that being said, I think that the announcements we've seen this week from Pure are important mainly because they show that Pure is absolutely interested in going after pretty much everything in the data center. As you said, making sure that there's not really a corner of the enterprise that can't be running on an all-flash Pure array. So I want to take a look at some of these announcements specifically.

So first off, I want to talk about the E family. A few months ago, Pure introduced the FlashBlade//E, which is basically a QLC-based, lower performance, higher capacity, and less expensive version of the FlashBlade. The FlashArray//E follows that and is the same kind of thing, except for FlashArray. So essentially the FlashBlade is a scale out system that fundamentally is an object store but also supports NAS protocols. The FlashArray is fundamentally a scale up SAN storage system, but of course it's unified; it supports NAS protocols as well. It doesn't support object, but essentially we've got a scale up that targets mainstream workloads in the data center, and a scale out that targets object and file unstructured data workloads. And the E family is lower cost, higher capacity, and targeted at secondary applications, cold storage, bulk storage, basically the things that previously flash couldn't go into. The FlashArray//E is basically in the one to four petabyte range, which sounds fairly large but really isn't for a lot of these bulk storage applications, and the FlashBlade//E is kind of four plus petabytes. And so you could start thinking about that maybe as a backup target, or something to hold older medical imaging or AI data that you're no longer processing but might wanna go back to, that sort of thing. So again, this all goes to Pure's strategy of targeting every use case for spinning hard drives, which in this case is basically lower performance, slower speed bulk hard drives, with an all flash system that comes in at a competitive price. So that's the FlashArray//E and the FlashBlade//E.

There's also an announcement by Pure about Evergreen//One. They've expanded that; it did sort of have five SLAs in it, and they've added another one, which is around ransomware, which is kind of interesting. It's an optional SLA, or like a paid add-on. And the idea is that as well as having all of the SLAs around uptime and how much performance you get, this one is around recovery from ransomware. And the thing that I really thought was interesting is that they pulled out one of the use cases that you don't really think about when you're recovering from ransomware. Sometimes, and particularly when it's important data, you can't actually just recover straight over the original array, because you need it for forensics. So you have to kind of freeze the array so that you can look at it. And if you've got law enforcement involved, they wanna be able to see it to gather evidence, to be able to go and do something with that, which means that your array can't be recovered and put back into production. So while that's happening, which can take weeks, sometimes months, what do you do? The idea with this is that if you subscribe to this SLA, you can get another array from Pure as a loaner, I think for 180 days' worth, and they will just give you an array that you can then recover to. And because of the way Pure structures all of its stuff, the heads don't matter; it's all about the data. So if you recover the data to that array, it's basically the original array. So you can have an array that you put back into production and keep running on, while you keep one for forensic reasons and do all of the ransomware recovery type analytics: what went wrong, how did this happen, what do we need to fix? And then after that's all finished, you can give the loaner array back and go back into production the way things would normally work. I think that's really interesting. One caveat is that you have to take out the SLA subscription on your entire fleet, which kind of makes sense because you don't know what's gonna get hit by ransomware, and Pure needs to understand what sort of capacity it needs to have, what arrays it needs to keep in stock that it can ship out to customers. But it does add an extra way of recovering that keeps customers happy. It makes the customer experience easier when they're basically having their worst day. This just takes one headache away, which I think, particularly for a lot of enterprise customers, is actually gonna be really valuable just to have there as kind of an insurance policy. Having that work as an SLA, I think, is also quite interesting, because it's pushing Pure to say, look, this is a commitment we're making to you, so if we don't live up to this, there are extra benefits you can get back from us. I'm not exactly sure what happens there. Always ask: if you violate the SLA, what do I win? So definitely dig into those details and what the impact would be if they don't live up to that SLA. But it's mostly a signal to the market saying, we are this committed and we will live up to this promise, which we've added onto the rest of the promise of Evergreen//One, which is quite a large promise in itself: this should operate more like a service and less like a product where, if it breaks, well, tough.

Yeah, I really think that it's an interesting play for Pure, and it really leans into what they're good at, which, not surprisingly in my opinion, is not just flash. It is the whole customer experience, the whole experience of owning and using and living with these storage systems. So it makes a lot of sense. I also have to point out that Pure has revved the FlashArray//C and //X models. The new R4 version looks to me like it's running fourth generation Xeon from Intel. It says Intel on the lanyards, and they run DDR5 and support PCIe 4, so I'm guessing those are Sapphire Rapids chips. Now it's not the kind of thing that people really kick the tires and look at the specs on. I don't think that a storage array buyer should be really too worried about which exact processor is running on the controllers. But what they should be interested in is the fact that there's been a big jump in performance for these systems, thanks to these enhancements. The FlashArray//C and //X are basically Pure's bread and butter. These are the systems that are out there all over the place, and the new controllers are just a lot faster, and they support new features. You know, there's also the XL, which is the really big FlashArray, and there's the E, which I mentioned before, but these guys sit in the middle, and it's just nice to see them upgraded. One of the things, too, of course, that's nice is that the Pure Evergreen model means that companies can upgrade a previous C with the new controllers to get the newest C and more performance. And as you said, the whole personality of the array just stays the same. You just swap out the controllers, and suddenly it's like you've got a new car with a new engine in it. And that's pretty cool. There are also approaches that allow people to move between arrays, but that's a little bit more challenging, and frankly, most people in production are just going to want to continue to upgrade the system they already have.

We heard a story from Coz, one of the founders of Pure Storage, about one of their original customers who bought the original pre-release FlashArray way back, 12 years ago I guess, and is still running it in production. They have replaced literally every component of it, but as far as it is logically concerned, it's still the same array that they bought 12 years ago. And I think that shows, again, that Pure is really fundamentally about that ownership experience more than it is about any specifics of the technology involved. And that whole upgrade process was non-disruptive. And that, I think, is kind of amazing: that you can swap out all of the different components, real ship of Theseus stuff, and do that for 12 years with non-disruptive upgrades. So the storage is basically immortal. That aspect I think is really, really cool.

One other aspect I noticed was that Pure was talking about maybe pushing down some capabilities that exist in Pure1, the software as a service thing that every array plugs into so you can look at analytics and understand what's happening with your whole fleet. They're starting to look at how to push certain capabilities that would otherwise exist only in the cloud, because there are some challenges around geopolitics, about where data is and whether we should actually be shipping it to different countries. So how can we push some of these capabilities into the array? The fact that we've now got better processors and more capability in the arrays means that if there's extra headroom that's not being used for performance, it's available to be used for other bits and pieces that would otherwise sit outside the array. It's going to be interesting to see how they start pushing some of that capability to run in the arrays themselves.

Yeah, and I think that it certainly is an interesting situation. And again, it all comes back to the fact that unlike so many other companies that are moving to a software on commodity hardware approach, Pure is leaning into a complete integrated solution all the way down to the chips, the NAND chips that they’re spec-ing and purchasing all the way up to the customer experience and upgrade process. It is pretty much an end-to-end solution, and that gives them some capabilities that some of the other storage companies just don’t have.

Well thank you so much for joining us today Justin on the rundown. It’s always great to have a guest. It’s always great to record on premises and to see you in person here.

It’s been a while.

This week we're at Pure Accelerate, as mentioned, and you'll see some coverage from us at GestaltIT.com, as well as probably the rest of your favorite news sites, from Pure Accelerate. Soon we've got Security Field Day coming up. I recommend checking out TechFieldDay.com and learning more about the Security Field Day presentations and so on. I'll also point out that the videos from Cloud Field Day and Cisco Live are now posted, so if you missed those, and if you're interested in any of the products and companies covered, as well as of course our roundtable discussions and podcasts with the delegates, please do check out the Tech Field Day YouTube channel.

Thanks for joining us for the Gestalt IT Rundown. You can catch new episodes every Wednesday as a YouTube video, or you can find us in your favorite podcast application. We'll be back next Wednesday to talk about all of the IT news of the week that was, but until then, for myself, for Justin, for Tom Hollingsworth, and all of us here in the Gestalt IT family, thanks for joining us. And here's wishing you and yours a great day.

The Gestalt IT Rundown is a weekly look at the IT news of the week. You can catch it as a YouTube video and on your favorite podcast application every Wednesday. Be sure to subscribe to Gestalt IT on YouTube for even more weekly video content.

About the author

Stephen Foskett

Stephen Foskett is an active participant in the world of enterprise information technology, currently focusing on enterprise storage, server virtualization, networking, and cloud computing. He organizes the popular Tech Field Day event series for Gestalt IT and runs Foskett Services. A long-time voice in the storage industry, Stephen has authored numerous articles for industry publications, and is a popular presenter at industry events. He can be found online at TechFieldDay.com, blog.FoskettS.net, and on Twitter at @SFoskett.
