Controversy over AI, from ChatGPT to Bard, has been all over the news, and the impact of Large Language Models is hitting enterprise IT. We’re seeing litigation against GitHub, Microsoft, and OpenAI just as the US Congress heard testimony from OpenAI CEO Sam Altman. And of course IT pros everywhere are grappling with the various ways their jobs will be affected by AI. Let’s take a closer look at AI in the enterprise. This and more on this week’s Rundown.
0:58 – Hammerspace Acquires RozoFS
Data orchestrator Hammerspace has announced that it acquired French startup RozoFS for its erasure coding technology, known as the Mojette transform. The technology provides a balance of price, performance, efficiency, resiliency, and availability, making it valuable for organizations with large amounts of data in multi-site and hybrid cloud environments. Hammerspace plans to integrate RozoFS’s technology into its Data Orchestration System to enhance data workflows, accelerate analytics, and improve collaboration. We’ve covered Hammerspace frequently here at Gestalt IT, and they have presented at Tech Field Day many times in the past, so maybe we can provide some perspective on the acquisition.
Read More: Hammerspace buys RozoFS for erasure coding tech
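For readers unfamiliar with erasure coding, the core idea is that redundant "parity" information lets you rebuild lost data chunks from the survivors. Here is a toy sketch using simple XOR parity in Python — an illustrative analogue only, not the Mojette transform itself, which is a more sophisticated projection-based scheme:

```python
# Toy erasure coding via XOR parity: any one lost chunk can be
# rebuilt from the remaining chunks plus the parity chunk.
# (Illustrative only; RozoFS's Mojette transform is more general.)
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(chunks: list[bytes]) -> bytes:
    # XOR of all data chunks, stored alongside them.
    return reduce(xor, chunks)

def rebuild_lost(surviving: list[bytes], parity: bytes) -> bytes:
    # XORing the survivors into the parity cancels them out,
    # leaving exactly the missing chunk.
    return reduce(xor, surviving + [parity])

data = [b"node-A!!", b"node-B!!", b"node-C!!"]
parity = make_parity(data)
# Lose data[1]; recover it from the other chunks plus the parity.
recovered = rebuild_lost([data[0], data[2]], parity)
print(recovered == data[1])  # the lost chunk comes back intact
```

Real schemes like the Mojette transform tolerate multiple simultaneous failures with tunable overhead, which is what makes them attractive for multi-site deployments.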
3:15 – Google I/O 2023 happened
Google showed off some new devices, like the pricey Pixel Fold and Pixel Tablet. It also took this moment to re-establish itself as a leader in AI. Google announced improvements to Bard, and Duet AI is coming to Workspace to do things like help compose emails in Gmail or create presentations in Slides. The company is also pushing AI integrations into Google Search to provide more useful search results, which you can test out now in Labs. PaLM 2 is their answer to GPT-4 and is integrated with Bard. Improvements allow some portions to run locally on devices instead of hitting the cloud.
Read More: What you need to know from today’s Google IO: PaLM 2, Pixel Fold, AI everywhere
6:49 – VAST Data’s Platform Certified as Datastore for NVIDIA DGX SuperPOD
VAST Data has achieved certification for its Data Platform as a certified datastore for NVIDIA DGX SuperPOD, providing organizations with scalable and agile performance for modern AI workloads. The solution makes use of VAST’s shared-everything architecture with multi-tenancy and quality of service controls, and is the first network-attached storage device to be approved by NVIDIA. Since you’ve seen a lot of VAST Data and NVIDIA at Field Day events in the past, including at AI Field Day, what do you think of this announcement?
Read More: VAST Data Achieves NVIDIA DGX SuperPOD Certification, Offering Simple and Scalable Enterprise File Services to Supercharge Generative AI Infrastructure
10:04 – Microsoft and AMD are Teaming Up
Microsoft uses tens of thousands of Nvidia GPUs in its data centers and probably pays a pretty penny for that privilege. Bloomberg reports that the company is dedicating engineering resources to assist AMD in the development of its GPUs, to serve as a counterweight to Nvidia’s dominance. Microsoft has had similar success offering an AMD version of Azure VMs at slightly discounted prices.
Read More: Microsoft and AMD are reportedly teaming up to combat Nvidia’s AI dominance
12:42 – Solidigm’s D5-P5430 High-Density Data Center SSD
Solidigm has introduced the D5-P5430, a new QLC SSD optimized for mainstream and read-intensive workloads. The SSD offers high storage density and equivalent read performance to TLC SSDs, making it suitable for applications such as email, object storage, and video-on-demand. Solidigm claims that it reduces total cost of ownership by up to 27%, increases storage density by 1.5 times, and lowers energy costs by 18%. We expect to see the D5-P5430 across a wide range of server and storage configurations, since it is designed to address data center issues like power efficiency and infrastructure sustainability. How should we view this new SSD?
Read More: Solidigm Introduces the D5-P5430 — A Data Center SSD with Exceptional Density, Performance, and Value
Watch our Roundtable Discussion: Bringing Next-Generation SSD to Enterprise and Cloud with Solidigm and Supermicro
15:47 – AI Controversies Hit Enterprise IT
Controversy over AI, from ChatGPT to Bard, has been all over the news, and the impact of Large Language Models is hitting enterprise IT. We’re seeing litigation against GitHub, Microsoft, and OpenAI just as the US Congress heard testimony from OpenAI CEO Sam Altman. And of course IT pros everywhere are grappling with the various ways their jobs will be affected by AI. Let’s take a closer look at AI in the enterprise.
Read More: GitHub, Microsoft, OpenAI fail to wriggle out of Copilot copyright lawsuit
Read More: In Senate testimony, OpenAI CEO Sam Altman agrees with calls for an AI regulatory agency
Read More: Wikipedia:Large language models
27:27 – The Weeks Ahead
Mobility Field Day 9 – May 17-19, 2023
Cloud Field Day 17 – May 31 – June 1, 2023
The Gestalt IT Rundown is a live weekly look at the IT news of the week. It broadcasts live on Facebook every Wednesday at 12:30pm ET. To watch along, “Like” our Facebook page. Be sure to subscribe to Gestalt IT on YouTube for even more weekly video content.
Stephen Foskett: Welcome to the rundown! Each time we meet, we run down the IT news of the week with a variable degree of snarkiness. I’m your host, Stephen Foskett. Joining me today, filling in for Tom Hollingsworth, who’s at Mobility Field Day in California, is an esteemed co-host, Mr. Ned Bellavance. Welcome to the show!
Ned Bellavance: Very excited to be here to talk about some interesting tech stuff. But first, what day is it?
Stephen Foskett: It is actually, amazingly enough, National Pack Rat Day. And if you’re aware of my personal blog, you know that it’s called “Pack Rat.” So, today is, in fact, Stephen Foskett Day.
Ned Bellavance: That should be nationally recognized, I think. Another one that’s nationally recognized is Cherry Cobbler Day, which, let’s be honest, if you’re gonna eat a cobbler, is there any other choice besides cherry? And the answer is no, so whatever you said, the answer is no. But let’s break things down in terms of tech news.
Ned Bellavance: I heard a little rumor that data orchestrator Hammerspace has acquired a French start-up called RozoFS for erasure coding. What do you know about it, Stephen?
Stephen Foskett: Well, it’s interesting, Hammerspace. It’s a company that you’re very familiar with. They’ve come to Tech Field Day many times in the past, and we’ve dived deep into their technology over the years. We have not actually talked to RozoFS in the past, but one of the things about Hammerspace is that essentially the company is designed to make data appear everywhere or anywhere it’s needed. Their whole concept is data orchestration, which is something we’ll be talking about in some other IT contexts. I will include links to that in the show notes.
Stephen Foskett: Overall, one of the things about Hammerspace, though, is that it needs to be able to intelligently distribute data, intelligently protect data, intelligently move data, and of course, the modern way to do that is through erasure coding. So, this French start-up had been working on a new erasure coding technology called “Mojette transform.” I don’t know what that is, but it sounds very cool, and now it’s part of Hammerspace. The company apparently made this acquisition late last year but didn’t talk about it. In fact, they didn’t even talk about it to me until it was announced. So, I can’t wait to see what they do with this technology.
Stephen Foskett: I have to say, Hammerspace is really surprising us all with their growth and their market success. This is a company that is basically a scrappy bunch of industry veterans, and it has attracted a great group of people, people that I’ve known for years and years. And they’re really building something cool over there. So, if you’re not aware of Hammerspace, give them a shot, and I can’t wait to see the next version with this RozoFS technology built into it.
Stephen Foskett: Ned, Google showed off some new devices like the pricey Pixel Fold and the Pixel Tablet. It also took a moment to reestablish itself as a leader in AI. They showed improvements to Bard and Duet AI, coming to Workspace to do things like help compose emails in Gmail or create slides. What’s your take on Google I/O 2023?
Ned Bellavance: Well, the interesting thing about this year’s Google I/O is that it was a single day. They really crammed a lot of information into that keynote and that single day, and it’s a three-hour keynote. So if you want to go watch it and you have three hours to burn, that’s fine. But there are also a bunch of really good summary YouTube videos out there that distill it down to the main points. Maybe they could’ve used their own AI to summarize the keynote.
Ned Bellavance: Speaking of AI, they did introduce a new version of their model, which is called PaLM 2, the successor to PaLM 1, which was just called PaLM. And this is really their answer to GPT-4 in terms of scope, scale, and capabilities. And now they’re working on integrating this new model with all of their products in any way that they can. For instance, they have a new product called Duet AI, which gives you the idea that when you’re working in these various office applications in their Workspace suite, the AI is there to help you, to work with you, to compose things like you’re singing a duet.
Ned Bellavance: They showed examples of wanting to compose an email and giving the bare-bones content and tone of voice, and then it will try to generate that email for you. And then you can use follow-up prompts to have it refine the contents of that email until it’s close enough that you just want to make a few quick edits. That was pretty cool. The other thing they talked about, as you mentioned, was being able to develop slides in Slides. That’s something that I am particularly interested in because I don’t enjoy creating slide decks. I usually have a picture and an idea in my head, but I don’t want to do all the clicking around to make it happen. So, it would be lovely if an AI could take over for me.
Ned Bellavance: As for the devices they introduced, there were a few of them. The big splashy one was probably the Pixel Fold, which is a foldable phone. It’s similar to what Samsung already has on the market, and interestingly, it has a front screen and then an internal screen that folds out to the full size, but it’s not that much thicker than the current Pixel phone. So if foldables interest you, you might want to check that out. They also introduced the Pixel tablet, their first entry into the tablet market in quite some time. And this one is basically like a Google Home or the Nest Max, but the screen is detachable. It comes with a speaker base that you can magnetically attach to, but then you can just pull it off and walk around and do tablet-y things with it. So if that’s of interest to you, I believe that one’s available right now, and the Fold is available for preorder.
Ned Bellavance: Turning our gaze back to storage and storage types. There’s a fascinating story about VAST Data achieving certification for its data platform as a certified data store for Nvidia DGX SuperPOD, which provides organizations with scalable and agile performance for modern AI workloads. I wasn’t familiar with this certification, so can you expand on what VAST is doing with Nvidia?
Stephen Foskett: This is actually a very exciting story. First off, let’s just say that VAST has been an incredibly impressive company. We’ve seen them at Tech Field Day many times. I want to call your attention, especially to Storage Field Day 23, where VAST presented and they also brought in Nvidia to talk about integration between the two companies. So, I wonder if that’s related to this.
Stephen Foskett: VAST Data makes this data platform, which is basically an incredibly scalable NVMe over Fabrics platform for enterprise that uses flash, including QLC flash, but offers unmatched scalability, reliability, and high availability. It’s a pretty great high-end storage platform. But Nvidia’s DGX SuperPOD has never supported an enterprise NVMe over Fabrics platform that I’m aware of prior to this. The DGX SuperPOD is basically a way to deploy your own AI supercomputer using Nvidia’s latest technology. But the DGX SuperPOD needs to have additional components, especially storage, to go along with it. So, all the companies in the storage industry have been jockeying to be a good partner for Nvidia for the SuperPODs because everybody wants to be able to power the latest AI supercomputers. And that’s what VAST is offering now.
Stephen Foskett: Essentially, the VAST Data platform is certified as a data store for DGX, meaning that enterprises or hyperscalers or whoever wants to have their own Nvidia SuperPOD can just go and deploy this thing in a kind of cookie-cutter, recipe-like fashion. It supports all the latest machine learning, deep learning workloads, and of course, includes all the VAST capabilities, including scalability and performance, like I mentioned earlier. Another aspect of VAST is that it provides improved multi-tenancy, which might be interesting for some cloud providers, along with encryption, authentication, and all that sort of thing. And one of the big things that the VAST platform offers is quality of service controls. In other words, you can basically specify that a workload needs a certain level of service, and it’s important to meet those throughput and latency requirements, and so on. Anyway, it’s a really impressive product on its own, and combining that with the Nvidia DGX platform is a great move for VAST. Honestly, this is just the next step in the world’s acceptance of VAST as the next great storage company.
Stephen Foskett: Ned, Microsoft uses tens of thousands of Nvidia GPUs in its data center and probably pays a pretty penny for that privilege. Bloomberg is reporting that they are dedicating engineering resources to assist AMD in the development of their GPUs to provide a counterweight, maybe to Nvidia’s dominance. Microsoft has had similar success with an AMD version of Azure VMs at a slightly discounted price. What would you expect to see here?
Ned Bellavance: My expectation is that, as you said, Microsoft is currently using tens of thousands of Nvidia GPUs today, probably for various tasks like we discussed in the previous article. And because of that, they’re shelling out a lot of money. When you think about how Microsoft is currently structured, the vast majority of their spending is on capital expenditures for Azure data centers. And they’re looking at those expenses and realizing that the Nvidia item on the list is getting a little out of hand. So maybe they need to prop up their biggest competitor, which is AMD. Maybe they need to give them a helping hand to get their GPU house in order so that they can buy GPUs from them or at least use it as a bargaining chip when negotiating with Nvidia. So, I think that’s certainly what we can expect to see in the next 2 to 3 years – the introduction of more AMD GPU-based offerings within Microsoft Azure.
Ned Bellavance: They’re also turning their attention to running AI workloads on-premises. When you consider the costs of on-premises deployments, especially for companies that may not want to use Nvidia and prefer an alternative hardware platform, Microsoft is interested in offering that option and tuning their software to work with either Nvidia or AMD GPUs. They saw similar success, as you mentioned, when they started offering AMD-based CPUs for their virtualization platform within Azure, charging only a slightly lower price. But if you’re an enterprise running thousands of VMs in Azure and you have the option to go with AMD, there’s probably not much of a downside that you’re seeing, and you’re saving some money on the total cost. So, I think Microsoft is using its vast resources to provide competition to Nvidia by supporting a competitor. I do feel a little bad for AMD because they always seem to be the bridesmaid, never the bride in these situations. But I guess for now, that’s their lot in life.
Ned Bellavance: The next topic we have on the docket is from Solidigm. They introduced a new SSD, the D5-P5430. It’s a QLC SSD optimized for mainstream and read-intensive workloads, and it appears to offer high storage density and read performance equivalent to TLC. Stephen, I’m assuming TLC doesn’t stand for tender loving care, so can you take us through what they’ve introduced and what it’s capable of?
Stephen Foskett: Yeah, I understand it may sound like a mishmash of letters and numbers, but in the storage industry, a lot of our terminology can be confusing to those unfamiliar with it. So bear with me as I explain. The D5-P5430 from Solidigm is their latest SSD. Now, SSDs have become quite popular and I assume you’re familiar with them. But why is this new one important? Well, it’s all about QLC, which stands for Quad-Level Cell. Solidigm is a leader in QLC, which means instead of storing one bit per cell in NAND flash, they can store four bits. This allows for much greater density and lower power consumption. And that’s what this SSD utilizes.
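The bits-per-cell math Stephen describes can be sketched in a few lines. The SLC/MLC/TLC/QLC bit counts below are standard NAND terminology; the rest is simple arithmetic — each extra bit per cell doubles the number of voltage levels the controller must distinguish, which is why higher-density cells have historically traded off speed and endurance:

```python
# Bits stored per NAND cell vs. the number of distinct voltage
# levels the SSD controller must reliably tell apart.
def voltage_levels(bits_per_cell: int) -> int:
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s)/cell -> {voltage_levels(bits)} levels, "
          f"{bits}x the capacity of SLC for the same cell count")
```

So QLC packs four times the data of SLC into the same silicon, at the cost of distinguishing 16 voltage levels per cell instead of 2 — which is the engineering problem Solidigm’s controller work addresses.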
Stephen Foskett: QLC has had a reputation for being slow or unreliable in the past, but that’s no longer the case. In fact, this is Solidigm’s fourth-generation QLC SSD, and they’ve shown that the technology works exceptionally well. So, what’s special about the P5430? It’s all about the form factor. These SSDs come in various form factors. There’s the traditional 2.5-inch form factor that looks like a regular hard drive. There’s also the E1.S form factor, which is a compact successor to what was known as the “ruler” form factor. And finally, there’s the E3.S form factor, which is the next generation. These form factors allow Solidigm to pack a large number of SSDs into a chassis.
Stephen Foskett: We recently released a video where we spoke with Supermicro about how they’re using the P5430 in their solutions to deliver incredible levels of storage capacity in a compact space. These new form factors provide better cooling, power efficiency, and density. When combined with the benefits of QLC technology and the fact that these SSDs perform exceptionally well and offer high capacities—up to 15 or even 30TB in the future—it’s clear that this is a revolutionary product, even if it may seem like a mishmash of letters and numbers.
Stephen Foskett: Ned, let’s take a closer look at one of the big stories that we’ve been hearing about—AI. It seems like everyone, even your mother, has heard about ChatGPT and how AI is coming for your job. There’s a lot of hype, but there’s also a lot of reality. The controversy around AI has been hitting the enterprise tech industry as well. ChatGPT is making its way into various corners of the world. We’ve heard about projects like Google’s Bard and others bringing AI into different domains. The impact of large language models is finally coming to the forefront: we’ve seen litigation against GitHub, Microsoft, and OpenAI, and the US Congress holding hearings with OpenAI CEO Sam Altman. So, what do we think about all of this? IT pros are trying to navigate this AI revolution and understand what’s going on. Let’s start with the Copilot case.
Ned Bellavance: Sure. I’ve been using the Copilot product since its launch. I use it to write code and scripts, and it’s been helpful in that regard. Currently, there’s a class-action lawsuit against GitHub, Microsoft, and OpenAI filed by anonymous parties. The lawsuit claims that what’s produced by Copilot’s underlying model, Codex, violates the software licenses associated with the repositories it scrapes code from. The allegation is that it violates those software licenses and the portions of the DMCA that require attribution for derivative works. This raises the question of how we categorize content created by AI—whether it’s a net new creative work, a derivative work requiring attribution, or something else entirely. The arguments are just beginning, and it will likely take a long time to work through the courts, potentially even reaching the Supreme Court.
Stephen Foskett: I want to talk about the congressional hearings for a moment because, although we try to separate ourselves from politics in IT, this is more about policy. Congress has had past missteps in regulating tech, such as encryption keys, social media regulations, and controversies over Section 230. Now, AI is the next topic to be addressed. Congress has invited Sam Altman from OpenAI to testify and is seeking input from other experts. It’s important that they learn about the technology before enacting regulations. The testimony had a surprisingly calm and friendly tone, considering the recent news stories about ChatGPT’s failures and hallucinations. Altman and IBM, who also testified, agreed that regulations should be in place and that they would cooperate with them. They emphasized the goal of using AI to help people and create new services. However, it’s worth noting that these are for-profit companies competing to utilize AI for their own purposes, which can cause issues. The idea of government regulation is nice in theory, but there are challenges. At the moment, there’s a lack of understanding about the technology among lawmakers, and it moves faster than regulations can keep up. We should be cautious about regulations that heavily favor large companies and potentially stifle innovation. A light touch approach is preferable, with regulations written by those who understand the technology and its practical use cases.
Ned Bellavance: I agree with you. Realism is key. We can’t say yes or no to AI entirely. We’re seeing a wide range of applications, some useful and others not so much. Wikipedia actually has a policy that acknowledges the use of large language models for content generation. They recognize the need for responsible usage, requiring editors to fact-check, copy-edit, and take responsibility for the generated content. Enterprises should adopt similar policies to ensure responsible AI usage and prevent shadow AI from emerging within their organizations. While we can’t stop the advancement of AI, we need to be realistic and ensure that we use it appropriately and with proper oversight.
Stephen Foskett: I’m not opposed to AI. In fact, the first three seasons of our podcast, Utilizing Tech, were focused on AI. We know a lot about it, we’ve seen a lot of what it can do, and frankly, it can do some amazing things. Check out, for example, what we saw at Aruba Atmosphere recently in terms of network management. There are a lot of ways that AI is being used in security and in IT, as well as, of course, as you mentioned, assisting and helping people to get things done. We can’t not use this stuff, because frankly it’s there and it helps us, but we as technologists also have to be very realistic about what it can and can’t do.
Stephen: Thank you so much, Ned, for joining us for this episode of the Gestalt IT Rundown. Before we go, let’s take a look at what’s going on. As I mentioned, it’s Mobility Field Day 9, so I urge you to check that out. We’ve got Arista, Celona, Cisco, Fortinet, Juniper, NetAlly, and Ruckus presenting today through Friday. And of course, if you missed any of that, you can find it on LinkedIn and YouTube. Just use your favorite search engine for Mobility Field Day 9. Also, we have Cloud Field Day coming up in a few weeks, so we’ll have companies like Morpheus Data, RackN, Zerto, JetStream, and somebody named Ned Bellavance, I think, is going to be there. Also, we’re going to be at Cisco Live US. We’ll hear from Cisco, BackBox, Opengear, and others. Check out Tech Field Day dot com to learn more about that. Ned, can you tell us a little bit about yourself and where we can find more from you?
Ned: Sure. My name is Ned Bellavance. The easiest way to find me is to go to my website, NedInTheCloud.com, or find me on LinkedIn. That’s probably the best place to find me now. I’ve been using less and less Twitter, to be honest.
Stephen: Yeah, me too, actually. You can find me on LinkedIn as well. That seems to be the big site that I’m using a lot now, as well as on Mastodon. You’ll find me there too. Well, thanks so much for watching the Gestalt IT Rundown. You can catch new episodes of the Rundown every Wednesday as a YouTube video, or you can find us in your favorite podcast application as an audio podcast. We’ll be back next Wednesday to talk about the IT news of the week that was. Until then, for myself, for Ned Bellavance, for the gallivanting Tom Hollingsworth, and all of us here in the Gestalt IT family, thanks for watching. Have a great week, and we’ll see you next time.