
Looking Forward to the Year of AI with Frederic Van Haren and Mark Beccue | Utilizing Tech 06×00

In the two years since this podcast last focused on AI, OpenAI added a simple conversational interface to their deep-learning model and AI exploded into society. This season of Utilizing Tech focuses on practical applications for artificial intelligence and features co-hosts Frederic Van Haren and Mark Beccue along with Stephen Foskett. In this episode, we return to AI with a look back at the incredible impact that generative AI has had on society. Humans traditionally interfaced with machines using keyboard and mouse, touch, and gesture, but ChatGPT changed all that by enabling people to communicate with computers in natural language. Yet this is just one of many potential AI model components that can be used to build business applications. The true power of generative AI will be realized when these other components appear, and when they are able to integrate custom data. We will also see innovation in the AI infrastructure stack, from GPUs to NPUs to CPUs, storage and data platforms, and even client devices.


What the Past and Present Tell Us About the Future of AI: The New Season of the Utilizing Tech Podcast

The world is on the cusp of seismic changes. We are entering the age of AI, where human intelligence is replicated by machines. AI-powered technologies have opened up prolific possibilities for accelerating human efforts, and even outdoing them. Silicon Valley executives are hitching their hopes and fortunes to the tremendous promise of AI.

As a vigorous debate forms over what lies beyond this point, this season of the Utilizing Tech Podcast looks back at the journey so far and anticipates what AI will look like some years out. Host Stephen Foskett and this season's co-hosts – Frederic Van Haren, Founder of HighFens, and Mark Beccue, AI Research Director at The Futurum Group and host of The AI Moment – share their thoughts on the AI disruption.

Classic Tech Disruption

The conversation around AI is heavily underpinned by generative AI. With the release of ChatGPT, interest in AI has skyrocketed beyond anything anybody had imagined, unleashing excitement and fear in equal measure among the public. Stephen Foskett points to this as a Macintosh-like moment that forever changed the way people interact with computers. Computers went from being a black box to a coveted object, sparking sweeping curiosity and universal adoption.

History is repeating itself with the debut of conversational AI. ChatGPT's skills are not only a source of curiosity and awe; for the first time, they give ordinary users access to something long regarded as forbidding and abstruse.

But AI existed long before ChatGPT was born; it has been used in fields like aviation for decades. Why, then, is the impact of generative AI so profound? “What changed everything was the brilliant idea by the OpenAI people to put a conversational interface in front of the models, which meant that people could use natural language to manipulate them to make them do something,” says Beccue.
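
To make that concrete, the conversational layer Beccue describes is, from a developer's point of view, a remarkably thin wrapper over the model. Below is a minimal sketch of a single natural-language exchange using the OpenAI Python client; the model name and prompt are illustrative assumptions, not details from the episode.

# A single natural-language exchange with an OpenAI model.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice, not named in the episode
    messages=[
        {"role": "user", "content": "Summarize transfer learning in one sentence."},
    ],
)
print(response.choices[0].message.content)

The hard part – the model itself – is hidden behind an interface that anyone who can type a sentence can use.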

But democratizing AI is not its singular achievement. As users cozied up to the shiny new technology, stories of hallucinations and errors flowed freely. “When you have something come in that’s this disruptive, it sparks innovation, as well as a lot of mess. It’s classic tech disruption,” he adds.

Peek under the hood and you will find that chatbots like ChatGPT are powered by large language models, or LLMs. They are what make it the tech equivalent of a Swiss Army knife. Before ChatGPT, these models were proprietary assets that enterprises owned and trained with the capabilities they required for their individual business cases.

This process is extremely complicated and time-intensive, but above all, it is cost-heavy. “The model is so big and so complex that only a handful of organizations in the world can actually produce it,” comments Van Haren.

OpenAI popularized transfer learning, in which companies can repurpose a pre-trained model for their own tasks. The big tech companies have joined the trend, helping smaller peers build their AI technologies by sharing the pre-trained deep learning models that they funded.

“That brings a major acceleration factor to the market because now you can start with something that took thousands of hours of GPU. I think that’s important not only from an acceleration standpoint, but also from a usage standpoint because by providing a prebuilt model, they are also sharing data,” says Van Haren.
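
As a rough illustration of how that acceleration plays out in practice, here is a sketch of transfer learning using the Hugging Face transformers library – an assumption on our part, since the episode names no specific tooling. The pre-trained weights embody the thousands of GPU hours Van Haren mentions; the enterprise only pays for a short fine-tuning run on its own labeled data.

# Transfer learning sketch: start from a model someone else pre-trained,
# then fine-tune briefly on task-specific data. Library and dataset choices
# are illustrative assumptions, not details from the episode.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "distilbert-base-uncased"  # weights that cost many GPU hours to produce
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# A small labeled dataset standing in for an enterprise's own business data.
data = load_dataset("imdb", split="train[:2000]")
data = data.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                        padding="max_length"), batched=True)

# Fine-tuning is short because the base model already "knows" the language.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=data,
)
trainer.train()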

Into the Future

For the average user, the human-like chatbot writes fluid prose and poetry on endless topics. But something far more important happens behind the scenes: it interprets and follows written prompts. It can read through bad grammar, poor syntax, misspellings, and complex commands, turning out decent results – a feat that Google’s interface took years of training to achieve.

The success of ChatGPT is hard evidence that LLMs are the portkeys to AI. LLMs are to AI what peripherals are to computers, says Van Haren. They are the interface between the machine and the user – a building block, but not an end result.

So is conversational AI only destined to do small things like generating rough first drafts or writing business proposals, or is it capable of going beyond those prosaic tasks and doing something greater?

The panel pushed back on the fervor around using AI to write emails, proposals, sales copy, and the like. It is the last thing you want to do with it, they opine. Tools like ChatGPT may be capable of pulling off many cool tricks, but writing business content that is personal, persuasive, and precise is not among their strengths.

There is still uncertainty around what tasks AI can take on at the enterprise level. “I think that’s still undetermined. There are some things that are starting to surface that make sense, but there’s a lot that don’t. There seems to be a lot of promise around shortening the code development process if you think about how useful that is, and what the ROI tends to be,” points out Beccue.

Experts maintain that large language models are built for learning and inference, and the business case these capabilities best serve is data analytics. Working with colossal repositories of data gives the algorithms the ability to find minute details and complex patterns in the data. That is one problem they beat humans at solving.
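
The episode prescribes no particular technique, but one hedged sketch of what that pattern-finding can look like in code is to embed free-form records with a language model and cluster them; the embedding model and cluster count below are assumptions for illustration.

# Surface patterns in free-form records: embed with a language model,
# then cluster. Model name and cluster count are illustrative assumptions.
from openai import OpenAI
from sklearn.cluster import KMeans

client = OpenAI()  # reads OPENAI_API_KEY from the environment

records = [
    "Customer reports login failures after password reset",
    "Invoice total does not match the purchase order",
    "App crashes when uploading large attachments",
    "Billing address rejected by the payment form",
]

# Turn text into vectors that capture meaning, not just keywords.
resp = client.embeddings.create(model="text-embedding-3-small", input=records)
vectors = [item.embedding for item in resp.data]

# Group similar records; at scale, these clusters are the "complex patterns"
# a human analyst would take far longer to spot.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
for label, record in sorted(zip(labels, records)):
    print(label, record)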

Enterprises have enthusiastically adopted AI over the past years, thanks in no small part to the success of generative AI. But this has also stoked concerns among smaller organizations around infrastructure costs. Building out AI systems requires extraordinary amounts of money and compute power – resources that only the industry behemoths control. A bootstrapped startup has no access to them.

“For enterprises to really lean in and for market adoption to accelerate, you have to have economies of scale. Price has to work. If you set aside the worry of training models with massive AI compute workloads, inference in itself is still big, and it’s ongoing. The estimates of cost are astronomical, and it won’t economically work. So, it has to change,” argues Beccue.

There are reasons to believe that, down the line, AI deployments will become reasonably cheap. Big players like Intel and NVIDIA are only starting to realize how well their top-of-the-line GPUs work for AI deployments. “They are in a great spot already. What we did notice is now, really rapidly, other players are coming in and saying that they have purpose-built hardware for AI workloads,” Beccue says.

This will likely open a gap in the market that vendors will swoop in to fill.

Wrapping Up

Experts believe that cost-effective AI may become a reality in the future. The hardware market is expected to reshape dramatically in the coming years to accommodate broader adoption of AI. As AI moves further into the mainstream, industries will find more ways to leverage it for business cases, fulfilling its true purpose: to aid and accelerate human efforts.

A new season of the Utilizing Tech Podcast, focusing on AI, is here on Gestalt IT. Follow the conversations and get insights on AI trends and markets. You can also follow Mark Beccue’s podcast – The AI Moment – to stay on top of the latest AI trends.

Podcast Information


Thank you for listening to Utilizing AI, part of the Utilizing Tech podcast series. If you enjoyed this discussion, please subscribe in your favorite podcast application and consider leaving us a rating and a nice review on Apple Podcasts or Spotify. This podcast was brought to you by Gestalt IT, now part of The Futurum Group. For show notes and more episodes, head to our dedicated Utilizing Tech Website or find us on X/Twitter and Mastodon at Utilizing Tech.

About the author

Stephen Foskett

Stephen Foskett is an active participant in the world of enterprise information technology, currently focusing on enterprise storage, server virtualization, networking, and cloud computing. He organizes the popular Tech Field Day event series for Gestalt IT and runs Foskett Services. A long-time voice in the storage industry, Stephen has authored numerous articles for industry publications, and is a popular presenter at industry events. He can be found online at TechFieldDay.com, blog.FoskettS.net, and on Twitter at @SFoskett.
