
Intel IPU – Unlocking the Power of Functional Isolation

Technology is advancing rapidly. The parade of innovations that have seen the light of day in just a few years shows how quickly advancements build on top of one another – and the footprint only continues to grow.

New Problems Demand New Solutions

Many of these next-gen technologies are developed in and delivered from the cloud. The cloud provides the ultimate infrastructure for innovation – on-demand resources, easy renting, and little to no housekeeping.

Fun fact: the glitzy cloud data centers are no different from private ones. Their racks are built on the same server architecture as enterprise data centers.

In the classic architecture, all of the software and services run off of the CPU, the brain of the computer server. Applications, infrastructure services, operating systems, hypervisors – the CPU caters to all.

There are significant drawbacks that make this blueprint unsuitable for the cloud. For one, it is designed for use by a single party.

Today, CSPs support a near-infinite number of parties and tenants out of their cloud data centers. Doing that amount of processing on the CPU is impractical. Not only do services have to constantly compete for resources and periodically take performance hits; the design also poses a problem for the cloud business model.

Cloud operators are in the business of offering compute-as-a-service. Naturally, they want everything not associated with that service removed from the CPU. At the top of their list are infrastructure services.

“There are two reasons for this,” says Thomas Scheibe, VP of solutions and business development at Intel. “One, you don’t want to burn the CPU that you want to make money off of. The other, you want to have a hardening around security because you don’t want to have to use the app running in the same space where you run your infrastructure services.”

Intel likens it to the difference between hotels and homes. If hotels were built on the same architectural design as homes, it would be a disaster.

Disaggregation as the Foundation for Infrastructure Offloading

The Intel Infrastructure Processing Unit, or IPU, provides a path for infrastructure offloading through disaggregation. Co-designed with Google, the IPU is a game-changing technology that alters the way processing happens within the server.

“IPU is a fun product with a bunch of networking and a bunch of compute – best of all worlds,” says Scheibe.

The IPU forges a design in which the CPU does not carry the full weight of processing. Instead, the load is shared between the CPU and an improved NIC (Network Interface Card), which is essentially what the IPU is.

Standard NICs do not have embedded cores. Straddling the categories of Ethernet NIC and AI-optimized NIC, the IPU is an Ethernet NIC with an embedded CPU complex that lets it take on additional work.

“The IPU is the most premium, flexible NIC you can think of – you have standard Ethernet path and all the flexibility of an embedded set of cores,” he says.

There are many advantages to this. First, all infrastructure services are offloaded from the CPU onto the IPU. This alone lowers server overhead, yielding performance gains that can be directed toward applications – the part of the business that generates revenue.

The other big advantage is that functional isolation gives CSPs complete control of the infrastructure.

“You can decouple what you want to do in terms of feature development on the infrastructure side, from what you do for customers that are actually using the main host.”

As tasks like networking, security, and storage are performed on the IPU, a provider also gains infrastructure acceleration through the IPU's built-in hardware accelerators.

“You will build, in these IPUs, hardware accelerators, whether it’s encryption or decompression-compression. You basically have the chance to just add that little node because it’s silicon, and you just have this as an accelerator sitting right there.”
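The accelerators Scheibe describes are fixed-function silicon, but the kind of work they take off the host is easy to picture in software. Here is a minimal, illustrative Python sketch of a compress-then-checksum pipeline – the sort of per-payload transform that would run in an IPU's compression and integrity blocks instead of on host cores. The function names are hypothetical for illustration, not an Intel API:

```python
import zlib

def offloadable_transform(payload: bytes) -> tuple[bytes, int]:
    """Compress a payload and attach an integrity checksum.

    On a classic server this burns host CPU cycles; on an IPU the
    same transform would run in dedicated compression/CRC silicon.
    """
    compressed = zlib.compress(payload, level=6)
    checksum = zlib.crc32(compressed)
    return compressed, checksum

def verify_and_restore(compressed: bytes, checksum: int) -> bytes:
    """Reverse path: check integrity first, then decompress."""
    if zlib.crc32(compressed) != checksum:
        raise ValueError("corrupted payload")
    return zlib.decompress(compressed)

payload = b"tenant traffic " * 1000
blob, crc = offloadable_transform(payload)
assert verify_and_restore(blob, crc) == payload
print(f"{len(payload)} bytes shrunk to {len(blob)} bytes on the wire")
```

Every cycle a transform like this consumes is a cycle a tenant cannot rent – which is exactly why providers want it moved into accelerator silicon sitting next to the network path.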

The host compute cycles reclaimed from infrastructure tasks can then be used to power the workloads.

With the two parts separated, tenants have the CPU all to themselves. “Now you can look at these different use cases you can really go after,” Scheibe says.

Intended Areas of Use

Intel proposes a host of use cases for the Intel IPU. Key among them are cloud and edge.

“Where this is going and where we see a lot of interest is moving from the public deployments to more of a private cloud operations model deployment, and service edge where you actually want to have real separation of service and the edge.”

At the edge, IPUs bring network and compute resources to edge appliances, serving as virtually plug-and-play compute units. This eliminates the need to build big, dedicated edge servers.

Scheibe also highlights edge inference as a major use case, where the decoupling provides special benefits by isolating and securing the models running on the host from the rest of the infrastructure.

Be sure to check out Intel’s presentations from the Networking Field Day event to learn more about Intel IPUs.

About the author

Sulagna Saha

Sulagna Saha is a writer at Gestalt IT where she covers all the latest in enterprise IT. She has written widely on a range of topics. On gestaltit.com she writes about the hottest technologies in cloud, AI, security and more.

A writer by day and reader by night, Sulagna can be found busy with a book or browsing through a bookstore in her free time. She also likes cooking fancy things on leisurely weekends. Traveling and movies are other things high on her list of passions. Sulagna works out of the Gestalt IT office in Hudson, Ohio.
