Intel 2021 Sponsored Tech Note

What Makes the Latest Intel Xeon Platform an AI Workhorse?

On April 6th, 2021, Intel announced a new Xeon platform as the “only data-center processor with Built-in AI acceleration” and that it will bring Artificial Intelligence (AI) everywhere from the Edge to the Cloud. In this blog, I am looking at the announcement from an AI perspective focusing on the new 3rd Gen Intel Xeon Scalable processor.

An AI application’s performance is driven by how efficiently all the hardware components (processing, memory, network) work together and the ability for the software to take full advantage of the underlying hardware. I was pleasantly surprised to see that the announcement touched on all those facets.

Why Xeon?

Let us first have a look at why Intel believes customers will choose Xeon for AI. The three main reasons are:

  • People are already familiar with the Xeon architecture and the Intel ecosystem. This makes it easier to get up and running with AI and reduces the complexity of dealing with other architectures, such as GPU accelerators.
  • Intel has put significant effort and resources into expanding its software ecosystem with partners and optimizing end-to-end data science tools. This ecosystem allows developers to build and deploy AI solutions faster and with a lower TCO.
  • Performance comes from maximizing the synergy between hardware and software. First, the processors gained new AI capabilities and were further tuned for a wide range of workloads. Second, Intel-optimized software extensions enable applications to take full advantage of the processor’s capabilities.

Hardware AI Acceleration

When Intel talks about “built-in AI acceleration”, they are referring to Intel Deep Learning Boost (DL Boost) technology.

Even the most straightforward Deep Learning workflow requires the execution of billions of mathematical operations. It is not difficult to imagine that optimizing those operations could yield significant performance improvements. Some operations require only 8 bits of precision, while others require 16 or even 32 bits. The more bits involved in a multiplication, the longer it takes to execute.

DL Boost can perform multiplications at a lower precision while still accumulating results at a higher precision. The impact is that more multiplications fit into the same amount of time, and since this happens billions of times, the acceleration gains quickly add up.
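The idea can be sketched in a few lines of NumPy. This is a simplified illustration of the principle, not Intel's actual VNNI instructions: the inputs are stored at 8-bit precision, but each product is accumulated into a wider 32-bit integer so the exact sum is preserved.

```python
import numpy as np

# Illustrative sketch (not Intel's real intrinsics): emulate a DL Boost-style
# dot product where 8-bit inputs are multiplied and the products are summed
# in a 32-bit accumulator to avoid overflow.
activations = np.array([120, 45, 200, 17], dtype=np.uint8)  # 8-bit activations
weights = np.array([-3, 90, 12, -77], dtype=np.int8)        # 8-bit weights

# Widen each operand before multiplying so the accumulator stays exact.
acc = np.int32(0)
for a, w in zip(activations.astype(np.int32), weights.astype(np.int32)):
    acc += a * w

# Reference result computed entirely at wide precision.
reference = int(np.dot(activations.astype(np.int64), weights.astype(np.int64)))
print(acc, reference)  # the narrow-input path matches the wide-precision sum
```

The inputs occupy a quarter of the memory of 32-bit values, which is what lets the hardware pack more multiplications into each instruction while the wide accumulator keeps the result exact.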

The latest iteration of DL Boost shows up to a 1.74x performance improvement over the previous generation. Figure 1 depicts DL Boost support across generations, with more improvements expected in the future.

Figure 1 – DL Boost support over time

Software Optimization

To get the most out of hardware innovation, you need software integration. It is a real challenge to develop efficient pipelines that take you from data processing to model training and, finally, to model deployment. Consolidating those stages into a single platform improves both productivity and scalability.

The frequent innovations in AI make it difficult for developers to keep up with all the changes. A supported software ecosystem provides an abstraction layer, allowing developers to stay focused on solving their problems.

Intel understands the need for software optimizations and has a rich ecosystem of AI tools available to its customers. There are too many tools to list them all (Figure 2), but the following tools are worth a closer look:

  • Analytics Zoo delivers a unified analytics and AI platform for distributed TensorFlow, PyTorch, Keras, and Apache Spark.
  • For customers who want a Python-based toolchain, Intel has the oneAPI AI analytics toolkit.
  • Developers looking to deploy pre-trained models from Edge to Cloud should look at the OpenVINO toolkit.
Figure 2 – AI made flexible with Intel Software Optimizations


The latest Intel announcements make clear that Intel sees AI applications, from Edge to Cloud, as the future. Intel is pursuing two parallel tracks to increase its market share. The first is to consistently deliver innovative hardware with built-in acceleration that addresses the many AI challenges. The second is to keep investing time and resources in ecosystems that provide software optimization. This combination makes the Xeon a true AI workhorse and an AI enabler for Intel’s customers, and it is a solid strategy to build on for the future.

Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.  

About the author

Frederic Van Haren
