AI spans a wide range of workloads and use cases, from data pre-processing and classical machine learning to deep learning tasks such as natural language processing and image recognition. Intel® Xeon® Scalable processors with Intel® AI Engines combine powerful compute performance for the entire AI pipeline with built-in accelerators for specific AI workloads in machine learning, data analytics, and deep learning.
AI is pervasive and stretches across diverse and critical workloads. Classic machine learning (ML) and deep learning models are becoming basic building blocks of how business gets done, from core enterprise applications to automated voice attendants. Putting AI to work at scale depends on a lengthy development pipeline that flows from data pre-processing to training to deployment. Each step has its own development toolchains, frameworks, and workloads, all of which create unique bottlenecks and place distinct demands on computing resources. Intel Xeon Scalable processors feature built-in accelerators that can run the entire pipeline right out of the box and increase AI performance across the board. Intel® Accelerator Engines are purpose-built, integrated accelerators that support the most demanding emerging workloads.