While this solution, built on the latest generation of Intel® hardware and VMware software, supports a variety of applications, the benefits for AI workloads like image classification and natural language processing (NLP) are particularly compelling. The built-in AI acceleration from Intel® Advanced Matrix Extensions (Intel® AMX) is specifically designed to accelerate the low-precision matrix math operations that underpin AI inference. Mainstream applications already running on vSAN and Intel Xeon processors, such as databases, analytics, business-critical and collaboration applications, and IT automation tools, are increasingly enhanced with AI algorithms and can benefit from Intel AMX. The result is a fully optimized pipeline on a single hardware and software platform that can scale from the data center to the cloud to the edge. Customers can scale AI everywhere by using the broad, open software ecosystem and unique Intel tools, and can leverage their large, valuable vSAN data stores on standard Intel Xeon processor-based servers while gaining the efficiency and performance of a built-in AI accelerator.
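As an illustrative sketch (not part of the solution described above), the following Python snippet shows one way to verify that a server's Xeon processor actually exposes Intel AMX before scheduling AI inference work on it. It assumes a Linux guest that exposes `/proc/cpuinfo`; the flag names (`amx_tile`, `amx_bf16`, `amx_int8`) follow the Linux kernel's cpuinfo conventions.

```python
def cpu_supports_amx(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if any Intel AMX feature flag appears in the CPU flags.

    Sketch only: assumes a Linux host; returns False when cpuinfo is
    unavailable (e.g., non-Linux systems) rather than raising.
    """
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return False  # cpuinfo not readable on this platform
    amx_flags = {"amx_tile", "amx_bf16", "amx_int8"}
    for line in text.splitlines():
        # The kernel lists CPU feature flags on lines beginning with "flags".
        if line.startswith("flags"):
            return bool(amx_flags & set(line.split()))
    return False

if __name__ == "__main__":
    print("AMX available:", cpu_supports_amx())
```

A check like this can be useful in virtualized environments, since a vSphere VM only sees AMX if both the underlying 4th Gen (or later) Intel Xeon Scalable processor and the configured virtual hardware version pass the capability through.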