Intel’s AI Shines Competitively


In the latest MLPerf Training 3.0 benchmark, both the Habana® Gaudi®2 deep learning accelerator and the 4th Gen Intel® Xeon® Scalable processor posted strong training results, as announced by MLCommons. These results challenge the prevailing industry narrative that generative AI and large language models (LLMs) can run only on Nvidia GPUs; Intel’s AI solutions offer competitive alternatives. Sandra Rivera, Intel’s executive vice president and general manager of the Data Center and AI Group, emphasized the value that Intel Xeon processors and Gaudi deep learning accelerators bring to AI workloads: Xeon processors, with built-in accelerators, excel at running volume AI workloads on general-purpose hardware, while Gaudi delivers impressive performance on LLMs. Intel’s scalable systems, coupled with optimized open software, enable customers and partners to deploy a wide range of AI-based solutions across the data center, from the cloud to the intelligent edge.
The MLPerf Training 3.0 results highlight the performance and cost advantages of Intel’s AI solutions. Gaudi2, the Habana deep learning accelerator, demonstrated remarkable time-to-train results on the 175-billion-parameter GPT-3 model. It delivered near-linear scaling efficiency and strong training results on computer vision and natural language processing models. The results also reflected growing software maturity, with performance gains over previous submissions.
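"Near-linear scaling" has a simple quantitative meaning: throughput grows almost in proportion to the number of accelerators. A minimal sketch of how that efficiency is computed, using hypothetical throughput numbers (not MLPerf-published figures):

```python
def scaling_efficiency(base_accel, base_throughput, scaled_accel, scaled_throughput):
    """Scaling efficiency = achieved speedup / ideal (linear) speedup.

    1.0 means perfectly linear scaling; values just below 1.0 are
    "near-linear".
    """
    ideal_speedup = scaled_accel / base_accel
    actual_speedup = scaled_throughput / base_throughput
    return actual_speedup / ideal_speedup

# Hypothetical example: going from 8 accelerators at 100 samples/s
# to 64 accelerators at 760 samples/s.
eff = scaling_efficiency(8, 100.0, 64, 760.0)
print(f"{eff:.0%}")  # -> 95%
```

Efficiency below 100% typically reflects communication and synchronization overhead as more accelerators join the job.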
On the CPU front, 4th Gen Xeon processors with Intel AI engines demonstrated their capability to build universal AI systems. These processors deliver exceptional deep learning training performance, enabling data pre-processing, model training, and deployment in a single system. The MLPerf results further validate Intel Xeon processors as versatile options for deploying AI on general-purpose systems, eliminating the need for dedicated AI infrastructure.
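The "single system" claim above means pre-processing, training, and inference can all run in one process on a general-purpose CPU, with no dedicated accelerator. A toy sketch of that end-to-end flow, using a plain logistic regression trained by gradient descent (illustrative only; this is not Intel's software stack, and the dataset and hyperparameters are made up):

```python
import math
import random

random.seed(0)

# 1. Data pre-processing: build a toy 1-D dataset and scale features to [0, 1].
xs = [random.uniform(0.0, 10.0) for _ in range(400)]
ys = [1.0 if x > 5.0 else 0.0 for x in xs]   # label: "is x above 5?"
xs = [x / 10.0 for x in xs]                   # min-max scaling

# 2. Model training: full-batch gradient descent on logistic loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    gw = gb = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(xs)
    b -= lr * gb / len(xs)

# 3. Deployment: serve predictions from the same process.
def predict(raw_x):
    p = 1.0 / (1.0 + math.exp(-(w * (raw_x / 10.0) + b)))
    return 1 if p > 0.5 else 0

print(predict(8.0), predict(2.0))
```

The point is architectural rather than algorithmic: every stage runs on the same general-purpose hardware, which is the deployment pattern the Xeon results are meant to validate at much larger scale.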
The results published by MLPerf reaffirm Intel’s commitment to AI performance and its ongoing support for the growing demand for generative AI and LLMs. The Gaudi2 platform’s software support continues to mature, offering optimized scaling efficiency for LLMs. Furthermore, Intel’s focus on software enhancements, including the introduction of new features and data types, promises even greater performance improvements in the future.
MLPerf, as the leading benchmark for AI performance, ensures fair and repeatable comparisons across solutions. Intel’s continued submissions, surpassing the 100-submission milestone, underline its dedication to transparency and adherence to industry-standard deep-learning ecosystem software. The results also showcase the efficiency and scalability enabled by Intel Ethernet 800 Series network adapters, leveraging the Intel Ethernet Fabric Suite Software based on Intel oneAPI.
In conclusion, Intel’s impressive performance in the MLPerf Training 3.0 benchmark demonstrates the competitive options it offers to customers seeking efficient and scalable AI solutions. By breaking free from closed ecosystems, Intel empowers enterprises to unlock the full potential of generative AI and large language models.
