Intel Accelerates AI Everywhere at Computex 2024; Redefines Compute Power, Performance and Affordability with New Xeon 6, Gaudi Accelerators and Lunar Lake Architecture to Grow AI PC Leadership

AI runs best on Intel across the compute continuum from the data center, cloud and network to the edge and PC.

NEWS HIGHLIGHTS

  • Launches Intel® Xeon® 6 processors with Efficient-cores (E-cores), delivering performance and power efficiency for high-density, scale-out workloads in the data center. Enables 3:1 rack consolidation, rack-level performance gains of up to 4.2x and performance per watt gains of up to 2.6x1.
  • Announces pricing for Intel® Gaudi® 2 and Intel® Gaudi® 3 AI accelerator kits, delivering high performance at as little as one-third the cost of comparable competitive platforms2. The combination of Xeon processors with Gaudi AI accelerators in a system offers a powerful solution for making AI faster, cheaper and more accessible.
  • Unveils Lunar Lake client processor architecture to continue to grow the AI PC category. The next generation of AI PCs – with breakthrough x86 power efficiency and no-compromise application compatibility – will deliver up to 40% lower system-on-chip (SoC) power when compared with the previous generation3.

TAIPEI, Taiwan — (BUSINESS WIRE) — June 3, 2024 — Today at Computex, Intel unveiled cutting-edge technologies and architectures poised to dramatically accelerate the AI ecosystem – from the data center, cloud and network to the edge and PC. With more processing power, leading-edge power efficiency and low total cost of ownership (TCO), customers can now capture the complete AI system opportunity.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20240603799554/en/

At Computex in Taipei, Taiwan, on June 4, 2024, Intel launched the Intel Xeon 6 processors with Efficient-cores (E-cores). For companies looking to refresh aging infrastructure to help reduce costs and free up space, Intel Xeon 6 with E-cores offers significant rack density advantages, enabling a 3-to-1 rack-level consolidation. (Credit: Intel Corporation)

“AI is driving one of the most consequential eras of innovation the industry has ever seen,” said Intel CEO Pat Gelsinger. “The magic of silicon is once again enabling exponential advancements in computing that will push the boundaries of human potential and power the global economy for years to come.”

More: Intel at Computex 2024 (Press Kit)

Gelsinger continued, “Intel is one of the only companies in the world innovating across the full spectrum of the AI market opportunity – from semiconductor manufacturing to PC, network, edge and data center systems. Our latest Xeon, Gaudi and Core Ultra platforms, combined with the power of our hardware and software ecosystem, are delivering the flexible, secure, sustainable and cost-effective solutions our customers need to maximize the immense opportunities ahead.”

Intel Enables AI Everywhere

During his Computex keynote, Gelsinger highlighted the benefits of open standards and Intel’s powerful ecosystem helping to accelerate the AI opportunity. He was joined by luminaries and industry-leading companies voicing support, including Acer Chairman and CEO Jason Chen, ASUS Chairman Jonney Shih, Microsoft Chairman and CEO Satya Nadella, and Inventec’s President Jack Tsai, among others.

Gelsinger and others made it clear that Intel is revolutionizing AI innovation and delivering next-generation technologies ahead of schedule. In just six months, the company went from launching 5th Gen Intel® Xeon® processors to introducing the inaugural member of the Xeon 6 family; from previewing Gaudi AI accelerators to offering enterprise customers a cost-effective, high-performance generative AI (GenAI) training and inference system; and from ushering in the AI PC era with Intel® Core™ Ultra processors in more than 8 million devices to unveiling the forthcoming client architecture slated for release later this year.

With these developments, Intel is accelerating execution while pushing the boundaries of innovation and production speed to democratize AI and catalyze industries.

Modernizing the Data Center for AI: Intel Xeon 6 Processors Improve Performance and Power Efficiency for High-Density, Scale-Out Workloads

As digital transformations accelerate, companies face mounting pressures to refresh their aging data center systems to capture cost savings, achieve sustainability goals, maximize physical floor and rack space, and create brand-new digital capabilities across the enterprise.

The entire Xeon 6 platform and family of processors is purpose-built to address these challenges, with both E-core (Efficient-core) and P-core (Performance-core) SKUs covering a broad array of use cases and workloads, from AI and other high-performance compute needs to scalable cloud-native applications. Both E-cores and P-cores are built on a compatible architecture with a shared software stack and an open ecosystem of hardware and software vendors.

The first of the Xeon 6 processors to debut is the Intel Xeon 6 E-core (code-named Sierra Forest), which is available beginning today. Xeon 6 P-cores (code-named Granite Rapids) are expected to launch next quarter.

With high core density and exceptional performance per watt, Intel Xeon 6 E-core delivers efficient compute with significantly lower energy costs. The improved performance with increased power efficiency is perfect for the most demanding high-density, scale-out workloads, including cloud-native applications and content delivery networks, network microservices and consumer digital services.

Additionally, Xeon 6 E-core has tremendous density advantages, enabling rack-level consolidation of 3-to-1, providing customers with a rack-level performance gain of up to 4.2x and performance per watt gain of up to 2.6x when compared with 2nd Gen Intel® Xeon® processors on media transcode workloads1. Using less power and rack space, Xeon 6 processors free up compute capacity and infrastructure for innovative new AI projects.
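The consolidation claim above can be sanity-checked with back-of-envelope math. The 3-to-1 ratio and the 4.2x performance and 2.6x performance-per-watt gains come from the release; the 300-rack starting fleet is an illustrative assumption, not an Intel figure.

```python
# Sketch of the rack-consolidation arithmetic. The 3:1 ratio and the
# 4.2x / 2.6x gains are from the release; the 300-rack fleet is assumed
# purely for illustration.

def consolidated_fleet(old_racks, consolidation_ratio=3.0,
                       perf_gain=4.2, perf_per_watt_gain=2.6):
    """Estimate rack count and relative fleet totals after a refresh."""
    new_racks = old_racks / consolidation_ratio
    # Total fleet performance relative to the old fleet:
    total_perf = (new_racks * perf_gain) / old_racks
    # Relative power draw: per-rack performance divided by the
    # performance-per-watt gain, scaled by the rack-count change.
    total_power = (new_racks * perf_gain / perf_per_watt_gain) / old_racks
    return new_racks, total_perf, total_power

racks, perf, power = consolidated_fleet(300)
print(f"{racks:.0f} racks, {perf:.2f}x performance, {power:.2f}x power")
```

Under these assumptions, a 300-rack fleet shrinks to 100 racks while total performance rises and total power falls, which is the "free up compute capacity and infrastructure" argument in numeric form.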

Fact Sheet: Intel Xeon 6 Processors

Providing High-Performance GenAI at Significantly Lower Total Cost with Intel Gaudi AI Accelerators

Harnessing the power of generative AI is now faster and less expensive. As the dominant infrastructure choice, x86 operates at scale in nearly all data center environments, serving as the foundation for integrating the power of AI while ensuring cost-effective interoperability and the tremendous benefits of an open ecosystem of developers and customers.

Intel Xeon processors are the ideal CPU head node for AI workloads and pair in a system with Intel Gaudi AI accelerators, which are purpose-built for AI workloads. Together, the two offer a powerful solution that integrates seamlessly into existing infrastructure.

As the only MLPerf-benchmarked alternative to Nvidia H100 for training and inference of large language models (LLMs), the Gaudi architecture gives customers the GenAI performance they seek with a price-performance advantage that provides choice and fast deployment time at a lower total cost of ownership.

A standard AI kit including eight Intel Gaudi 2 accelerators with a universal baseboard (UBB) offered to system providers at $65,000 is estimated to be one-third the cost of comparable competitive platforms. A kit including eight Intel Gaudi 3 accelerators with a UBB will list at $125,000, estimated to be two-thirds the cost of comparable competitive platforms4.
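The per-accelerator economics follow directly from these list prices. In the sketch below, the $65,000 and $125,000 kit prices and the one-third / two-thirds cost ratios come from the release; the implied competitor kit prices are derived from those ratios, not figures Intel quoted.

```python
# Arithmetic behind the kit pricing above. Kit prices and cost ratios
# are from the release; the implied competitor prices are derived
# estimates, not quoted figures.

ACCELERATORS_PER_KIT = 8  # both kits ship eight accelerators on a UBB

def kit_economics(kit_price, cost_ratio_vs_competitor):
    """Return (price per accelerator, implied competitor kit price)."""
    per_accelerator = kit_price / ACCELERATORS_PER_KIT
    implied_competitor_kit = kit_price / cost_ratio_vs_competitor
    return per_accelerator, implied_competitor_kit

gaudi2 = kit_economics(65_000, 1 / 3)   # Gaudi 2: one-third the cost
gaudi3 = kit_economics(125_000, 2 / 3)  # Gaudi 3: two-thirds the cost
print(gaudi2)
print(gaudi3)
```

On these numbers, a Gaudi 2 kit works out to $8,125 per accelerator against an implied competitor kit near $195,000, and a Gaudi 3 kit to $15,625 per accelerator against roughly $187,500.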

Intel Gaudi 3 accelerators will deliver significant performance improvements for training and inference tasks on leading GenAI models, helping enterprises unlock the value in their proprietary data. Intel Gaudi 3 in an 8,192-accelerator cluster is projected to offer up to 40% faster time-to-train5 versus the equivalent size Nvidia H100 GPU cluster and up to 15% faster training6 throughput for a 64-accelerator cluster versus Nvidia H100 on the Llama2-70B model. In addition, Intel Gaudi 3 is projected to offer an average of up to 2x faster inferencing7 versus Nvidia H100, running popular LLMs such as Llama-70B and Mistral-7B.

To make these AI systems broadly available, Intel is collaborating with at least 10 top global system providers, including six new providers who announced they’re bringing Intel Gaudi 3 to market. Today’s new collaborators include Asus, Foxconn, Gigabyte, Inventec, Quanta and Wistron, expanding the production offerings from leading system providers Dell, Hewlett Packard Enterprise, Lenovo and Supermicro.

Accelerating On-Device AI for Laptop PCs; New Architecture Delivers 3x AI Compute and Incredible Power Efficiency

Beyond the data center, Intel is scaling its AI footprint at the edge and in the PC. With more than 90,000 edge deployments and 200 million CPUs delivered to the ecosystem, Intel has enabled enterprise choice for decades.



