Intel Sets Ice Lake Apart From AMD EPYC With AI, Systems Approach

The chipmaker launches its long-anticipated third-generation Xeon Scalable CPUs, which feature up to 40 cores and come with new acceleration capabilities for AI and cryptography as well as new security features. Intel says the new acceleration improvements make Ice Lake faster than AMD’s new third-generation EPYC chips across certain workloads for AI, cloud and high-performance computing.

Intel is fighting against the rise of AMD’s EPYC processors with a new line of Xeon Scalable processors featuring dedicated acceleration for AI and cryptography, but the semiconductor giant said the bigger advantage comes from its wider product portfolio and software investments.

The Santa Clara, Calif.-based company Tuesday revealed its new third-generation Xeon Scalable CPUs, code-named Ice Lake, the first server processor line to use Intel’s troubled 10-nanometer manufacturing process that faced several years of delays.

[Related: 4 Big Changes Coming To Next-Gen Intel CPUs]

The new Ice Lake processors bring Xeon’s maximum core count to 40 from the previous generation’s 28. They also come with new workload accelerators for AI and cryptography, and they support Intel Software Guard Extensions and Total Memory Encryption for making applications and systems more secure. In addition, each socket supports up to 6 TB of memory capacity, split between 4 TB of DRAM and 2 TB of Intel Optane Persistent Memory, across eight memory channels, plus 64 lanes of PCIe 4.0 connectivity.

The processors will be made available in more than 250 server designs from OEMs and ODMs, including Cisco Systems, Dell Technologies, Lenovo, Hewlett Packard Enterprise and Supermicro. Intel said all the top cloud service providers, including Amazon Web Services and Microsoft Azure, will support the CPUs as well. The new lineup also has support from 15 major players in the network infrastructure space and from more than 20 labs and service providers in the high-performance computing (HPC) space.

One key phrase Intel is using to promote the new Xeon Scalable CPUs is “performance made flexible,” an emphasis on the lineup’s applicability to a wide range of workloads, including cloud computing, 5G network infrastructure, IoT, AI and HPC.

“From improving the performance of business applications that are run in the cloud to 5G network performance to bringing greater levels of compute out to that intelligent edge, we really see a huge amount of opportunity with third-gen Xeon Scalable, the processor and the platform, and we think that our customers are going to see this flexible performance really delivered for that as well,” said Lisa Spelman, corporate vice president of Intel’s Xeon and Memory Group.

The release comes after AMD last month launched its third-generation EPYC processors, code-named Milan, which the rival chipmaker has said are two times faster than Intel’s second-generation Xeon Scalable processors across key workloads. Intel has previously said its 10nm process is on par with TSMC’s 7nm process, which AMD started using in 2019 for its EPYC CPUs. AMD has considered that two-year process lead a major advantage for its server chip lineup, one that helped grow its x86 server CPU market share against Intel to 7.1 percent at the end of 2020.

Now Intel is hitting back against AMD’s growing industry support, saying that the new Ice Lake CPUs are faster than its rival’s Milan chips for workloads that can take advantage of the new AI, HPC and cryptography accelerators only found within Intel’s new processors. Intel said the new gains were made despite AMD continuing to have a higher maximum core count of 64 for its EPYC processors.

“They really show the benefit that you don’t necessarily need more cores. You can deliver even better performance with fewer cores with software that’s optimized for the workload acceleration instructions,” said Dave Hill, senior director of data center performance at Intel.

For HPC workloads, when compared with AMD’s new 64-core EPYC 7763, Intel said its new 40-core Xeon Platinum 8380 is 18 percent faster for the LINPACK benchmark used to measure the world’s top supercomputers. It’s also 27 percent faster based on a geomean of NAMD benchmarks used for life sciences. In the Relion benchmark for cryogenic electron microscopy, the processor is 32 percent faster. And in the financial services realm, it’s 50 percent faster for the Monte Carlo benchmark.
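For readers unfamiliar with LINPACK, the benchmark measures how fast a machine can solve a dense system of linear equations. The snippet below is a minimal Python sketch of that workload for illustration only; it uses NumPy rather than the official HPL implementation that supercomputer rankings are based on, and the matrix size n is an arbitrary choice.

```python
# Minimal LINPACK-style sketch: time a dense linear solve and estimate GFLOPS.
# Illustrates the kind of work the benchmark measures; not the official HPL code.
import time
import numpy as np

n = 4096  # arbitrary matrix size; real HPL runs use far larger problems
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(a, b)          # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3 + 2 * n**2  # standard LINPACK operation count
print(f"{flops / elapsed / 1e9:.1f} GFLOPS in {elapsed:.2f} s")
```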

These HPC performance gains all rely on the AVX-512 instruction set, which has also been present in the past two generations of Xeon Scalable processors.
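Whether an application can benefit from those instructions depends on the CPU actually exposing them. On Linux, the kernel reports supported features in /proc/cpuinfo; the small helper below is a hypothetical sketch of one way to check for the AVX-512 foundation and VNNI flags (flag names follow the kernel’s conventions).

```python
# Quick check (Linux only) for AVX-512 support via CPU feature flags.
def cpu_flags() -> set:
    """Parse the feature-flags line from /proc/cpuinfo."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx512f", "avx512_vnni"):  # foundation and VNNI flags
    print(f"{feature}: {'yes' if feature in flags else 'no'}")
```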

For cloud workloads, Intel relied on Ice Lake’s new Crypto Acceleration instructions to deliver 103 percent and 220 percent performance gains against AMD’s top Milan CPU on public key cryptography workloads for OpenSSL ECDHE x25519 and OpenSSL RSA Sign 2048, respectively. As for web microservices, Intel relied on a mix of AVX-512 instructions and architectural improvements to deliver a 200 percent performance improvement using the CloudXPRT benchmark.
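To make the first of those workloads concrete: OpenSSL’s ECDHE x25519 benchmark measures ephemeral Diffie-Hellman key agreement over Curve25519, the handshake step that dominates TLS connection setup. A minimal sketch of that primitive, using the third-party Python cryptography package rather than OpenSSL’s own speed tool, looks like this:

```python
# Sketch of the X25519 key agreement that the OpenSSL ECDHE benchmark exercises.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Each side generates an ephemeral key pair (the "E" in ECDHE).
server_key = X25519PrivateKey.generate()
client_key = X25519PrivateKey.generate()

# Exchanging public keys lets both sides derive the same shared secret.
server_secret = server_key.exchange(client_key.public_key())
client_secret = client_key.exchange(server_key.public_key())
assert server_secret == client_secret
```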

For AI workloads, Intel used a mix of its DL Boost instruction set and software optimizations to deliver a range of triple-digit performance gains for Ice Lake against AMD’s top Milan CPU in language processing (BERT-Large), object detection (SSD-mobilenetv1), image classification (ResNet50-v1.5) and image recognition (Mobilenet-v1) when performing the algorithms in real time. When those algorithms are run in batches, Intel said Ice Lake’s advantage grew to four digits across the latter three.
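DL Boost centers on AVX-512 VNNI instructions that speed up the INT8 arithmetic used in quantized inference. As a rough illustration of the kind of model transformation involved, the sketch below applies PyTorch’s dynamic INT8 quantization to a toy network; whether VNNI instructions are actually emitted depends on the CPU and the backend build, so treat this as an example of the workload class, not Intel’s benchmark setup.

```python
# Sketch of INT8 inference, the class of work DL Boost (AVX-512 VNNI) accelerates.
import torch
import torch.nn as nn

# A toy model standing in for a real inference network.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

# Convert Linear layers to INT8 weights with dynamically quantized activations.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 10])
```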

Spelman said AI is “one of the fastest-growing workloads on the planet” and that the improved AI acceleration capabilities of the new Xeon Scalable processors show they can serve as a “great alternative” to GPUs and other kinds of dedicated accelerators.

“While there’s places where GPU acceleration can make sense and can help drive a workload to the next level, it is not required as a de facto standard in order to advance your AI initiatives,” she said. She cited Burger King as an example, saying that the fast-food giant is relying on a CPU server cluster powered by Xeon Scalable processors for a drive-thru recommendation engine.

Compared with Intel’s second-generation Xeon Scalable processors, the new Ice Lake chips are up to 50 percent faster for latency-sensitive workloads in the cloud, up to 62 percent faster for 5G network workloads, up to 56 percent faster for image classification inference, up to 57 percent faster for vaccine research modeling and up to 74 percent faster for language processing inference.

But Spelman emphasized that data center performance and efficiency are not just about the CPU. Servers also rely on other components, like solid state drives and memory, to improve the performance and total cost of ownership of data centers.

As such, Intel complemented the Ice Lake launch with new products in other categories across its portfolio: the Intel Optane Persistent Memory 200 Series, the Intel Optane SSD P5800X and Intel SSD D5-P5316 NAND SSDs, the Intel Ethernet 800 Series network adapters and the Intel Agilex FPGA, which is designed to provide adaptable performance for various workloads.

“We know that customers need more than just their CPU to manage their key workloads, their growth workloads, and we’re trying to take this more system-level solutions approach where you’ve got your CPUs, your XPUs, your memory investments, Ethernet, etc., the whole gamut to solve these increasingly complex problems that our customers have across their distributed environments,” Spelman said.

In one example of how Intel’s system-level solutions approach can be a major advantage, Spelman said a server running virtual desktop infrastructure can host 87 percent more virtual machines per node when it combines Intel’s Xeon, Ethernet, Optane Persistent Memory and NAND SSD products.

Many workloads also benefit from continuous software optimizations made by Intel, which can yield order-of-magnitude performance improvements. For example, with Oracle Exadata X8M systems, Intel said it has been able to lower latency for database reads by 10 times.

“A key change in focus and strategy for us is to build upon the performance we deliver as the product moves into deployment versus having our entire software team and our customer solutions team move on to the next generation,” Spelman said.

Another key differentiator for Intel, Spelman said, is the work the company does to optimize and verify server reference architectures for workloads with partners. The company recently told CRN that it had generated $1 billion in revenue from Intel Select Solutions, its server verification program, through June of last year and that over 150 verified partner solutions were available by the end of last year.

“We’re building the platform to deliver that flexible performance to our customers while at the same time enabling customers to deploy solutions quickly and at scale,” she said.

Alexey Stolyar, CTO of International Computer Concepts, a Northbrook, Ill.-based system integrator, said while he thinks Intel will continue to do well in latency-sensitive workloads, some of Ice Lake’s success will rely on the extent to which customers use software that is optimized for the new processor line’s accelerators for AI, HPC and cryptography.

“AVX-512, there’s not a lot of applications that use it,” he said, referring to one of the instruction sets that allowed Intel to claim certain performance advantages against AMD.

However, Stolyar said, it’s good for Intel to release Ice Lake and show that it’s making improvements across various features, from memory bandwidth to core count.

“I’m happy they finally launched it,” he said, adding that some customers are already interested, particularly for networking and latency-sensitive workloads.

But overall, Stolyar said, Ice Lake may be a harder sell due to the large improvements AMD has made with its latest EPYC processors.

“I think they’re going to have a hard time with this generation,” he said. “I don’t think that many people are going to adopt it as they would want, and I think they know that.”

Dominic Daninger, vice president of engineering at Nor-Tech, a Burnsville, Minn.-based high-performance computing system integrator, said Intel’s AI enhancements, which stem from the chipmaker’s AVX-512 support, can make a major difference if software vendors support them. Some of his customers go out of their way to ask for Intel CPUs because of such features, he added.

“We’re seeing more and more of that show up,” he said, adding that some simulation and modeling applications take advantage of AVX-512 instructions.

However, Daninger admitted that the market for applications that can take advantage of Intel’s AI and HPC accelerators isn’t as big as it could be.

“It’s not an avalanche by any stretch,” he said, “but there’s more today than there was two years ago.”

Some HPC applications, like computational fluid dynamics, are more dependent on higher core counts, which is an area where AMD continues to have an advantage, Daninger said.

“Frankly, I wouldn’t be too surprised to see AMD gain some market share, but Intel still has a huge majority over AMD at this point, from everything I read,” he said.