Intel Launches 3rd-Gen Xeon Scalable CPUs: For 4, 8 Sockets Only

'Certainly in the segments that we compete in, it's really, really niche,' one Intel partner says of four- and eight-socket servers, which are the only server types being supported with the new Xeon Scalable processors. 'We just see far more opportunities with the two socket.'

Intel is taking a bifurcated approach with the launch of its new third-generation Xeon Scalable processors, releasing 14-nanometer parts that target only four- and eight-socket servers first and following up with 10nm SKUs for one- and two-socket servers later this year.

The Santa Clara, Calif.-based company announced the new third-generation Xeon Scalable processors, code-named Cooper Lake, on Thursday, pitching the 11 SKUs as a good fit for data-intensive artificial intelligence workloads, in part because of a new instruction set that boosts inference and training.

[Related: Top Intel Chip Design Exec Jim Keller Abruptly Resigns]

The new processors were announced alongside other data center products, including second-generation Optane persistent memory, new Intel SSDs and a new AI-focused FPGA.

While Cooper Lake brings new performance gains over Intel's second-generation Xeon Scalable processors, it represents a limited market opportunity for channel partners. That's because four- and eight-socket servers made up a mere 5.1 percent of the x86 server market in the United States and Canada last year, according to research firm IDC.

One- and two-socket servers, on the other hand, present the bulk of the opportunities for partners, having represented 13 percent and 81.8 percent, respectively, of last year's x86 server market in the U.S. and Canada, according to IDC. Sebastian Lagana, a researcher at IDC, said two-socket servers have historically served the "bulk of compute needs for most enterprises."

"Certainly in the segments that we compete in, it's really, really niche," said Dominic Daninger, vice president of engineering at Nor-Tech, a Burnsville, Minn.-based Intel system builder partner that focuses on high-performance computing. "We just see far more opportunities with the two socket."

An executive at another Intel system builder partner, who asked not to be identified, agreed that four- and eight-socket servers are a niche market and said the higher core counts of AMD's EPYC processors for one- and two-socket servers are one reason that will continue to be the case.

"The core counts got so high with AMD EPYC dual socket, where you can get 128 cores in two chips. That's why nobody did a four-socket version," the executive said.

While Cooper Lake may represent a smaller opportunity than previous Xeon Scalable launches, Intel said the new processors have already received support from hyperscalers like Alibaba Cloud, Baidu, Facebook and Tencent Cloud. Several OEMs are also releasing servers supporting Cooper Lake, including Gigabyte, Hewlett Packard Enterprise, Inspur, Lenovo and Supermicro.

Cooper Lake Specs, Features And Pricing

The 11 new Cooper Lake processors cover nearly the entire stack of Xeon Scalable processors, from Xeon Platinum to Xeon Gold, supporting up to six channels of DDR4-3200 memory with ECC support, up to 3.1 GHz in base frequency, up to 4.3 GHz in single-core turbo frequency and up to 38.5 MB of L3 cache. Other features include PCIe 3.0 connectivity, Intel Ultra Path Interconnect support, Intel Turbo Boost 2.0 Technology and Intel Hyper-Threading Technology.
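
For a sense of what the six-channel DDR4-3200 support works out to, the short Python sketch below does the back-of-the-envelope bandwidth arithmetic. The 8-byte channel width is the standard DDR4 bus width, an assumption on our part rather than a figure from Intel's materials.

```python
# Back-of-the-envelope peak memory bandwidth for one Cooper Lake socket.
# Assumes 6 DDR4-3200 channels (from the article) and the standard 64-bit
# (8-byte) DDR4 channel width -- an assumption, not an Intel figure.

channels = 6
transfers_per_sec = 3200 * 10**6   # DDR4-3200 = 3,200 megatransfers/second
bytes_per_transfer = 8             # 64-bit channel width

peak_gb_per_sec = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(f"Theoretical peak memory bandwidth: {peak_gb_per_sec:.1f} GB/s per socket")
# -> Theoretical peak memory bandwidth: 153.6 GB/s per socket
```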

The headline feature for Cooper Lake, however, is an expansion of the processors' Deep Learning Boost capabilities that were introduced with second-generation Xeon Scalable. The new processors support an additional instruction set for built-in AI acceleration called bfloat16, which Intel said can boost training performance 1.93 times and inference performance 1.9 times compared to previous-generation processors performing single-precision floating point math, also known as FP32.
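
For readers unfamiliar with the format, bfloat16 is essentially an FP32 number with the lower 16 bits dropped, which preserves FP32's dynamic range while halving the storage per value. The Python sketch below illustrates that relationship by truncating FP32 values (hardware typically rounds rather than truncates); it illustrates the number format only, not the Deep Learning Boost instructions themselves.

```python
import struct

def fp32_to_bf16_bits(x):
    """Truncate an FP32 value to bfloat16 by keeping only its upper 16 bits
    (1 sign bit, 8 exponent bits, 7 mantissa bits)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 16

def bf16_bits_to_fp32(bits16):
    """Expand bfloat16 bits back to FP32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack(">f", struct.pack(">I", bits16 << 16))
    return x

for value in (3.141592653589793, 65504.0, 1e30):
    roundtrip = bf16_bits_to_fp32(fp32_to_bf16_bits(value))
    print(f"{value:>12g} -> {roundtrip:>12g}")

# bfloat16 keeps FP32's full exponent range (1e30 survives) but only about
# three decimal digits of precision, which is why deep-learning math can
# often use it with little accuracy loss at roughly half the data size.
```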

"It allows [customers] to achieve improved hardware efficiency for both training and for inference without needing a tremendous amount of software work as well, which can often be the barrier to unlocking more AI performance," said Lisa Spelman, vice president and general manager of Intel's Xeon and Memory Group, in a briefing with reporters.

Spelman said the chipmaker's ongoing AI integration work for Xeon Scalable is allowing the company to "meet more and more of the market needs from cloud to edge."

"You'll see bfloat6 being enabled on Xeon for our leading [independent software vendors], our cloud service provider customers like Alibaba Cloud and Tencent, and the over 500 companies that members of our AI Builders community [who] will be adopting as well," she said.

Daninger, the Nor-Tech executive, said specialized AI features like Deep Learning Boost are one way the chipmaker has kept the heat on its CPU rival AMD.

"Intel's really good about blending in some specialty instructions for AI that AMD may not have, so there could be some things that will make it very attractive there," he said.

Another noteworthy technology is Intel Speed Select Technology, which is available on three of the new Xeon Gold processors and allows users to maximize performance for high-priority workloads by giving them control over the base and turbo frequencies of specific cores.
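
Speed Select itself is configured through firmware or Linux tooling such as the intel-speed-select utility, but the general idea can be sketched from user space: read the per-core base frequency that the intel_pstate driver exposes in sysfs and pin a high-priority process to the cores of interest. The Python sketch below does only that and is illustrative, not a Speed Select configuration tool; the choice of cores 0-3 is arbitrary.

```python
# Illustrative only: list per-core base/current frequencies that the
# intel_pstate driver exposes in sysfs, then pin this process to a set of
# "high-priority" cores (core IDs 0-3 here are arbitrary). Actual Speed
# Select configuration happens in firmware or via Linux tooling, not here.
import os
from pathlib import Path

def read_khz(path):
    try:
        return int(path.read_text().strip())
    except (OSError, ValueError):
        return None

for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    cpufreq = cpu_dir / "cpufreq"
    base = read_khz(cpufreq / "base_frequency")
    cur = read_khz(cpufreq / "scaling_cur_freq")
    if base is not None:
        print(f"{cpu_dir.name}: base {base // 1000} MHz, current {(cur or 0) // 1000} MHz")

os.sched_setaffinity(0, {0, 1, 2, 3})   # pin this process to cores 0-3
```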

Like previous generations of Xeon Scalable, Cooper Lake comes with three large-memory-capacity processors that support up to 4.5 TB of DRAM per socket, while the eight remaining processors support a maximum of 1.12 TB of DRAM per socket.

Daninger said while his company, Nor-Tech, hasn't seen a lot of demand for four- and eight-socket processors, large memory capacity is one major reason customers would ask for them.

"Massive amounts of memory," he said.

Systems running Cooper Lake processors can also receive a memory boost from Intel's new Optane persistent memory 200 series, the second generation of the company's memory tier that combines the persistent qualities of storage with performance that nearly rivals DRAM. The chipmaker said the modules, which are available in 128 GB, 256 GB and 512 GB capacities, can provide more than 225 times faster access to data versus a mainstream NAND SSD made by the company.
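
In what Intel calls App Direct mode, persistent memory is typically exposed to applications as a direct-access (DAX) filesystem that can be memory-mapped, so software reads and writes it like RAM while the data survives power loss. The Python sketch below assumes a hypothetical DAX mount at /mnt/pmem; production code would normally use a library such as PMDK for proper flushing and crash consistency.

```python
# Minimal sketch: treat a file on a DAX-mounted persistent memory filesystem
# as ordinary memory. /mnt/pmem is a hypothetical mount point; real-world
# code would typically use PMDK for flushing and crash consistency.
import mmap
import os

PMEM_FILE = "/mnt/pmem/example.dat"   # hypothetical DAX-backed file
SIZE = 4096

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as buf:
    buf[0:13] = b"hello, optane"      # store 13 bytes as if writing to RAM
    buf.flush()                       # push the mapped range to the medium
os.close(fd)
```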

The new Platinum processors consist of the Xeon Platinum 8380HL (2.9 GHz base, 4.3 GHz turbo, 28 cores, 56 threads, 250W thermal design power, $13,012 recommended customer pricing), Xeon Platinum 8380H (2.9 GHz base, 4.3 GHz turbo, 28 cores, 56 threads, 250W TDP, $10,009 RCP), Xeon Platinum 8376HL (2.6 GHz base, 4.3 GHz turbo, 28 cores, 56 threads, 205W TDP, $11,722 RCP), Xeon Platinum 8376H (2.6 GHz base, 4.3 GHz turbo, 28 cores, 56 threads, 205W TDP, $8,719 RCP), Xeon Platinum 8354H (3.1 GHz base, 4.3 GHz turbo, 18 cores, 36 threads, 205W TDP, $3,500 RCP) and Xeon Platinum 8353H (2.5 GHz base, 3.8 GHz turbo, 18 cores, 36 threads, 150W TDP, $3,004 RCP).

The new Gold processors consist of Xeon Gold 6348H (2.3 GHz base, 4.2 GHz turbo, 24 cores, 48 threads, 165W TDP, $2,700 RCP), Xeon Gold 6328HL (2.8 GHz base, 4.3 GHz turbo, 16 cores, 32 threads, 165W TDP, $4,779 RCP), Xeon Gold 6328H (2.8 GHz base, 4.3 GHz turbo, 16 cores, 32 threads, 165W TDP, $1,776 RCP), Xeon Gold 5320H (2.4 GHz base, 4.2 GHz turbo, 20 cores, 40 threads, 150W TDP, $1,555 RCP) and Xeon Gold 5318H (2.5 GHz base, 3.8 GHz turbo, 18 cores, 36 threads, 150W TDP, $1,273 RCP).
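
As a rough way to compare the list pricing above, the sketch below works out dollars per core at recommended customer pricing for a few of the SKUs. It uses only the figures quoted above; street pricing, platform costs and per-SKU features all complicate real comparisons.

```python
# Rough dollars-per-core comparison at recommended customer pricing, using
# the core counts and prices quoted above.
skus = {
    "Xeon Platinum 8380H": (28, 10_009),
    "Xeon Platinum 8354H": (18, 3_500),
    "Xeon Gold 6348H": (24, 2_700),
    "Xeon Gold 5318H": (18, 1_273),
}

for name, (cores, rcp) in skus.items():
    print(f"{name}: ${rcp / cores:,.0f} per core")
```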

Ice Lake Expands Third-Gen Xeon Scalable Later This Year

While Intel is starting third-generation Xeon Scalable with only four- and eight-socket processors, the lineup will receive one- and two-socket models later this year thanks to the company's long-anticipated 10-nanometer Ice Lake processors.

During Intel's briefing with reporters earlier this week, Spelman, the head of Intel's Xeon and Memory Group, reiterated the chipmaker's plans to release the Ice Lake server processors later this year as part of the third-generation Xeon Scalable processor family.

Cooper Lake was originally supposed to include processors for one- and two-socket servers, but as CRN reported earlier this year, the company ditched those plans in response to discussions with customers as well as the "continued success" of Intel's second-generation Xeon Scalable processors, which were recently expanded with new Cascade Lake Refresh parts, and demand for Ice Lake.

"We actually at first had looked at and talked to our customers a lot about doing Cooper Lake top to bottom," Spelman said. "But then as we moved further along towards production and talked through with them on the workloads, the use cases and the timing for ice Lake, we just felt that it was more congestion in the roadmap, and we felt like the work that we did with Cascade Lake Refresh solved and helped meet a bunch of those market needs that we would have been trying to address."

Because Cooper Lake's one- and two-socket support was tied to Intel's Whitley server platform, which the company now plans to use only for Ice Lake, the chipmaker said there will be no platform compatibility between Cooper Lake and Ice Lake, contrary to what the company had previously promised.

The decision not to release one- and two-socket Cooper Lake processors also means that a previously disclosed 56-core variant is no longer coming to market. Intel had touted the 56-core Cooper Lake processor as a way to bring a higher-core-count processor to a wider audience, since the only 56-core processors Intel currently offers are available in pre-configured systems.

As for what kind of speeds and feeds Intel will offer with Ice Lake, many details remain under wraps.

However, Spelman disclosed that the company plans to launch its next-generation Xeon Scalable processors, code-named Sapphire Rapids, next year for servers ranging from one to eight sockets. She said the company has made good progress even with many architects and engineers working from home, reaching the first "power on" of the new processors, a significant milestone.

Spelman said Sapphire Rapids will further expand the AI capabilities of Xeon Scalable processors with a new instruction set called Intel Advanced Matrix Extensions, or AMX for short.

"It will further increase the training and the inference performance as well and continue to improve that total cost of ownership for our customers," she said.