Nvidia CEO Jensen Huang’s Top 5 Remarks From GTC 2022

Nvidia CEO Jensen Huang touts the company’s RTX 40-series GPUs, the Omniverse and other innovations at the online GPU Technology Conference.

Accompanied by a car racing video depicting all-terrain vehicles speeding and careening amid smoke, shadows, fire and dust, Nvidia CEO Jensen Huang touted the company’s RTX 40-series GPUs, the Omniverse, and state-of-the-art graphics at the online GPU Technology Conference (GTC) Tuesday.

The racing demo, which Huang called Racer X, offered a preview of the fully simulated worlds that future games will run on a single GPU.

“Future games will not have prebaked worlds. … Racer X is running on one single GPU,” he said.

Powered by Nvidia’s next-generation Ada Lovelace GPU architecture, the RTX 4090 will arrive on Oct. 12 at $1,599, while the RTX 4080, priced at $1,199, does not yet have a release date.

“Ada provides a quantum leap for gamers and paves the way for creators of fully simulated worlds,” Huang said. “With up to 4X the performance of the previous generation, Ada is setting a new standard for the industry.”

To make it happen, Huang said Nvidia engineers worked with contract chip manufacturer Taiwan Semiconductor Manufacturing Co. [TSMC] to create the 4N process optimized for GPUs.

“This process lets us integrate 76 billion transistors and over 18,000 CUDA cores, 70 percent more than the Ampere generation,” he said.

Here are five highlights from Huang’s keynote.

The Race Is On

Racer X is a fully interactive simulation built with Nvidia Omniverse. Racer X is physically simulated and the lighting, reflections and refractions are ray-traced. Nothing is prerendered and baked in. The parts and joints of the cars are individually modeled. Their physical properties affect driving dynamics. Things in the environment are not static props, but rigid-body, cloth and fluid simulations. Smoke, fire and dust are volumetric simulations. Racer X is a simulation. Future games will not have prebaked worlds. Future games will be simulations. Racer X is running on one single GPU.

Let me tell you how we did it. We introduced the programmable shading GPU nearly a quarter of a century ago. GPUs revolutionized 3-D graphics and created a medium with an infinite palette for artists. At SIGGRAPH 2018 [ACM SIGGRAPH is an international community of researchers, artists, developers, filmmakers, scientists and business professionals with a shared interest in computer graphics and interactive techniques], we launched Nvidia RTX, a new GPU architecture that extended programmable shaders with two new processors. RT cores accelerate real-time ray tracing, and tensor cores process matrix operations central to deep learning. RTX opened a new frontier for computer scientists, and a flood of new algorithms has appeared. A new era of RTX neural rendering has started today.
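To make Huang’s “nothing is prerendered and baked in” point concrete: in a physically simulated world, each object’s state is recomputed every frame from properties such as its mass and the forces acting on it, rather than read back from a baked animation. The Python sketch below is a rough, hypothetical illustration of that per-frame update using semi-implicit Euler integration; it is not Omniverse or PhysX code, and the car’s numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class RigidBody:
    mass: float      # kg -- a physical property that affects driving dynamics
    position: list   # [x, y, z] in meters
    velocity: list   # [vx, vy, vz] in m/s

def step(body: RigidBody, force: list, dt: float) -> None:
    """Advance one simulation frame with semi-implicit Euler integration."""
    for i in range(3):
        accel = force[i] / body.mass                # a = F / m
        body.velocity[i] += accel * dt              # update velocity first
        body.position[i] += body.velocity[i] * dt   # then position

# Example: a 1,200 kg car under a 6,000 N forward push, 60 simulation steps per second
car = RigidBody(mass=1200.0, position=[0.0, 0.0, 0.0], velocity=[0.0, 0.0, 0.0])
for _ in range(60):
    step(car, force=[6000.0, 0.0, 0.0], dt=1.0 / 60.0)
print(car.position)  # the state is computed every frame, not baked into the scene
```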

GPU Named For Ada Lovelace

We’re announcing Ada Lovelace, our third-generation RTX ... named after mathematician Ada Lovelace, who is often regarded as the world’s first computer programmer. Nvidia engineers worked closely with TSMC to create the 4N process optimized for GPUs. This process lets us integrate 76 billion transistors and over 18,000 CUDA cores.

First, a new streaming multiprocessor with 90 teraflops, over two times the previous generation. Ada’s SM includes a major new technology called shader execution reordering, which reschedules work on the fly, giving a two to three times speedup for ray tracing. SER is as big an innovation as out-of-order execution was for CPUs. Second, a new RT core with twice the ray-triangle intersection throughput, and two important new hardware units.
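Shader execution reordering tackles divergence: neighboring rays can each demand a different material shader, which leaves much of the GPU idle. The toy Python sketch below illustrates only the scheduling idea, grouping hits by the shader they need so similar work runs back to back; it is not how the hardware scheduler actually works, and the materials are made up.

```python
from collections import defaultdict

# Each hit record names the material shader it needs; interleaved order is slow to shade.
hits = [{"ray": 0, "material": "glass"},
        {"ray": 1, "material": "metal"},
        {"ray": 2, "material": "glass"},
        {"ray": 3, "material": "dust"},
        {"ray": 4, "material": "metal"}]

def reorder_by_material(hits):
    """Group hits so rays that need the same shader execute back to back."""
    buckets = defaultdict(list)
    for hit in hits:
        buckets[hit["material"]].append(hit)
    ordered = []
    for group in buckets.values():
        ordered.extend(group)   # coherent batches instead of interleaved work
    return ordered

for hit in reorder_by_material(hits):
    print(hit["ray"], hit["material"])
```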

Now Comes DLSS

Each frame of a CGI [computer-generated imagery] movie takes hours to render. We want to do this in real time. Nvidia RTX opens the world to real-time ray tracing [the technique of modeling light transport that underpins many algorithms for generating digital images]. RT cores do BVH traversal [traversal of the bounding volume hierarchy, the acceleration structure used to find which triangles a ray might hit] and ray-triangle intersection testing, which saves the SM [streaming multiprocessor] from spending thousands of instructions on each ray. But even with RT cores, frame rates were too low for games. We needed another breakthrough.
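The ray-triangle test that RT cores run in dedicated hardware is a small, well-known calculation; doing it in shader code for every candidate triangle is what used to cost the SM thousands of instructions per ray. For reference, here is a minimal software version of the widely used Möller-Trumbore test in Python. It is illustrative only; the hardware does not literally run this code.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore test: return the hit distance t, or None if the ray misses."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    v0, v1, v2 = np.asarray(v0, float), np.asarray(v1, float), np.asarray(v2, float)
    edge1, edge2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:                  # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det
    return t if t > eps else None       # distance along the ray to the hit point

# A ray shot straight down the z-axis at a unit triangle lying in the z=0 plane
print(ray_triangle_intersect([0.2, 0.2, 1.0], [0.0, 0.0, -1.0],
                             [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))
# -> 1.0, the ray hits the triangle one unit away
```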

Enter deep learning: DLSS. [Deep Learning Super Sampling is a family of real-time deep learning image enhancement and upscaling technologies developed by Nvidia that are exclusive to its RTX line of graphics cards and available in many video games.] DLSS is one of our greatest achievements. Ray tracing requires insane amounts of computation. DLSS uses a convolutional autoencoder AI model and takes the low-resolution current frame and the high-resolution previous frame to predict on a pixel-by-pixel basis. … The process is repeated tens of thousands of times until the network can predict a high-quality image.
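Nvidia has not published the DLSS network itself, so the sketch below is only a toy with the same inputs and outputs Huang describes: a small convolutional encoder-decoder (assuming PyTorch) that takes a low-resolution current frame plus a high-resolution previous frame and predicts a high-resolution frame. Every layer size here is invented for illustration, and the frame sizes are kept small so the example runs quickly.

```python
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    """Toy convolutional encoder-decoder in the spirit Huang describes:
    low-res current frame + high-res previous frame -> predicted high-res frame.
    Layer sizes are invented; this is not Nvidia's DLSS network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, low_res_frame, prev_high_res_frame):
        # Upsample the current frame, then stack it with the previous high-res frame
        upsampled = nn.functional.interpolate(
            low_res_frame, size=prev_high_res_frame.shape[-2:],
            mode="bilinear", align_corners=False)
        x = torch.cat([upsampled, prev_high_res_frame], dim=1)  # 6 input channels
        return self.decoder(self.encoder(x))                    # predicted frame

# Small stand-in resolutions (real DLSS works at game resolutions)
low = torch.rand(1, 3, 270, 480)    # low-resolution current frame
prev = torch.rand(1, 3, 540, 960)   # high-resolution previous frame
print(ToyUpscaler()(low, prev).shape)  # torch.Size([1, 3, 540, 960])
```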

Pushing State-Of-The-Art Graphics Into the Future

In a modern game like ‘Cyberpunk,’ we run over 600 ray tracing calculations for every pixel just to determine the lighting. That’s a 16-times increase from the time we first introduced real-time ray tracing four years ago. Yet the number of transistors we have available in a GPU to do these calculations has not increased at that rate.

That’s the power of RTX. We can deliver a 16-times increase in four years with artificial intelligence. Some pixels are calculated, most are predicted.
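For a sense of scale, the back-of-the-envelope arithmetic below (a hypothetical 4K frame at 60 frames per second, using the 600-calculations-per-pixel figure from the keynote) shows why brute-forcing every pixel is impractical and why most pixels end up predicted rather than calculated.

```python
# Rough arithmetic behind Huang's figure, using a 4K frame as an example
pixels_per_frame = 3840 * 2160    # ~8.3 million pixels at 4K
calcs_per_pixel = 600             # ray tracing calculations cited in the keynote
frames_per_second = 60

calcs_per_frame = pixels_per_frame * calcs_per_pixel
calcs_per_second = calcs_per_frame * frames_per_second
print(f"{calcs_per_frame:,} ray tracing calculations per frame")    # ~5.0 billion
print(f"{calcs_per_second:,} ray tracing calculations per second")  # ~298.6 billion
```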

Nvidia Omniverse Cloud

The Omniverse cloud connects 3-D artists, designers and metaverse viewers on every edge of the planet, and makes it possible to build and operate advanced 3-D internet applications on any device.

Today, we’re announcing Nvidia Omniverse Cloud, an infrastructure-as-a-service that connects Omniverse applications running in the cloud, on-prem or on a device. In addition, Replicator [Omniverse Replicator is an engine for generating physically accurate synthetic 3-D data] and Farm will also be available in the cloud. Farm is a scaling engine for render farms. Omniverse Cloud Replicator and Farm containers are available on AWS today. We’re also offering them as managed services. Sign up today for early access.