Amazon's Werner Vogels On Reinventing Virtualization

At the re:Invent conference, Amazon's CTO offered a glimpse under the hood of the world's largest public cloud, detailing how custom hardware was deployed to create instances that perform almost like bare-metal systems.

Amazon CTO Werner Vogels used his always-anticipated re:Invent keynote to detail the hardware and software innovations making the world's largest cloud more performant and secure.

Those efforts started years ago with reinventing virtualization—the technology that underpins modern cloud computing, Vogels told attendees of his Thursday keynote at the massive AWS conference in Las Vegas.

AWS has been building custom hardware and advancing novel software architectures to reduce latency, minimize resource overhead, and better isolate compute instances, he said.

[Related: AWS Unveils Retail, Public Safety and Disaster Response Competencies]

Virtualization "has sort of been the bread and butter of the compute part of any cloud environment since day one," Vogels said. And over time, AWS has "pushed the boundaries" of that technology.

The problem with traditional virtualization is that all guest operating systems compete for the same physical resources, which often results in "noisy neighbor" effects and sometimes unreliable compute environments.

"We started thinking how we can radically change this," Vogels said, as the "old-style virtualization" was really hampering the performance of applications built with modern software architectures.

AWS wanted to deliver the performance of bare metal to its cloud customers while minimizing the resource drain of traditional cloud-building software.

The solution came from thinking about innovations Amazon achieved in developing software, then applying them to building new hardware.

"What if we take lessons from microservices, small building blocks, and apply those to the hardware world as well," Vogels told attendees. "Maybe we can change the world of virtualization."

That strategy led Amazon to form a partnership on the other side of the world.

The cloud leader enlisted Annapurna Labs in Israel to start working on Nitro, embedding networking into a separate card that would power a new class of C3 instances launched in 2013.

Annapurna's next assignment was offloading processing onto the Nitro card as well, leading to the C4 architecture.

The collaboration was such a success, Vogels said, that "Annapurna joined AWS."

After the 2015 acquisition of the Israeli chip designer, "we started working on C5, with the new goal of offloading I/O onto separate cards." AWS also set its sights on removing pieces of its hypervisor and placing them on Nitro as well, evolving that hardware component into a comprehensive system.

With many major components of the virtualization system now running on silicon, AWS infrastructure software designers were able to make EC2 instances leaner, more reliable and more secure by stripping the hypervisor to run just the bare minimum of required functionality.

That resulted in the next leap in functionality—a new generation of EC2 instances that achieved AWS' goal of performing "almost like bare metal," Vogels said.

The "hypervisor is so thin it barely affects guest OS," Vogels told attendees.

Offloading functionality onto hardware not only ramped up performance, but also substantially upgraded security: limiting communication between components made it easier to block undesired processes and stymie bad actors.

"Nitro became a base for innovation," Vogels said, allowing AWS to "do lots of things we never could do before."

Delivering live software updates, from patches to new hypervisors, and creating new systems like the Outposts on-premises AWS service, are all possible because the Nitro platform supports the latest and greatest AWS compute environments, he said.

And Nitro allowed AWS to continue innovating up the stack, not only supporting virtual machines but also containers and serverless services.

That effort led to Firecracker, a "micro virtual machine" that is being used to improve the efficiency of the AWS Fargate service that powers container workloads.

The cutting-edge approach allows Fargate to better isolate customers by enabling each containerized copy of an application to run "under the hood of virtualization," said Clare Liguori, an AWS principal software engineer.

AWS initially used EC2 instances to isolate space for containerized workloads running in a serverless model on Fargate, but those instances were often "too heavy," Liguori told attendees of the Thursday keynote.

Firecracker's micro VMs proved faster and more lightweight, yielding a highly efficient container platform.

"As we run more of Fargate on Firecracker, higher density means better efficiency," Liguori said.
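Firecracker is open source, and each micro VM it launches is described by a small JSON configuration naming a guest kernel, a root filesystem and the machine's size. The fragment below is an illustrative sketch based on Firecracker's published configuration format; the kernel and rootfs paths are placeholders.

```json
{
  "boot-source": {
    "kernel_image_path": "vmlinux",
    "boot_args": "console=ttyS0 reboot=k panic=1"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "rootfs.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 1,
    "mem_size_mib": 128
  }
}
```

A definition this small is part of what lets a micro VM boot in a fraction of a second, which is how a service like Fargate can pack many isolated workloads onto one host.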

Now AWS is taking that work a step further by enlisting the open-source community to drive progress, she said.

Amazon is currently developing a new data plane to run directly on Nitro with the core code open to developers on GitHub. The project leverages containerd, an open-source container runtime that implements Open Container Initiative (OCI) standards.

Vogels noted that the AWS Lambda serverless system also runs on Firecracker, which positions the platform for further innovation as the serverless approach gains traction in an unexpected market.

When AWS first launched Lambda, it expected serverless computing to appeal mainly to young, budget-conscious companies. That assessment proved incorrect.

"Rapid adoption of serverless is happening in the enterprise," Vogels said. "Enterprises are adopting serverless at tremendous speed."