Nvidia Launches AI Enterprise Suite In Exclusive Pact With VMware

Manuvir Das, Nvidia’s head of enterprise computing, says Nvidia AI Enterprise, which is being sold through channel partners, has been modeled after VMware’s vSphere and believes the software suite represents ‘billions of dollars of opportunity over time’ because of the expectation that most enterprises will adopt AI applications. ‘We almost want this to be as though Nvidia AI Enterprise is a new software stack created by some team at VMware,’ he tells CRN.

Nvidia believes it can build a “significant” business with the new Nvidia AI Enterprise software suite it’s launching on VMware’s vSphere virtualization platform as part of an exclusive agreement.

Made generally available on Tuesday, Nvidia AI Enterprise aims to “democratize” AI for the enterprise with a set of AI tools and frameworks, and it allows customers to virtualize GPUs for AI and data analytics workloads in existing data center infrastructure with VMware vSphere 7 Update 2. The software suite is integrated with software from MLOps vendor Domino Data Lab to help customers handle machine learning workflows, from data acquisition to application deployment.

[Related: Nvidia CEO: ‘The Next Major Wave Of AI’ Is In The Enterprise]

Nvidia has certified the software to run on mainstream GPU servers from Lenovo, Hewlett Packard Enterprise and other vendors in its Nvidia-Certified Systems program, which now includes its first hyperconverged platform qualified for Nvidia AI Enterprise: Dell EMC VxRail.

“The idea here is the same servers that have been racked and stacked into private clouds in enterprise data centers today can now be utilized for AI with a small amount of GPU added to the server: affordable, accessible, incremental cost,” said Manuvir Das, head of enterprise computing at Nvidia, in a pre-briefing. “So when you use the server with [virtual machines] that are not doing accelerated workloads, there’s no regrets. It’s pretty much the same server at incremental costs. And when you deploy VMs with accelerated computing, you get massive benefit.”

The Santa Clara, Calif.-based company previously told CRN that Nvidia has an exclusivity agreement with VMware for Nvidia AI Enterprise, meaning the software suite will not be available on any other virtualization platform for an undisclosed period of time.

Das said the chipmaker believes the new software suite “is a significant business for Nvidia going forward.” At an investor conference in June, he said Nvidia AI Enterprise could represent “billions of dollars of opportunity over time” because of the software suite’s licensing and support costs, which the company has modeled after VMware’s pricing model, combined with Nvidia’s belief that AI applications will become pervasive among enterprises.

“One of the reasons why we are so confident that the democratization of AI is to come is that to this point, companies that have adopted AI have done so because if you think about the core function of their business, they required AI to make the core function of their business better,” said Das, a former Dell EMC executive who also played a crucial role in the development of Microsoft Azure.

Subscription licenses for Nvidia AI Enterprise start at $2,000 per CPU socket per year and include standard business support five days a week. The software is also available with a perpetual license for $3,595 per socket, though customers must pay extra for support. For subscription and perpetual license customers that require around-the-clock support, Nvidia charges an additional fee for a 24/7 option.

“If you’re a customer that has a model already for how you procure licenses for VMware vSphere or how you upgrade, Nvidia AI Enterprise works exactly the same way and so from a business process point of view, it’s also straightforward,” Das said.

Das said Nvidia partnered with VMware to develop Nvidia AI Enterprise because most enterprises run their line-of-business applications on vSphere as the “de facto operating system of the data center.”

In addition to mirroring VMware’s pricing for vSphere, Nvidia has “mimicked” other aspects of the virtualization company’s strategy for Nvidia AI Enterprise, including incentives such as margins for channel partners, Das said in an interview with CRN earlier this year.

“We almost want this to be as though Nvidia AI Enterprise is a new software stack created by some team at VMware,” Das said in May. “So everything — the sales motion, VMware’s team, all their channel partners, the incentives they provide to their channel in terms of margin and discounting and [market development funds] and all those things — we’ve just mimicked.”

Selling Nvidia AI Enterprise through channel partners is an important part of the chipmaker’s strategy, and several partners across the world are selling the software from day one, including Carahsoft, Computacenter, Insight, NTT and SoftServe. The company is also selling through OEM and ODM partners, including Atos, Dell EMC, Gigabyte, Hewlett Packard Enterprise, Inspur and Lenovo.

Juan Orlandini, chief architect at Tempe, Ariz.-based Insight, told CRN that Nvidia AI Enterprise could become a “significant” business for the solution provider, which is ranked No. 14 in CRN’s 2021 Solution Provider 500 list. However, he said, the jury is still out because “it’s hard to model an emerging market.”

“I think it’s going to be good enough that our bet in training, learning, installing it, evaluating it and all that will be well worth it,” he said.

Orlandini thinks Nvidia AI Enterprise will be a good fit for organizations getting started in AI because it is cost-effective and satisfies the needs of both data scientists and IT administrators. That, he said, prevents “shadow IT” situations in which data scientists buy their own workstations because IT administrators won’t let them run workloads in the cloud.

“Now with Nvidia AI Enterprise, you can take some of the IT costs, controls, governance, scale, all those other things, and still provide that agility to that person that has this crazy idea that they’re trying to develop,” Orlandini said.

Customers may eventually choose to move AI workloads from mainstream servers running Nvidia AI Enterprise to Nvidia’s DGX systems or to GPU-powered cloud instances if they require more powerful configurations, particularly when it comes to larger training workloads, according to Orlandini.

But Orlandini said he sees long-term opportunities in Nvidia AI Enterprise for customers who need to continuously test new AI models and for inference workloads since they typically require less GPU horsepower than training.

“I think for many organizations that are really starting to grow their AI, data science practices, this is a very cost-effective way to do it. It gives you the ability to leverage a GPU that might be used for one functionality during the day and used for something significantly different during another part of the day,” Orlandini said. “It will also give you the ability to segment some of your work, so that you can spread the workload across multiple organizations that might be chipping in and pooling in for these things, so it gives you a lot of flexibility.”

Orlandini said Nvidia is “very channel-focused,” which is made evident through the margins partners can earn on Nvidia AI Enterprise sales if they are trained and register deals through the chipmaker’s partner program, Nvidia Partner Network.

“They do want their partners and their partner ecosystem to thrive, and to do that, they understand that they have to allow us the opportunity to make good margin,” he said. “They’ve proven that with the DGX programs that they’ve put in place, through the registration programs that they’ve put in place, even through some of the training requirements that we must go through now because what they want to do is make sure that not everybody can register a deal; you have to be a qualified partner to do these things. So they’ve done all the right things so that we can make good margin on their products and the services that we can build on top of.”