AI Explosion: How Solution Providers Are Cashing In Big

As artificial intelligence gets built into a wide range of products and market segments, savvy solution providers are rapidly scaling their practices across the stack and creating booming profit centers.

In just the past few years, artificial intelligence has gone from cool tech to big business, exploding into a multibillion-dollar market teeming with opportunity for solution providers of all stripes.

For a wide array of solution providers, including systems integrators, hardware resellers, custom solution consultancies, cloud implementers and MSPs, AI practices have emerged as booming profit centers.

“We can see the demand for AI growing faster and faster each month,” said Martin Sutton, commercial director at Peak, a software and services provider based in Manchester, England, that focuses exclusively on AI.

The company has seen its recurring revenue triple over the last year and now plans to expand into the U.S. as retailers clamor for solutions that personalize their engagements with customers. To keep up, Peak’s team has grown from 20 people to 85 people in the past 18 months, with recruitment an ongoing priority.

The AI opportunity is growing even faster than cloud itself.

Research firm IDC, in a recent study, estimated global spending on the broad set of products falling under the AI rubric would scale to nearly $36 billion this year—up 44 percent from 2018 and showing no signs of slowing down. (Spending on cloud infrastructure services, in comparison, grew 42 percent year over year in the first quarter of 2019, according to Synergy Research Group.)

The market for AI services—both IT and business process—will nearly reach parity by the end of the year with the $13 billion spent on AI-enabling hardware, according to IDC.

The market driver behind that spectacular growth is simple: AI, powered by self-learning algorithms, can be infused into almost any application, in nearly every industry, to deliver unprecedented levels of automation, business insight and efficiency.

“Sometime over the next decade, all enterprise applications will be AI applications,” said David Schubmehl, a research director at IDC who contributed to the study. “It’s definitely trending that way right now.”

The early adoption phase passed quickly. Enterprise buyers across industries now see AI as an essential technological differentiator, said Gayle Sirard, who leads Dublin, Ireland-based professional services powerhouse Accenture’s applied intelligence group for North America.

“Many of our clients are adopting it because they have to,” Sirard told CRN. “Three, four years ago, everything was proof-of-concept-led in a few siloed industries. If you look at our book of business today, it’s in every industry and every geography.”

To meet that demand, Accenture’s applied intelligence group employs more than 20,000 people, 6,000 of whom have deep artificial intelligence and data science skills.

Realizing AI’s Promise

The intelligence explosion now sending shockwaves through the channel has been building for several years.

While the underlying technologies of machine learning and deep learning have existed for decades, they were long relegated to sci-fi lore or bogged down in mathematical complexity. Solution providers, whose businesses depend on solving real-world enterprise problems, often lacked interest.

That didn’t change much in 2011, when IBM introduced much of the world to the modern notion of AI by pitting its Watson cognitive system against human contestants on the “Jeopardy!” trivia show.

But by 2015, the rapidly maturing technology demanded attention across the IT industry and the channel.

With public cloud democratizing access to greater computational power, and with new machine-learning frameworks and libraries, smarter compilers, and an emerging set of pretrained models and out-of-the-box Software-as-a-Service solutions, enterprises could finally realize the promise of AI.

“Organizations started playing around with this and experimenting over the past four, five years,” said IDC’s Schubmehl. “And in the last two years, organizations have actually seen some of their experiments really pay off. As such, it’s really begun to take off.”

Solution providers that came in on the vanguard often leveraged expertise from their data analytics practices and played to their traditional strengths, whether it was reselling hardware, building custom solutions or implementing cloud-based applications.

But even those early to the game still struggle to overcome stumbling blocks—chief among them a shortage of data scientists and customer unease.

“People are still afraid of AI,” said Mike Trojecki, who joined New York-based Logicalis, a subsidiary of South Africa’s Datatec, as vice president of IoT and analytics a year ago. “That’s why we don’t want to make this too complicated.”

Maven Wave, a Google partner based in Chicago, built a solution for the American Automobile Association that optimized the placement of tow trucks and predicted call volumes and arrival times.

“They didn’t trust it at first,” said Brian Ray, Maven Wave’s managing director for data science. “That’s true for all this AI—people don’t trust it until they’ve experienced it.”

For that reason, there’s an art to selling AI. “We don’t go say, ‘Here’s machine learning, do you want to buy some?’” Ray said.

Instead, a successful engagement requires discovering customer problems, identifying solutions to remediate them, deploying those solutions, and then teaching customers to use them as intended, which, uniquely to AI, involves continually refining the system with new data.

A solution provider can then typically point to the value in concrete dollar terms, Ray said.

“It’s easy to prove it’s working because you can compare actuals with what was predicted,” Ray said. “That makes it very easy for us to sell more of it.”
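
To make that comparison concrete, here is a minimal sketch (with made-up numbers) of scoring predictions against actuals in Python using scikit-learn; the tow-truck arrival figures are purely illustrative:

```python
# Minimal sketch: quantifying model value by comparing predictions
# against actuals, as Ray describes. All numbers are illustrative.
from sklearn.metrics import mean_absolute_error

# Hypothetical tow-truck arrival times in minutes: what the model
# predicted versus what actually happened.
predicted = [22, 35, 18, 41, 27]
actual = [25, 33, 20, 45, 26]

mae = mean_absolute_error(actual, predicted)
print(f"Mean absolute error: {mae:.1f} minutes")
# A concrete error figure like this is what lets a solution provider
# translate model performance into dollar terms for the customer.
```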

Fueled by its AI engagements, Maven Wave’s data science practice now generates more than 20 percent of the company’s total revenue and is growing by more than $5 million per quarter.

Mastering the business dynamics is critical to the success of an AI practice.

“Machine learning lives on Python, but AI lives on PowerPoint,” said Tony Ko, managing director for data and analytics at Slalom Consulting, Seattle. “If you’re not able to make the business case, you’re not going to be able to code.”

Hard Intelligence

Many solution providers have dived into AI by reselling the hardware that enables enterprises to deploy the technology on-premises.

“That’s where it starts—putting the right hardware in a data center,” said Logicalis’ Trojecki.

Those practices typically revolve around the GPU.

Graphics processors were once mostly of interest to gamers and visual effects creators. Now they are essential AI infrastructure: their ability to crunch numbers in parallel lets developers offload from CPUs the intense computation needed to execute machine-learning models.
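
The offload pattern is straightforward in code. Below is a minimal PyTorch sketch that moves a model and a batch of data onto a GPU when one is available; the layer sizes are arbitrary:

```python
# Minimal sketch of CPU-to-GPU offload in PyTorch, assuming a
# CUDA-capable GPU is present; falls back to CPU otherwise.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(1024, 10).to(device)    # move model weights to the GPU
batch = torch.randn(64, 1024).to(device)  # move a batch of inputs too

with torch.no_grad():
    output = model(batch)  # the matrix math runs in parallel on the GPU
print(output.shape, output.device)
```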

That development has benefited Nvidia maybe more than any other hardware vendor, reflected in a roughly 10-fold surge in the Santa Clara, Calif.-based GPU pioneer’s stock price over the past few years.

Logicalis recently struck an alliance with Nvidia to supply chips for the IBM, Hewlett Packard Enterprise and Cisco Systems servers it delivers to enterprises, Trojecki told CRN.

“We see them as leading the market right now,” he said. “We look at going in there with Nvidia, HPE, Cisco or Intel and bringing in that ability to put in a neural network to create these learning models.”

Logicalis is leveraging those partnerships for use cases around clinical imaging, predictive maintenance in manufacturing, and smart city applications at intersections and roadways.

“We’re seeing the start of the hockey stick,” Trojecki said. With a team that’s been together less than half a year, Logicalis’ IoT and analytics practice has generated a pipeline close to $45 million.

And that’s just scratching the surface. In another five years, Trojecki predicts IoT, AI and analytics will represent roughly 40 percent of Logicalis’ total revenue.

Solution providers specializing in high-performance computing (HPC) are also seeing their practices supercharged by the current heyday of the GPU and the AI solutions those chips optimize.

Five years ago, GPUs accounted for less than 5 percent of total HPC revenue at Nor-Tech, a Maple Grove, Minn.-based reseller and integrator of high-end systems. Now, thanks to machine learning entering the mix, it’s closer to 30 percent, said Dominic Daninger, Nor-Tech’s vice president of engineering.

Nor-Tech sells Super Micro servers powered by Nvidia GPUs to customers like a University of Wisconsin lab burying sensors under Antarctic ice to watch for neutrinos. Other more conventional customers include manufacturers using similar infrastructure to simulate product functionality.

“It’s gotten a lot cheaper to do that. We’re seeing AI just start to move into that,” Daninger said. “AI wasn’t even thought about seven, eight years ago [for those use cases].”

But hardware progress doesn’t stop at the GPU. Chips are becoming more specialized to accelerate the training and inference processes of machine-learning workloads.

The next frontier looks to be field-programmable gate arrays—devices with programmable circuitry.

The FPGA acts “like a GPU as far as offloading some of the work from the CPU,” Daninger said. “But it can be targeted at things [in] real time. You can reprogram it to do really niche types of AI work.”

Nor-Tech sees some of its university customers sampling the technology, but so far, selling FPGAs is a small practice.

That market will surely grow, however, with the widespread introduction of autonomous vehicles and image and object recognition in retail environments, he said.

The next generation of AI technology will likely apply GPUs mostly to batch processing, while FPGAs handle real-time data streams, he said.

There’s no “killer app” for FPGAs yet, Daninger said, although autonomous vehicles might be the first.

Intel has invested heavily in that area. The Santa Clara-based chipmaker acquired FPGA standout Altera in 2015 for $16.7 billion, then its Programmable Solutions Group set about building a software stack to ease deployment of the technology.

Intel, along with competitors Nvidia, Qualcomm and IBM, is taking specialized circuitry a step further by bringing purpose-built AI chips to market.

Their fiercest competition on that front might not come from each other, however, but from the hyperscale clouds.

Mountain View, Calif.-based Google kicked off that race by introducing the Tensor Processing Unit, a processor custom-designed to run TensorFlow, its homegrown machine-learning framework. The third-largest cloud provider later added Edge TPU chips to power workloads that run in the field.

Not to be outdone, AWS, Seattle, plans to bring to market Inferentia, a chip engineered specifically to make predictions from pretrained models—the “inference” part of machine learning. Microsoft, Redmond, Wash., also appears to be hiring engineers to build a custom AI chip for its Azure Cloud division.

Libraries, Frameworks, Platforms, APIs

Machine learning leapt forward at the start of the 2000s, when a software library called Torch provided deep learning algorithms that developers could embed into their applications and train with their own data.

The modern version, PyTorch, is one of many such frameworks, alongside Caffe, Apache MXNet, Keras and TensorFlow, the last originally developed at Google Brain.

Those frameworks speed AI development. But to use them effectively, it still helps to be a data scientist, or have a few sitting around.
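
For a sense of what working at the framework level entails, here is a minimal sketch of defining and training a small neural network in TensorFlow’s Keras API; the data and architecture are synthetic stand-ins:

```python
# Minimal sketch: defining and training a small neural network in
# TensorFlow/Keras on synthetic data. Shapes and labels are illustrative.
import numpy as np
import tensorflow as tf

# Synthetic stand-in for "your own data": 1,000 samples, 20 features.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")  # a toy binary label

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```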

Slalom Consulting had that advantage, having transitioned a data science practice into a full-fledged AI shop building custom solutions using MXNet and TensorFlow.

Once clients saw the success of early experiments, the business caught fire. In the past two years, Slalom notched a whopping 700 percent growth in its analytics practice, Ko said.

Pythian, an Ottawa-based consultancy known for data analytics expertise, got its AI practice off the ground in 2014.

Alex Gorbachev, Pythian’s Cloud CTO, took over the division two years ago and scaled it from a few people to a half-dozen engineers specializing in machine learning.

“We focus on building a holistic AI solution,” Gorbachev told CRN. “We analyze what [customer] needs are when they have ideas about what they want to do with ML [machine learning], but they’re not there yet, and narrow it down to a few use cases that are appropriate to work on.”

Engagements typically either start from scratch or involve taking experimental models customers have built and operationalizing them on Google Cloud Platform.

Pythian engineers initially used the Theano and Keras frameworks for training neural networks on GPUs. They now opt for TensorFlow, combined with tools from Google like BigQuery, Cloud Dataflow, and managed Kubernetes.
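
A typical pattern in that stack is to stage training data in BigQuery and pull it into Python before handing it to TensorFlow. The sketch below assumes Google Cloud credentials are configured; the project, table and column names are hypothetical:

```python
# Hedged sketch: pulling training data out of BigQuery before feeding
# it to TensorFlow. Requires the google-cloud-bigquery package (plus
# pandas) and configured credentials; names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
query = """
    SELECT feature_a, feature_b, label
    FROM `my_project.my_dataset.training_examples`  -- hypothetical table
"""
df = client.query(query).to_dataframe()

features = df[["feature_a", "feature_b"]].values
labels = df["label"].values
# From here, features and labels can feed a tf.data pipeline or
# a Keras model.fit() call.
```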

Peak has gone as far as building its own machine-learning library for the “Peak AI System”—a cloud platform deployed on AWS that supports PyTorch, TensorFlow and other frameworks and statistical methods, and selects models for desired customer outcomes.

That system has helped retailers and e-commerce businesses realize 30 percent uplifts in revenue by engaging in personalized conversations with customers about relevant products at the time they are most likely to buy them. It also helps merchandisers predict when they should reorder specific SKUs.

“The ability to do this using AI leads to fewer errors, which results in higher sales volumes and greater profits,” Sutton said.

Slalom, Pythian and Peak all have in their wheelhouses the data science talent needed to select self-learning algorithms from preferred machine-learning frameworks, integrate them into code, cleanse data, properly train models on that data, and provision appropriate storage and compute infrastructure.

But most in the channel wouldn’t know a linear regression model from a random forest from a recurrent neural network.

To bring down the barrier to entry, many ISVs and the hyperscale clouds have introduced machine-learning platforms that abstract away the underlying frameworks and automate deployment. They offer hosted libraries, suggest models, and streamline the process of preparing and feeding data into them, as well as ongoing management.
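
AWS SageMaker, one such platform named below, illustrates the pattern: a few SDK calls stand in for provisioning, training and deployment. This is a hedged sketch; the entry-point script, IAM role and S3 paths are hypothetical placeholders:

```python
# Hedged sketch of training and deploying a model with the SageMaker
# Python SDK. The role ARN, S3 path and train.py script are
# hypothetical placeholders, not a real account's resources.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
estimator = SKLearn(
    entry_point="train.py",  # hypothetical training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    instance_type="ml.m5.large",
    framework_version="1.2-1",
    sagemaker_session=session,
)

# The platform provisions infrastructure, runs training, and hosts
# the resulting model behind an endpoint.
estimator.fit({"train": "s3://my-bucket/train/"})  # hypothetical path
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m5.large")
```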

Slalom now uses the AWS SageMaker platform to deploy machine-learning models into production, Ko said. The consultancy also leverages AI development products from Microsoft Azure, Google Cloud Platform and ISVs like Alteryx, DataRobot, Databricks and Omniscience. With that toolkit, Slalom has built systems that analyze mammograms for breast cancer, look at drone images to predict fire risk, and optimize a car rental company’s fleet usage.

Pythian typically relies on Google Cloud ML Engine to automate the process of designing, training and deploying models. That platform also offers pretrained algorithms that can be accessed through APIs like Google Cloud Vision.

“Those are very helpful,” Gorbachev said, and they’re used “more and more when the problem space is well defined and well known.”

Google, Microsoft, IBM and AWS all provide APIs for tackling common use cases like video recognition, language translation and speech-to-text—delivering AI while hiding away the inner workings of machine learning.
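
Calling one of those pretrained APIs can take just a few lines. The sketch below uses Google Cloud Vision’s label detection as an example, assuming credentials are configured; the image file is a hypothetical placeholder:

```python
# Hedged sketch: calling a pretrained vision API (Google Cloud Vision)
# without touching machine-learning internals. Requires the
# google-cloud-vision package and configured credentials; the image
# path is a hypothetical placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("tow_truck.jpg", "rb") as f:  # hypothetical local image
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # Each annotation carries a description and a confidence score.
    print(label.description, round(label.score, 2))
```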

But experienced partners warn that those APIs don’t eliminate the need for data scientists. Experts must still design a solution, decide whether a pretrained algorithm is appropriate for the use case, and ensure the model is performing appropriately.

“We’re certainly not at a point where you can put data in a bucket, push a button, and have an answer come out,” said Paul Spiegelhalter, Pythian’s lead data scientist.

So far, Pythian sees only a small portion of its engagements as solvable without customization for specific client domains.

Pythian used Google Cloud Vision to start building a solution that parses video footage of cricket matches for the critical seconds of action. The pretrained models classified scenes, delivering a proof of concept.

Then Pythian created more precise labels for classification and turned to TensorFlow to train specialized models.

“There’s a whole range of solutions that can mix and match some portion of prebuilt things and some custom things,” Spiegelhalter said. “The true solution we end up using is usually somewhere in between.”

Intelligence Out of the Box

Building custom AI solutions using code libraries or automated platforms has propelled many partner practices. But it’s no longer a mandatory capability for delivering recommendations, forecasts, insight, language interfaces, object classifications, and security automation to customers.

Most SaaS vendors have now embedded AI directly into their application suites, empowering their partners to bring to market solutions without proficiency in the intricacies of machine learning.

Oracle and SAP have enhanced their ERP suites with AI; Microsoft and Google have peppered machine learning throughout Office 365 and G Suite. An expanding number of ISVs offer stand-alone products for automating call centers, sales enablement, threat intelligence, supply chain optimization and other functions.

Even solution providers with data scientists on hand, like Washington, D.C.-based Salesforce partner Acumen Solutions, are taking advantage of the burgeoning array of off-the-shelf AI.

Acumen’s AI practice was just taking off in 2015 when Salesforce debuted Einstein and integrated a broad set of that platform’s machine-learning features across its SaaS portfolio, David Marko, Acumen’s managing director for on-demand analytics, told CRN.

Einstein made it easier to deliver “plug-and-play” types of solutions, like lead and opportunity scoring, in a way that was both invisible to the customer and integrated into the environment they used every day.

“In cases where we would have to spend weeks to build a custom model for our customer, I can build that within 20 to 30 minutes with Einstein if I have the right data being fed into it,” Marko said.

Being able to get something up and running quickly and at low cost “helps open the eyes and the mind-set of an organization,” he said. “It does accelerate the adoption and the awareness by helping customers understand what’s possible using AI.”

Einstein features in Salesforce allowed Acumen to move away from some of the open-source tools it once used to fulfill common tasks.

That’s why pre-packaged AI is useful and proliferating, Marko said. But it still doesn’t substitute for the aptitude of data science professionals.

“One set of questions from a customer leads to a deeper set of questions,” he said. “With the packaged approach, you’re going to be very limited in terms of the value you can provide to a customer.”

Most organizations still require functionality unique to their businesses. The ratio of out-of-the-box to custom AI might evolve to 80-20 as richer, more-powerful packaged solutions come to market, Marko said. Currently, however, the ratio is closer to 50-50, he added.

Acumen discovered the winning formula is to start with prebuilt tools and use them as a springboard to enrich models with customized efforts. “To go deeper and solve more complex problems, you use one to get to the other,” Marko said.

Acumen has combined Einstein with customization to help a large manufacturer predict customer churn and a talent management agency recommend applicants to clients.

“Prepackaged components are part of the overall puzzle, but not the only thing, and I don’t see it being the only thing for some time to come, if ever,” Marko said.

Intelligent SaaS means it’s no longer necessary for enterprises to engage data scientists to get started in AI.

“You can buy off-the-shelf and have a product that meets your immediate needs,” said Slalom’s Ko.

But that usually proves a short-term fix.

“Often the product’s road map won’t map to the customer’s road map, and custom tweaks will become necessary,” Ko told CRN.

Given how rapidly AI tools and technologies are advancing, however, solution providers can’t afford to ignore prepackaged solutions either, said Accenture’s Sirard.

“There’s not a single company that wants to reinvent the wheel,” she said.

As off-the-shelf AI eats into the market for custom solutions, the channel’s role won’t diminish.

Enterprises will increasingly look to professional services organizations staffed with data scientists to provide in-depth knowledge of industries, regulatory standards, and the latest-and-greatest technologies, according to Sirard.

“All of the big players have some interesting and unique capabilities,” Sirard told CRN. “It’s important understanding the differences across them and how you leverage those ecosystems to get the most value.”