10 Solution Providers On Protecting Against Gen AI Security Risks

Executives from solution and service providers shared the ways they’re helping customers to reduce the risks posed by the use of generative AI tools such as ChatGPT.

Gen AI And Security

Last week, CRN’s reporting team spoke with executives from numerous solution and service providers at XChange August 2023 about generative AI and the security risks it can pose to customers. Ten executives shared details on how they’re helping customers to safely harness the technology and reduce the risks posed by ChatGPT and other gen AI tools.

[Related: 20 Hottest New Cybersecurity Tools At Black Hat 2023]

XChange August 2023, which was hosted in Nashville, Tenn., by CRN parent The Channel Company, placed a major emphasis on cybersecurity, with numerous sessions devoted to the topic, and the impacts of generative AI were a frequently raised issue.

What follows are comments from 10 solution providers on protecting against gen AI security risks.

Atul Bhagat

President, CEO

BASE Solutions

I think it can be a security risk if it’s not managed properly. I think one of our responsibilities as MSPs is we have to get ahead of it and have those conversations early on with our clients about how to use AI safely and correctly. We’ve heard about mistakes and horror stories. But overall, I think generative AI is the future and we’re seeing in a lot of organizations they’re trying to find ways to use it to their advantage. Everyone wants a competitive advantage. And so we’d be remiss not to push it. But we do have to educate ourselves as well as our clients on how to use it effectively. We’ve already started because we had some clients engage us early on, who were really pushing the bar and said, ‘We really want to use AI in every way possible.’

Reagan Roney

Chief Experience Officer

Solvere One

It needs guardrails. I think the most important thing is to accept that it’s coming and that it actually can be used for good. Just like any other tool, it has its positives and its negatives. But as long as we can put policies in place and create the training around it that’s necessary—as well as putting the caps and limitations around the technology—I think we’ll be all right. But education is going to be the key on this. Because we’ve already seen where people put too much information in these tools, and they just kick it out to the ether not knowing that now it’s in the public domain. And that’s where we run into issues. But I think in the long run, it’s going to prove to be a good tool. And as Microsoft and everybody else integrates it, it’s going to be part of our life.

Dawn Sizer

3rd Element Consulting

Generative AI can present a security risk if you are not thinking through the information you are sharing with the AI. Building a use case policy for the AI and addressing the risk, ethics and security is a must before adopting AI and associated technologies.

Zac Paulson

Director, Product, Strategy

ABM Technology Group

My main concern is what rights exist once that content is created. Is it the customer’s? Or is it the AI engine’s? Or both?

Another worry I have is the impact it has on the hackers using it to create very well-written phishing emails. At this point, we are simply recommending caution with our clients and employees.

Michael Goldstein

LAN Infotech

I definitely think that it’s causing customer risk. I don’t think people understand [what’s happening] as they’re utilizing products like ChatGPT. The biggest piece they don’t understand is that we’re the hamster in the cage. They don’t understand that that data is being used in other sources. We’re doing a lot of education on it. We’re making sure that customers understand that you really should have a plan. You should really try to see what problem AI is looking to solve versus using AI to solve something that really isn’t a problem.

Michael Villa

Executive Project Manager

VIA Technology

It definitely presents a security risk to our customers. The short answer is, no, we’re not currently helping our customers utilize generative AI, mostly because the business problems that we’re presented with [due to] AI are more user error. That’s why we’re not really touching artificial intelligence yet.

Jason Wright

Avatar Computer Solutions

I think it does pose some security implications. I’m not even sure myself, as an MSP owner, what risks it brings into our business and what potential risks we’ve introduced into our clients. With things like ChatGPT, the better you get at the prompts, the better it does at searching out the information, and that can be dangerous in terms of a company’s IP. I think we’re still in the exploratory phase and we’re trying to figure some of that out ourselves.

Keith Nelson

Vistem Solutions

Right now we advise them not to use it because it doesn’t warn you that all the data belongs to the AI company, not them. We’re actually trying to convince our customers to let it mature a bit and let it become more of an ownership model as it’s really a beta model in every format.

Frank Huston

Essential Net Solutions

One of the things that we’ve been doing a lot internally lately is helping clients figure out what AI means to them. For us, I think we need to spend some time taking it into our business. I give presentations to our clients on what is ChatGPT because they’re all asking about it. Some of them are scared. Some of them aren’t. We’ve helped our clients. But in all fairness, we haven’t done enough in our own business. It’s just been us responding to client questions because they keep hearing about it. Some of our clients are in academia, so they are immediately opposed to it because kids are using it to get past tests, and I’m like, ‘OK, I get it.’ But with every good thing comes some bad.

Hoss Milani

Final Frontier

AI is becoming actually very effective in what it does. We recently saw on MSN a story about a young woman, 21 years old, supposedly born in Romania. From time to time, she posted on social media about herself, and pictures of herself from Romanian beaches or Greek islands. The reality is that she’s not real. She actually is an AI creation. But she has 100,000 followers who believe she’s real. And I use that as an example to people just to help them understand how effective AI has become. And the U.S. military is using AI very effectively, and we don’t know a whole lot of how it’s being used. …

The thing is, China is fairly [far] ahead of us in terms of what they’re doing with AI technology. And we need to catch up on the commercial side of it. It is becoming an effective way of managing a lot of stuff in terms of what people are doing. The forms of jobs that people are doing will change in the not-too-distant future using AI. So that’s the kind of conversation that we’re having with people who think AI is a fad and might just go away. Not everybody sees it as a major threat yet to their businesses. But I see something drastic [happening to] the power of AI.

AI can also do good. The other day I watched a report on a Korean woman who, because of COVID, closed her restaurant in Manhattan. She had to lay off a lot of people, but then she used AI to understand what people wanted and what she was actually selling. She realized that what she was actually selling was not really what everybody wanted. She wanted to get her customers back, so she used AI to understand what they wanted. She started customizing food for different people. And now her restaurant is growing through rapid changes. So I’m looking at different examples of what is going on in AI and pointing out to customers how they can use it.