Podcast: The AI Risks Your Business Should Avoid
On this episode of The Insightful Leader, why your trade secrets may not be safe, and other considerations.
Depending on the generative AI model you use, a simple prompt could be enough to jeopardize sensitive company data.
But that’s not the only harm AI can inflict on businesses that aren’t careful, says Kristian Hammond, a computer science professor at Northwestern’s McCormick School of Engineering and director of the school’s Center for Advancing Safety of Machine Intelligence (CASMI). Hammond also helped start Kellogg’s MBAi program.
On this episode of The Insightful Leader: What leaders need to know.
Podcast Transcript
Laura PAVIN: You’re listening to The Insightful Leader. I’m Laura Pavin.
I’m not sure if you’ve noticed, but we’ve been thinking a lot about artificial intelligence over here at The Insightful Leader. And what we’ve mostly heard from experts is that the best way to understand AI is to play around with it, to get your hands dirty. Master the art of the prompt and see where this thing may—or, honestly, may not—fit into your life and your workflow.
But one Northwestern professor just wants to remind everyone that there is a time and a place for playing around.
Kristian HAMMOND: Once you start entering the world of “playing around,” you end up making mistakes. Because you’re not just playing stickball, you’re playing stickball in the street. And every once in a while, a car will come by.
PAVIN: That’s Kristian Hammond. He’s a computer science professor at Northwestern’s McCormick School of Engineering and the Director of the university’s Center for Advancing Safety of Machine Intelligence. He helped start Kellogg’s MBAi program. So he’s been thinking about AI from every possible angle for years.
And he says it’s great that leaders and employees are playing around with AI tools, as long as they aren’t betting the farm on them. Even so, Hammond says it’s a good idea to get proactive about how you and your teams use this tech. Because it’s powerful. And it carries risks that you definitely should be thinking about.
I talk to Hammond about those risks and how to dodge disaster, next.
…
PAVIN: I brought Hammond into the Kellogg studio for our chat. And he’s been living, sleeping, and breathing AI for years … before all of you. He co-founded a company that used AI to write stories using data, which sounds pretty normal now. But the kicker is he did that more than a decade ago. Today, he talks about, teaches, works with, and consults about AI.
Naturally, he was the perfect person for this chat.
So I wanted to start by taking his temperature on one of the ways businesses and their employees were currently using AI.
PAVIN: Let’s say there’s a company that’s using generative AI. It’s using it to design products. It’s like a toy company. And you can imagine that being great for getting ideas and analyzing customer feedback on prototypes, but where are there risks that this kind of company might not be considering?
HAMMOND: Well, if you’re using generative AI as a brainstorming tool, that’s a great use case because there’s nothing at risk. You generate hundreds of possibilities, you bring them down to three, and then you start working on those three. And having an AI system that is making suggestions that are outside the box for you, that’s fantastic. The place where it gets dangerous is when you start depending upon the output without thinking about how to control it, how you’re going to evaluate it, how you’re going to make sure it’s right, it’s true, it’s correct. And it’s in that moment that it suddenly becomes risky.
PAVIN: The good news here is that Hammond thinks most businesses know AI’s output needs to be looked at carefully. But you can see how it would be tempting to ignore some mistakes for the sake of speed. Hammond says that can be a problem.
Like, imagine you’re a financial advisor with a wealth-management company. Part of your job involves creating reports that your clients use to inform investment decisions. You and your colleagues are told to use AI to draft your reports because it’s insightful and saves you a bunch of time. The problem is that federal interest rates, tax laws, and other metrics are changing all the time, and that report you just generated might be based on some outdated information. And now your clients are making financial decisions with that bad info. It’s a domino effect from there.
Now, if you were the leader in that situation, you’d think the natural solve for this would be to add an extra layer here—an editor of sorts who could be an extra check on that output, right? Well, technically, yeah, that’s probably going to save you some big headaches. But if you zoom out, that solve kind of creates a broader problem for you.
HAMMOND: Of course, that stops you from being able to scale. If I have to have a human read every single thing beforehand, we’re going from an afternoon to six months. And the whole point of automation is scale, but the whole issue with language models is our inability to validate them at scale.
PAVIN: I mean, I’m sure you’d need a crystal ball to know for certain, and I can imagine what the answer is, but do you think we’ll get to the point where you don’t need a human to fact-check it? Are there solutions on the horizon?
HAMMOND: At some point, we will have automated systems that can validate the outputs of things like language models at scale.
PAVIN: For now, we live without that tech. So we can’t confidently generate things like consequential financial reports and not look them over before shooting them out to people.
A better way to use AI right now, Hammond says, is for internal communications. Sending emails and making presentations. The stakes are lower.
HAMMOND: If it’s inward facing, by its nature, there’s always a human in the loop. Because it’s not as though, I mean … even when you write your own stuff, you read it before you send it out. And it’s just that the machine now can help you out with it.
PAVIN: So moving on, what are some other risks companies maybe aren’t thinking about with their employees just sort of doing what they will with AI?
HAMMOND: This notion of proprietary information. That is, you put client information in a prompt, and if you’re just using the public version of ChatGPT, for example, then they own that data. And if they use that data in a training session, it’s really, really, really hard to tease it back out of the system—but, if you know what you’re doing, and you can craft prompts where you know what you’re looking for and you know who put it in and you know the company, you can get a prompt that’s tight enough that you can get at least a little bit of it back out. Uh, the ...
PAVIN: What do you mean? Like, who … who is putting in the prompt and getting what back?
HAMMOND: Well, it’s like, imagine I’m a brand manager for Procter & Gamble, and I ask, “I’m looking at a new product and trying to decide how to market it to women in the Midwest. What would you suggest that I say?” If an actual product manager at Procter & Gamble had put information about marketing to women in the Midwest into the system, that’s enough prompt context. It squeezes the system’s thinking down so that the most likely answer is an answer it has already seen, and it will just give you that or a variant of that. The likelihood of that happening is small, but if you decide you’re going to make a concerted attack, it’s something that you can do with the toolset that we currently have.
PAVIN: What Hammond is saying here is something I don’t think everyone using these tools realizes. Your conversations are used to train the models. Mostly, this is the case with free or public versions, unless you explicitly opt out. And that’s a problem because, well, let’s say you need help writing an email to a client about supply-chain delays. To get a good response from the model, you have to give it information about your suppliers, logistical challenges, and client relationships. So you do. Well, that information is now with ChatGPT and can be used to train it. So let’s say it is used to train it. Now someone, possibly with bad intentions, can extract that information if they give ChatGPT the right prompt. They can ask it to give them hypotheticals about how a certain company, with a certain raw material, in a certain geographic area might approach its client communications. And they’d be able to get that intel.
That’s disturbing. But Hammond said there’s a solve. Get an enterprise version of AI that you pay for, because it will have a firewall. There’s a version of OpenAI’s models embedded in Microsoft Azure, for example. That kind of thing.
HAMMOND: The deal there is, we will hold on to your data for 30 days, but not use it in training, and then we’ll get rid of it.
PAVIN: To recap: AI is good for brainstorming and generating ideas. It’s good for making you sound more polished in emails. It’s good for the annoyingly time-consuming administrative internal processes that are lower stakes. It’s not great if you’re trying to scale outward-facing operations because that takes humans out of the loop, which means errors will make it through, and that will be consequential. And finally, if you’re a company with data to protect, steer clear of the public versions of AI models. Pay for an enterprise version with a firewall.
But Hammond had one more thing to say on risks that I want to land on. About deep fakes and phishing. Phishing, you might remember, is where someone pretending to be your boss or a colleague tricks you into disclosing important personal information. Traditionally, that’s been done via email. But Hammond says that’s changing.
HAMMOND: We’re now in a world in which it’s not just that you get a piece of email, it’s that you get a Zoom request and suddenly you’re talking to somebody who seems to be your boss. And that person tells you to do things that are probably within the purview of your normal day-to-day life. So, transferring money. And it could be that’s a lot of money, and it’s a great conversation. And your boss was never there. The CEO was never there. The COO was never there. The CFO was never there. But they were all there, because we can actually create avatars and deep fakes based upon videos and voices that are already online, and they’re controllable in real time. And we’ve already seen this.
PAVIN: AI is helping scammers take fake-outs to entirely new levels. And there was one he mentioned that truly knocked my socks off. It’s around the hiring process.
HAMMOND: It could very well be that the person you’re talking to on the Zoom meeting does not exist. Their resume was great. They look great. They talk great. They’re scripted, and there might be somebody really smart behind there, but, you know, that’s not the person you’re looking at. They work for you remotely. That’s great, because they’re going to be on Zoom all the time. That’s fantastic. So you hire them, you give them all the credentials you want. That’s a real risk.
PAVIN: Are there any telltale signs that you’re dealing with that kind of applicant, or is there anything a company should be aware of?
HAMMOND: The biggie is, if it’s too good to be true, maybe it’s not true. It used to be, you know, you’d get a piece of email saying, “I’m a Nigerian prince and I need twenty thousand to free up one hundred million.” And people looked at that and thought, “That’s unbelievable.” And it’s like — of course it was. It was designed to be unbelievable, because they didn’t want you to respond to it. They wanted somebody who was foolish to respond to it. And now we’ve got the technology to a level where we no longer require foolish people to respond to it.
You’ve got to think about how do you verify? How do you validate? How do you ask the questions to make sure that what you are dealing with is real? And, again, be skeptical. If your boss gets hold of you on Zoom and tells you to do some paperwork and it’s gotta be in by the end of the day, that’s one thing; if your boss tells you to move a hundred thousand dollars from one bank account to another, you might have a moment where it’s like, “I don’t think we do that.”
And here’s the thing: it will not be unreasonable to do that kind of thing. We will live in a world where actually asking, “Do you really want that? Is this really you? Is this really for us?” when you’re asked to do things that look like they might be untoward is just reasonable. We have to realize that we’re developing things that are very, very smart. And very, very smart things can be used for evil ends.
PAVIN: Always is the case.
PAVIN: This is all a lot to think about if you’re a leader. We’re in totally uncharted waters. But having a clearer picture of what the risks are should help you figure out how to navigate it all a little better.
That’s our show for today. I’m Laura Pavin.
[CREDITS]
PAVIN: This episode of The Insightful Leader was produced and mixed by Andrew Meriwether. It was produced and edited by Laura Pavin, Jessica Love, Fred Schmalz, Abraham Kim, Maja Kos, and Blake Goble. Special thanks to Kristian Hammond. Want more The Insightful Leader episodes? You can find us on iTunes, Spotify, or our website: insight.kellogg.northwestern.edu. We’ll be back in a couple weeks with another episode of The Insightful Leader Podcast.