Podcast: Will Machines Ever Truly Understand Us?
Innovation Dec 2, 2016

The relationship between humans and computers is deepening. What does the future hold?

Illustration by Yevgenia Nayberg: a person and a machine communicate in an attempt to understand each other.

Based on the research and insights of

Brian Uzzi

David Ferrucci

Sandra Waxman

Listening: Human–Machine Understanding (21:58)

Today, you may have used an Excel spreadsheet or statistical software package to assist you in calculating how much home you can afford, or whether visitors to your website are more likely to shop on a “baby blue” landing page versus a “powder blue” one.

But you probably didn’t explain your reasoning to the computer, or ask it to join you in brainstorming what to do next. Will this change? What will human-machine partnerships look like in five years? What about fifty?

In this month’s Kellogg Insight podcast, we explore how machines can be used to help us make smarter, faster decisions today. And we ask some pretty heady questions about where our collaborations with computers are headed.

Podcast transcript

[music prelude]

Jessica LOVE: Earlier this year, I bought my first house. And I spent a lot of time crunching numbers: monthly mortgage payments, tax credits, commute times, and my least favorite, how much it would cost to replace a 45-year-old boiler.

When it came time to make an offer, I didn’t have to keep all these numbers in my head. I relied on Excel spreadsheets and a fancy online calculator that told me whether to buy or keep on renting.

In other words, technology helped me make a decision. And the help was great.

But maybe this is only the beginning. Ten years from now, will we be asking a computer what features we want in a property? Or what career might make us happiest? Or what our company’s strategy should be?

Welcome to the Kellogg Insight Podcast. I’m your host, Jessica Love.

Last month, we had a really fun discussion about how humans and machines collaborate on physical tasks like driving or surgery or caregiving.

This month we ask: How can machines help humans become better at reasoning through problems and making decisions? How much will they need to understand us in the process?

Stay with us.

[music interlude]

LOVE: One way machines could help us make better decisions? Let us know when to make them.

Brian UZZI: Depending on our emotional state, we can either make good or bad decisions. In fact, there’s one really classic study where they showed that unless you reach some kind of emotional state, you actually can’t make decisions.

LOVE: That’s Brian Uzzi, a Kellogg School professor and faculty director of the Kellogg Architectures of Collaboration Initiative.

He says researchers have learned that decisions and emotions are really tightly coupled.

UZZI: While that’s a very useful finding, it’s very hard to use in practice, because how can you go about measuring people’s emotions moment to moment when they’re facing different decisions?

LOVE: So, in a recent study, Uzzi tried to do just that.

He actually asked two questions: What is the best emotional state to be in to make good decisions? And can a machine pick up on our emotional state—even when we may be unaware of how we’re feeling?

In his study, Uzzi and his colleagues analyzed data from lots of stock traders: the actual trades they made, and a whole bunch of instant messages that they sent to each other while they made these trades.

Could they use a trader’s messages to understand his or her underlying emotional state?

UZZI: I could say that the market is changeable today. I could say it’s tumultuous. I could say it’s volatile. Each one of those things gets at an underlying concept of changeability in the market, but each one represents a different emotional state as I see the information.

So we said: if people are revealing their emotional states in their words, let’s create a machine-learning program that could monitor people’s language that they use throughout the day in their communications with other people.

LOVE: And these communications? There are a lot of them.

UZZI: You could think of teenagers texting; well, traders in the market text five times as much. They’re constantly texting. You have a very nice record of everything they say all day long.

LOVE: So the researchers created an algorithm that would go through the texts of each trader and assign the trader a probable emotional state—unemotional, extremely emotional, or somewhere in between. Then they tried to find links between those emotional states and the types of trading decisions the traders were making. Were they good ones that made money? Or were they bad ones?

UZZI: What we found was this. When traders are low in emotional states, they’re very cool-headed, they tend to make bad decisions. They’re too slow in taking advantage of an opportunity in the market, and they tend to hold on to bad trades too long. Exactly what you don’t want to do. We also found that when they were in a very high emotional state, they did the same thing. When they were at an intermediate level of emotion, somewhere between being cool-headed and being highly emotional, they made their best trades.

LOVE: In other words, just by analyzing the language that a trader was using, machines were able to predict more accurately than chance how good of a decision the trader made.
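
For readers who want a concrete picture of this kind of pipeline, here is a minimal sketch in Python. It is not the researchers’ actual code or model; the word lists, the scoring function, the three-state cutoffs, and the record format are all illustrative assumptions.

```python
# Illustrative sketch only, not the study's code or data. Word lists,
# cutoffs, and the record format are made-up assumptions.
from statistics import mean

# Toy stand-ins for a trained emotion model, echoing Uzzi's examples:
# "changeable" reads cooler than "tumultuous" or "volatile".
HIGH_AROUSAL = {"tumultuous", "volatile", "panic", "crazy"}
LOW_AROUSAL = {"changeable", "steady", "calm"}

def arousal_score(messages):
    """Crude proxy: net share of emotionally charged words in a trader's messages."""
    words = [w.strip(".,!?").lower() for msg in messages for w in msg.split()]
    if not words:
        return 0.0
    charged = sum(w in HIGH_AROUSAL for w in words) - sum(w in LOW_AROUSAL for w in words)
    return max(charged / len(words), 0.0)

def emotional_state(score, low_cut=0.01, high_cut=0.05):
    """Bucket a score into one of three states (cutoffs here are arbitrary)."""
    if score < low_cut:
        return "cool-headed"
    if score > high_cut:
        return "highly emotional"
    return "intermediate"

def profit_rate_by_state(records):
    """records: list of dicts with 'messages' (list of str) and 'profitable' (bool)."""
    outcomes = {}
    for rec in records:
        state = emotional_state(arousal_score(rec["messages"]))
        outcomes.setdefault(state, []).append(1.0 if rec["profitable"] else 0.0)
    return {state: mean(vals) for state, vals in outcomes.items()}
```

In the study’s framing, the comparison of interest is whether the “intermediate” state shows the highest share of profitable trades.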

And the possibilities here are really interesting.

What if, over time, traders could use a real-time analysis of their texts to pick up on patterns of language use that are associated with poor trades?

Here’s Uzzi again.

UZZI: They then could get feedback on their emotional states and learn how to manage that and incorporate that into their decision-making, so they can make better decisions in the future.

LOVE: But of course machines have a lot more to offer than insight into our own emotions. In addition to helping us decide when to make decisions, they can tell us how to prioritize decisions.

This has huge potential to reshape a lot of industries.

Take something as routine as health inspections. They require inspectors to visit a physical site. But cities only have so many inspectors.

UZZI: Some of the most important inspections we do still go by a really old method that they call pinch-and-sniff. That’s not just a colorful term; it’s actually what they do. Okay? If they want to check meat and see if it’s good or bad, they’ll do a pinch-and-sniff.

LOVE: As you can imagine, all that in-person pinching and sniffing gets pretty costly.

So what some cities are starting to do is to figure out which restaurants are most likely to have health code violations. If they have this information ahead of time, they can direct their inspectors to these “at-risk” restaurants and ultimately offer the public better protection without adding resources. Here is where machines can come in.

UZZI: We now have, as it turns out, incredible amounts of data on restaurants. We can take all of these variables that otherwise are beyond human comprehension to figure out how they relate to health, but the machine can find the pattern.

What that means is that we no longer have to go person to person to each restaurant. We could have a monitoring system that says, “Hey, guess what? When you have this combination of food on a menu, when you have a kitchen of this size, when you enter a new worker into your workforce, you have a problem.” The machine can help us with all of that, and that allows us to scale human effort on a level we’ve never been able to do before, and that’s really why they’re going to change everything.
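
As a rough illustration of the kind of triage system Uzzi describes, the sketch below scores restaurants by predicted violation risk so that inspectors can visit the riskiest sites first. The feature set, the tiny training data, and the use of scikit-learn’s random forest are assumptions made for illustration, not a description of any city’s actual program.

```python
# Illustrative sketch, assuming scikit-learn and an invented feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per restaurant:
# [risky_menu_items, kitchen_size_sqft, new_hires_last_90_days, days_since_last_inspection]
X_train = np.array([
    [12, 400, 3, 200],
    [2, 900, 0, 30],
    [8, 350, 1, 365],
    [1, 1200, 0, 60],
])
y_train = np.array([1, 0, 1, 0])  # 1 = violation found at the last inspection

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score the current pool and send inspectors to the riskiest restaurants first.
X_current = np.array([
    [10, 380, 2, 150],  # restaurant A
    [3, 800, 0, 45],    # restaurant B
])
risk = model.predict_proba(X_current)[:, 1]
priority = np.argsort(risk)[::-1]  # indices ordered from highest to lowest predicted risk
print(priority, risk)
```

In practice a city would train on years of real inspection records and far richer features, but the prioritization logic is the same: score every restaurant, then send people to the top of the list.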

[music interlude]

LOVE: Clearly, more and more decisions are going to be made with the help of machines.

But David Ferrucci is pushing for something a little more ambitious.

David FERRUCCI: When you look at examples from Star Trek, Geordi is talking to the computer and they’re problem-solving together. This is much more about, “Give me the information I need, what am I missing, do the analogical reasoning, do the deductive reasoning, generate hypotheses for me.” It’s fluent and it’s precise and it’s logical and it helps me make a decision.

LOVE: Ferrucci was formerly the principal investigator of IBM’s Watson project. You know Watson—the computer system that beat Ken Jennings on Jeopardy!

Today Ferrucci is a senior technologist at Bridgewater Associates, and founder and CEO of a new artificial intelligence technology company called Elemental Cognition.

Ferrucci wants to see machines that don’t just solve the problems we tell them to, but ones that actually reason with us. He wants to build machines that think and communicate a lot like we do—machines that can be genuine collaborators.

FERRUCCI: Being that collaborative thought partner that can sit down there and fluently dialogue with you and say, “Well, what are you concerned about?” “I’m concerned about my inventory for the upcoming spring. What data do you have on that?” “Well, I know this, this, and that.” “Well, I’m thinking about taking a chance on this, so what are my odds considering what’s going on in Europe and what happened last spring?”

So now I’m having this fluent dialogue where the machine understands not just all the concepts, but how to map them into language and how to iterate on them with me. And that we don’t have today. We don’t have anything like that today.

LOVE: No, we really don’t have anything like this today. Today, artificial intelligence is actually pretty superficial. Computers can analyze a ton of data and pick up on all kinds of statistical relationships. But they don’t understand these relationships the way we as humans do: as real forces that act on real objects in a real world in real time.

We also have words that can precisely communicate our model of the world to someone else.

If you tell a human, “Oh look, that ceramic coffee mug is about to fall off that high shelf and onto the concrete,” they immediately recognize what those words mean, build a model of the event in their own heads, and deduce what will happen next … and it doesn’t look good for the mug.

[mug crashing]

But right now, computers don’t really recognize what mugs and concrete and falling really are. Sure, you can code up a simple model that says if a mug falls it has a certain probability of breaking. But the computer isn’t building the same scene that we build in our heads and actually deducing this conclusion itself.
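
That “simple model” can be made concrete. The sketch below is a deliberately shallow toy in Python: a breakage rule encoded as a probability lookup. The materials, surfaces, and probabilities are made up.

```python
# A deliberately shallow model of the mug example: a probability lookup,
# not a mental simulation of shelves, gravity, or concrete.
import random

# Made-up breakage probabilities by (object material, landing surface).
BREAK_PROB = {
    ("ceramic", "concrete"): 0.95,
    ("ceramic", "carpet"): 0.20,
    ("steel", "concrete"): 0.01,
}

def falls_and_breaks(material, surface):
    """Returns True if the object 'breaks', decided by a coin flip against
    a looked-up probability rather than by reasoning about the scene."""
    p = BREAK_PROB.get((material, surface), 0.5)  # shrug for unknown pairs
    return random.random() < p

print(falls_and_breaks("ceramic", "concrete"))  # usually True
```

The point is what is missing: there is no shelf, no gravity, and no deduction, just a number attached to a pair of labels.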

It certainly can’t converse with us fluently about it.

At least not yet.

FERRUCCI: You want to build machines that can develop and engage a compatible understanding of the world to help you make decisions in your own terms. Not like the black box that says, “This happened in the past; I saw this pattern. Maybe it will happen in the future.” Rather, it’s sort of the open box that stands there and cogitates about the world in the way you would. In other words, models it and thinks about it and talks about it the way you would.

LOVE: So how could this model come to fruition? Ferrucci imagines breaking down human thought into tiny pieces—kind of like the “periodic table of elements,” but for ideas instead of chemicals.

Machines would then learn how to combine these elements to have more complex thoughts. This learning would occur over time and, ideally, alongside humans learning the same things.

FERRUCCI: That machine has to grow up with you in some sense. I don’t mean literally grow up with you when you’re a baby or something, but it has to interact, evolve, it has to be part of the process—just the way you work with a team of people. Over time, you get your own language. You get your own common model of how the world works around you. You can speak about it efficiently and effectively.

LOVE: And this conversing back and forth with humans—that’s how Ferrucci envisions computers learning to understand us.

FERRUCCI: How does it simply learn what’s running a race in the park with your friends when prior to that all it really knew was that there were locations where people can be at different points in time?

My claim is that there are primitive cognitive abilities or elements that you can use, that you could build on. This will be the way a child will sit there and say, “Hey mommy or daddy, what’s a race?” “Well you start here, and you’ve got to move, you’ve got to end up there. If you get to the finish line first, you win.”
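
To make the “primitive elements” idea slightly more tangible, here is a toy sketch, my own illustration rather than anything from Elemental Cognition, in which the concept of a race is composed from simpler primitives a machine might already have: positions sampled over time, a finish line, and an ordering on arrival times.

```python
# Toy illustration (not Elemental Cognition's method): composing the concept
# of a "race" out of primitives the machine already has -- locations,
# movement over time, and ordering.

def time_of_arrival(path, finish_line):
    """Primitive: first recorded time an agent's position reaches the finish line.
    `path` is a list of (time, position) samples; returns None if never reached."""
    times = [t for t, pos in path if pos >= finish_line]
    return min(times) if times else None

def race_winner(paths, finish_line=100.0):
    """Composed concept: everyone starts, moves, and whoever reaches the finish
    line first wins."""
    arrivals = {name: time_of_arrival(path, finish_line) for name, path in paths.items()}
    finishers = {name: t for name, t in arrivals.items() if t is not None}
    return min(finishers, key=finishers.get) if finishers else None

paths = {
    "you":    [(0, 0.0), (5, 40.0), (10, 100.0)],
    "friend": [(0, 0.0), (5, 55.0), (9, 100.0)],
}
print(race_winner(paths))  # "friend" reaches the finish line at t=9, before t=10
```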

LOVE: But—and this is key—the machines wouldn’t reason exactly like humans. They would still have qualities that are different, meaning that while they would work with humans on our level, they would also bring something decidedly nonhuman to the table.

For one, they don’t share our human biases, like relying too heavily on the first piece of information we find, or overestimating how often really terrifying things like tornadoes occur near our home.

Computers also have a lot more processing power. They can seek out and analyze information from lots of places simultaneously. This could make our decisions more informed and much faster.

FERRUCCI: The nice thing about computers is that they’re virtual and can interact with thousands of people at once and integrate that information very quickly. Whereas I can’t integrate with your brain all that quickly. In fact, that’s what we spend most of our time doing: dealing with that bottleneck.

LOVE: Faster, more informed, more clear-eyed decision-making? Ferrucci’s vision is kind of the dream.

FERRUCCI: This vision is not new. This vision is as old as computers are.

[music interlude]

LOVE: So Ferrucci imagines machines that can converse with us and learn with us and share our understanding of the world. And based on his work on Watson, if anyone can do it, it’s probably him.

But just what kinds of challenges is he up against? How do humans converse and learn and understand the world?

To find out, we spoke with Sandra Waxman, a professor of psychology and a faculty fellow at the Institute for Policy Research at Northwestern University.

And to be clear, Waxman agrees that human–machine partnerships could have a lot of promise.

Sandra WAXMAN: If you’re having a hard time making a decision, it’s often good to have a creature or a machine that looks at the world differently or that can compute over different orders of magnitude of data, including your own data, which is what a machine is saying when it says, “Remember, you have a bias to think really….”

That’s a terrific example of a partnership that can be beautiful and wonderful and can work, but it’s also a terrific example of a partnership that’s there to further a particular goal and people aren’t created that way.

LOVE: As they exist today, machines don’t have their own agendas, their own goals, which could be at odds with ours. And most of us are probably okay with that. Even Ferrucci acknowledges that human goals and values will drive these partnerships.

But this fact alone likely makes teaching a machine really different from teaching another human.

WAXMAN: You never think of your infant or of your neighbor’s infant, what is this infant here to do for me? That’s just not what human commerce, human culture, human relationships are about.

LOVE: And Waxman points out another huge difference between humans and machines—one that might complicate efforts to teach machines the same way we teach children: machines don’t have the same emotional stakes that we do.

WAXMAN: I may think that I’m telling my kid about where to stop on the street corner when she crosses to go to kindergarten, but my kid is learning way more than that. She’s learning about what I’m happy about and proud of her for and where I’m scared that she might not stop.

LOVE: Can machines really learn to understand a speaker’s underlying intention, why humans might feel happiness or pride or fear?

It is hard to overstate just how thorny these problems really are. But without understanding intentionality or emotions, computers may not be able to understand us—or what the heck we’re really talking about—at all.

WAXMAN: We’re continually astonished by how intricately interwoven thinking and language are, even in prelinguistic infants.

LOVE: And according to Waxman, for machines to learn the way humans do, it won’t be enough for them to simply experience situations alongside us. That’s because, to some extent, we are hardwired to focus more on some things than others. Namely, the things in our environment that are most likely to be meaningful to humans.

Take language learning. Babies don’t just treat every sound they hear the same way. Instead, we have evolved over millions of years to pay special attention to certain sounds. Like, human speech.

Anonymous Parent: Oh, is that a ball? Who wants to throw me a ball? Roll it over here. Roll me the ball.

LOVE: And what Waxman has shown in her work is that infants not only prefer listening to language—they also link this language to the world.

Have you ever wondered how babies figure out that guppies and marlins and trout are all members of a category we call “fish”?

In a series of studies, Waxman and her colleagues presented young babies with a cognitive puzzle: they asked whether the babies could detect that images of different fish could all be lumped into a single category. And they were specifically interested in knowing whether babies would be better at solving this puzzle if they listened to language while viewing the pictures, because hearing language would help clue them into the fact that all of these images had something in common.

So they showed the babies several pictures of different fish, one at a time. And then they showed them two pictures side by side: a new fish they hadn’t seen before, and something totally different, a dinosaur.

What they found was that language has a special effect. The babies who had been listening to the language noticed that one of the images belonged to this familiar “fish” category. But babies who had listened to other sounds, like tones, didn’t seem to form this category. They looked at both images the same way.

This tells us that even before babies can roll over in their cribs, they expect speech to link to something.

And you see the same “special” effect when babies hear other primate vocalizations too.

[Lemur call]

That was a lemur call. When Waxman ran the same study, but using lemur calls instead of human language, she found really similar results.

WAXMAN: When they hear the lemur call, the lemur vocalization, they form the category. The data from babies listening to language and listening to lemur calls at three and four months lay on top of each other. It’s as if, if you just looked at their data, you couldn’t tell who was listening to our primate cousins and who was listening to our actual human cousins. That’s crazy.

LOVE: Infants didn’t learn to pay attention to lemur calls. They’d never even heard lemur calls before.

And interestingly, infants only retain this keen interest in lemur calls for so long. Babies are experts in the social world they live in. As the days go by and they don’t hear lemur calls from their parents, or siblings, or other members of their community, they uncouple this link between lemur calls and categories.

WAXMAN: By the time they’re six months, the effect of the lemur call falls off, while human language still promotes object categorization all the way through the first year and beyond. There are inborn expectations about certain kinds of signals that are going to be meaningful to me, meaningful in the broadest sense of the word.

How much does it help the learner to start with these innate predispositions or biases? If you’re a human learner it helps you a lot. If you’re not a human learner, I think it depends what you want that learning machine to do.

LOVE: Can machines be hard-coded to share our preferences—even if those preferences are a moving target? How important is it that humans and machines glean the same insights from the same experiences? Will we ever truly be able to understand each other?

These are some really tough questions, and we won’t have firm answers anytime soon.

But regardless of the precise form our mechanical collaborators take, we are going to be seeing more and more of them: showing us our blind spots, helping us to prioritize, and perhaps—in ways we can’t even imagine today—changing how we think.

[music interlude]

LOVE: This program was produced by Jessica Love, Fred Schmalz, Emily Stone, and Michael Spikes.

Special thanks to our guests Brian Uzzi of the Kellogg School, David Ferrucci of Bridgewater Associates and Elemental Cognition, and Sandra Waxman of Northwestern University’s Department of Psychology.

You can stream or download our monthly podcast from iTunes, Google Play, or from our website, where you can read more about artificial intelligence and human–machine partnerships. Visit us at insight.kellogg.northwestern.edu. We’ll be back soon with another Kellogg Insight podcast.

Featured Faculty

Brian Uzzi, Richard L. Thomas Professor of Leadership and Organizational Change; Professor of Management and Organizations

Adjunct Professor of Management and Organizations