Podcast: You Had Me at “Bleep Blorp”
Innovation / Social Impact · Nov 3, 2016


How humans and robots are learning to trust each other.

[Illustration: A man and his coffee maker attend couples counseling. Art by Lisa Röper.]

Based on the research and insights of

Adam Waytz

Rima Touré-Tillery

Brenna Argall

Todd Murphey


We often have complicated relationships with machines, whether we are anthropomorphizing our navigation system, expressing our “love” for our phone, or getting creeped out when technology gets “too human.”

But do we trust these machines… and do they trust us?

Kellogg Insight takes a multifaceted look at human-machine trust with four researchers who approach these relationships in different ways. Kellogg professors Adam Waytz and Rima Touré-Tillery look at how designers can build human trust in nonhumans by integrating human characteristics—while avoiding the “uncanny valley” of over-humanization. Brenna Argall and Todd Murphey, from Northwestern’s McCormick School of Engineering and Applied Science, design robots that can learn to trust humans.

Podcast transcript

[music prelude]

Fred SCHMALZ: It is pretty much impossible to get through the day without using a lot of machines. ATMs, laptops, Siri… the list goes on and on.

And yet as indispensable as machines have become to humans, our relationship with them has always been a little … tense. How much can we really trust them?

Welcome to the Kellogg Insight podcast. I’m your host, Fred Schmalz. In this episode, we explore what machines can do to earn our trust.

And in a twist, we also investigate how we can earn their trust. So stay with us.

[music interlude]

SCHMALZ: From HAL 9000 to the Terminator to WALL-E, technology has long captured our imagination—and our unease.

Adam Waytz is an associate professor of management and organizations at the Kellogg School.

Adam WAYTZ: I think a lot of the prior theorizing around human–computer interaction, even before we had the advanced technology and robotics that we do today, was really concerned about this idea of the uncanny valley, that people will be too creeped out by technology that’s too humanlike, or people are going to be averse to technology overtaking them in the workforce—these old science fiction tropes of robot uprisings and things like that.

SCHMALZ: A mistrust of machines does make a certain kind of sense. Even if we don’t believe that robots are plotting to become our overlords, aren’t they going to take our jobs? Crash our cars? Destroy our ability to memorize phone numbers and read maps?


WAYTZ: Human beings really like autonomy. We like having free choice, we like having agency, we like having control, we like freedom and liberty. We feel that when we give tasks to robots, or when we create humanoid technology, that that technology will usurp our agency and diminish our autonomy, and diminish our freedom.

SCHMALZ: Yet despite all of these reasons to be suspicious of technological advances, Waytz has found that we do, in fact, trust machines—particularly ones that look and act like us.

WAYTZ: Typically when you humanize technology, people tend to like it more. But in fact what we find in virtually all of our studies is that humans are surprisingly willing to trust technology when it has humanlike characteristics.

SCHMALZ: To understand why this may be the case, consider one of the most talked-about technological advances in recent years: self-driving cars. In one study, Waytz and his colleagues created a highly realistic self-driving car simulator with a humanlike voice, a human name, Iris, and a gender, female. The objective was to see how participants would respond to the machine’s autonomous steering and braking, as compared to participants who were given a simulator with no voice that performed the same functions.

WAYTZ: When you give something a humanlike voice, what we found was that that triggered perceptions in our participants that, “Oh, this car is smart. It can feel the road. It can plan where I need to go. It has a mind like a human.” What we found, and we showed this statistically as well, is that when you give the car a little bit of humanness in terms of voice and name, it increases the perceptions of the car as having a humanlike mind, being intelligent, and those ascriptions of intelligence then produce greater overall trust.

SCHMALZ: Those participants who drove in a “voiceless” simulator were less trusting of the machine—and more likely to be stressed out—than those who drove the “person-like” one.

There is an important nuance to this relationship, however:

For trust to really take hold—for us to truly be comfortable “botsourcing” tasks to our mechanical brethren—there needs to be a good “match” between the machine itself and the type of work it does.

Again, consider that study with the driverless cars. Not just any voice would do for Iris. Waytz hired someone with a specific vocal quality.

WAYTZ: She was not extremely robotic, but she conveyed a lot of confidence and certainty in her voice, and that was the kind of human you want for the task of driving you around, and around obstacles, and around long distances, and on the highway.

SCHMALZ: This concept of a “match” explains why some work is easier to imagine outsourcing to robots than others.

The first step in finding that match is to think of jobs as divided into two categories: cognitive and emotional. Cognitive work, says Waytz, relies on calculation and reasoning and other higher-order cognitive processes.

WAYTZ: On the other hand, there’s also more emotional work that involves feeling and sensing pain and pleasure, and dealing with social interaction. People consider robots to be cognitive but not emotional. They’re willing to ascribe that cognitive aspect of humanness to the robots, but not the emotional aspect of humanness.

SCHMALZ: In other words, most of us would be more open to robotic financial analysts than mechanical social workers.

However, you can change how comfortable people are with the idea of “botsourcing” a job by simply framing it differently.

Describe something like surgery as involving timing and precision, and we think, huh, I bet a robot could do that. I’d hire a robot surgeon. But frame it differently…

WAYTZ: If people are coming in with the idea that surgery is really an emotional task that involves recognizing that you are dealing with a human being that has a life history and pain and pleasure, and your job is to develop some empathy for that patient, but also suspend that empathy during the moment of creating an incision on the patient’s body, and this really requires a lot of emotional intelligence, then people will say, “Well, a robot doesn’t sound like a great entity to carry out this procedure.”

SCHMALZ: But there is a workaround, Waytz says, and it all goes back to that “match” between the machine and the work. If people can’t be convinced that the work is cognitive, maybe they’ll buy into the idea that the machine is emotive.

In fact, Waytz has some concrete advice for designers of robots that perform more emotional tasks, like caregiving.

WAYTZ: There’s great work in the realm of robotics and human–computer interaction that’s actually looked at what features of a robotic face convey emotion. These are typically facial features that convey kind of a baby face. People were across the board much more averse to outsourcing emotional jobs to robots, except when we presented a robot that had this kind of baby face to it and therefore appeared capable of warmth and emotion, and feeling, and interpersonal sensitivity.

[music interlude]

SCHMALZ: So humanlike characteristics—particularly ones that align well with the job at hand—make people more likely to trust machines.

But research by another Kellogg School professor suggests that when inanimate objects possess too many human traits, it can backfire.

Rima Touré-Tillery doesn’t study machines or robots. She’s an assistant professor of marketing, and she studies what we’ll call “spokesthings”—you know, like the talking raisins or scrubbing bubbles that try to convince us to buy a specific product.

Her research finds that some people experience these spokesthings as highly trustworthy—more trustworthy than actual humans. But other people don’t feel that way.

So, how do you know which people will trust a talking M&M?

Rima TOURÉ-TILLERY: We found out that it really depended on people’s beliefs about human nature.

SCHMALZ: For people who generally trust other people, it doesn’t much matter whether you’re hearing a message from a person or a spokesthing. But for people who are generally suspicious of fellow humans, the story changes.

TOURÉ-TILLERY: If I believe that humans are inherently good and can be trusted, then I am probably not going to be too affected by the nature of the spokesperson, but if I believe that humans are inherently ill-intentioned, bad, and cannot be trusted, then I’m more likely to like or to prefer something that is less human, something that is a talking object or an anthropomorphized spokesperson.

SCHMALZ: And while marketers can’t know how much inherent trust each person in their target audience has in their fellow human beings, prior research has shown trends among some broad demographic groups. Those lower in socioeconomic status tend to be lower in trust, as do those in interfaith marriages, and even first-born children. Which means that, for certain target audiences, a spokesthing is the way to go.

TOURÉ-TILLERY: If we’re delivering a health message, or things for which people tend to be defensive, or less trusting, this might be a way to get around that.

SCHMALZ: Touré-Tillery and her colleagues also manipulated how “humanlike” these spokesthings are—by adding some eyes here, a smile there.

In one study, she went a step too far. A lamp with just eyes was considered less than human. But adding an extra humanlike trait—arms—made people respond to the spokesthing as if it were human. So any benefit of not being human was lost on those more distrustful of others.

TOURÉ-TILLERY: The more human they get, the more I’m going to treat them like a person, and if I’m low in trust, if I have really negative beliefs about human nature, then that’s going to be transferred to those objects as well.

It needs to be something that’s talking to me, that’s trying to persuade me, but that is not quite human. This lack of humanity, this partial humanity so to speak, is really important.

SCHMALZ: Which is interesting, given that we all have a natural tendency to anthropomorphize.

TOURÉ-TILLERY: Human beings spontaneously anthropomorphize. It’s something that helps us make sense of the world, gives us a sense of control. We feel like we understand other people, so when something happens that we don’t understand, when we’re confronted with an event or a thing that we can’t really wrap our minds around, it’s easier to think of it in human terms.

[music interlude]

SCHMALZ: Questions about how much we should trust robots and other nonhumans are not new. But have you ever wondered the opposite: How much should robots trust us?

It turns out that some engineers spend a lot of time thinking about just this question.

Brenna Argall and Todd Murphey are professors at the McCormick School of Engineering and Applied Science here at Northwestern University.

They work on a variety of projects that explore how robots can be designed to understand us: what we are going to do next, how we can be helped, and just how much we can be, well, trusted to do something on our own, without the help of robots.

Let’s return to that example of a driverless car. We humans need to know that the car won’t drive us off a cliff, but the car needs to know how much control it should cede back to us in situations where automation is particularly tricky, like bad weather or a downtown intersection congested with pedestrians, bike messengers, and aggressive cabbies.

Under such conditions, our car might monitor what we do, looking for patterns that may predict how much we can be trusted.

Todd MURPHEY: There are behavioral aspects of driving that one might choose to specifically monitor.

SCHMALZ: That’s Todd Murphey.

MURPHEY: For instance, the way that we observe the world, that can be measured through eye tracking and head tracking. When we are turning a corner, our behavioral aspects of turning and looking to the left, looking to the right, making sure that there are no pedestrians. We know that we are checking for pedestrians, but the automation could simply be looking to make sure that you reliably move your head in a particular way prior to turning and correlate that to your successful outcomes, hopefully involving not hitting pedestrians.

SCHMALZ: In other words, the car is learning to trust.

MURPHEY: I think this idea of modulating trust over time based on the history with an operator is important because we don’t want to be incredibly conservative with people who’ve earned the right to make decisions in settings that are too complex for the automation to handle. On the other hand, we don’t want to be accidentally asking people who can’t make those decisions to make them just because the world has become complicated and the automation is struggling.

SCHMALZ: The dynamic nature of that give and take is something Brenna Argall considers in her design.

Brenna ARGALL: We also want this measure of trust in the person to adapt over time. Not only because we need to build up this history in order to correctly estimate their abilities, but also because their abilities are going to be changing. They might be becoming better drivers, or they maybe didn’t get any sleep last night and today they are an awful driver, and things like that. We want to also be responsive to changes in the human.

SCHMALZ: And this idea that the same person’s abilities can change from day to day—it affects a lot more than driverless cars.

One context in which Argall and Murphey work on this problem is exoskeletal robots—wearable robotic devices that can assist people in rehabilitation settings.

These robots—which may look like a motorized pair of pants or an arm-length sleeve—provide support and strength, enabling users to engage in physical therapy more safely.

So say someone is regaining the ability to walk after an accident. They may need help standing or balancing, without the risk of falling over. Yet, over time, as they get stronger they’ll need less and less assistance.

Adapting to its users’ ever-changing needs is kind of the point of robotic-therapy devices. Here’s Murphey again.

MURPHEY: The entire goal of integrating a robotic-assisted device or a robotic-therapy device into someone’s physical actions is that we think that we’re going to successfully make them better at something. That means that how we interpret their behavior has to change over time and that the degree to which we support them has to change over time.

SCHMALZ: A huge challenge is trying to get enough good data from a user, especially in circumstances where staying safe doesn’t provide much helpful input.

MURPHEY: For instance, if you are standing still and you’re successfully at the upright position, you’re really not doing very much, and so there’s not very much in the data to infer anything about trust from. You only can tell if someone is doing well or doing poorly when they are away from that upright posture and are able to either come back to upright or start falling over.

Similarly, driving on the interstate on a clear sunny day when there’s no other traffic and you’re just going along at the speed limit, you’re not doing very much that’s interesting enough to form an opinion about whether or not you’re trustworthy.

SCHMALZ: He compares the machine’s job to that of a coach trying to simultaneously train an athlete—test them, see what they are capable of—but also keep them safe.

Ultimately, the challenges of designing machines that work with humans—with each party ceding control to the other—rather than simply for them are really, really daunting.

Here’s Argall again.

ARGALL: The humans are really good at some things and the autonomous systems are really good at other things. Knowing actually when that trade of control would happen is super important. It has to be that both the automated system and the human have a really, really good understanding of the other one and what the other one is responsible for. Because if the autonomous system doesn’t give up that control, that’s going to be a problem. If it gives it up and the human doesn’t realize it’s being given up, that’s going to be a huge problem.

SCHMALZ: But even understanding who’s in charge of what seems to imply that both parties have access to the same information—that they see a situation the same way.

And they don’t.

MURPHEY: In robotics, it’s pretty common to use sensors that do not have parallels in human perception. There are regions of the electromagnetic spectrum that sensors use but that people don’t have access to through their visual systems. There are certainly regions in auditory signaling that we do not have access to. That means that the car is going to have perceptual access to lots of environmental behavior or lots of environmental variables that people do not have any intuition about. At the same time, the person has environmental information that the vehicle does not have access to.

Right now decision-making is always these two agents that are observing each other and then assessing each other accordingly. I think the two agents being able to explain their decisions to each other in ways that make sense to each other is both incredibly important but also incredibly hard.

SCHMALZ: In an ideal world, robots and people working together should be more capable than either alone. But getting robots and humans on the same page is a big challenge.

So which piece of that challenge is ultimately more difficult: Getting humans to trust robots, or robots to trust humans?

ARGALL: They’re both hard because of the human, to be honest.

[music interlude]

SCHMALZ: This program was produced by Jessica Love, Fred Schmalz, Emily Stone, and Michael Spikes.

Special thanks to Kellogg School professors Adam Waytz and Rima Touré-Tillery, and Brenna Argall and Todd Murphey from Northwestern’s McCormick School of Engineering.

Thanks also to Kellogg professor Kent Grayson for coming up with the theme for this month’s podcast. Grayson is the faculty coordinator for the Trust Project at Northwestern University. Visit its website, where you can watch a video series that features different perspectives on trust in business and society.

You can stream or download our monthly podcast from iTunes, Google Play, or from our website, where you can read more on trust and collaboration. Visit us at insight.kellogg.northwestern.edu. We’ll be back next month with another Kellogg Insight podcast.

Featured Faculty

Adam Waytz, Morris and Alice Kaplan Professor of Ethics & Decision in Management; Professor of Management and Organizations; Professor of Psychology, Weinberg College of Arts & Sciences (Courtesy)

Rima Touré-Tillery, Associate Professor of Marketing
