How to Stop Worrying and Love the Robot That Drives You to Work
March 3, 2014

Discomfort about “botsourcing” can be reduced by manipulating the human-like attributes of machines.

Illustration by Yevgenia Nayberg

Based on the research of

Adam Waytz

In his previous part-time career as a music journalist, Adam Waytz encountered an insight that would shape his research fifteen years later as an assistant professor of management and organizations at the Kellogg School. “I asked a musician if he thought Napster”—the online file-sharing service that disrupted the music industry between 1999 and 2001—“was good or bad,” Waytz recalls. “And the musician just said, ‘You can’t stop the future.’” Now, as autonomous robots and other forms of artificial intelligence begin to disrupt other economic sectors, augmenting or displacing mental work once performed solely by humans, Waytz has become interested in investigating the psychological impact of these technologies.

In one study, Waytz and coauthors Joy Heafner of the University of Connecticut and Nicholas Epley of the University of Chicago tested how adding or removing human-like features affected the trust that a self-driving car’s occupants placed in it. In another, coauthored with Michael I. Norton of Harvard University, the pair examined how human workers react when certain kinds of jobs are reassigned to machines. The common thread between the two research projects, says Waytz, “is this question of ‘What gets to have a mind?’ Under what conditions are people willing to see nonhuman agents as having mental states: planning, thinking, feeling, and experiencing emotions?”

The Economics of Having a Mind

How we perceive nonhuman minds is interesting from a purely psychological perspective, but Waytz says it also matters economically as we cope with the forward march of technology. “You see it in the news every single day,” he says. “Economists are calculating the impact of automation in terms of productivity gained and jobs lost.” But while we may not be able to “stop the future,” Waytz says, we may be able to learn to trust it—which is where anthropomorphic features can have a real impact. “When you represent something as having a mind, that really enhances trust in that thing’s competence and reliability.”

Demonstrating competence and reliability will be crucial for manufacturers of self-driving cars. But Waytz and his colleagues were not merely interested in showing how much the occupant of an autonomous vehicle could be led to trust it. They wanted to examine something subtler: Would making the self-driving car more “person-like”—that is, giving it a voice, a name, and other psychologically individualized characteristics—make the rider trust it more or less than a self-driving car that was equally reliable but made no pretense of being anything more than a mindless machine?

The researchers put their experimental participants into a “very realistic, very immersive” driving simulator and divided them into three groups: one drove the simulator as they would a normal car; one rode in a simulator programmed to control steering and braking autonomously; and the last rode in a simulator that also steered and braked itself but featured additional human-like attributes such as a name, a gender, and a human voice.

Waytz found that riders in the two autonomous conditions reported trusting the vehicle more in its anthropomorphized state. Riders also trusted the person-like car more when it was involved in a minor accident caused by another driver. In addition, these riders showed less stress in response to the accident, as measured by a heart-rate monitor and video footage of their faces.

“That’s a complex finding,” Waytz says. “When you’re in an autonomous car you’re certainly going to blame it for an accident of any type. But in this specific kind of accident that is clearly not your fault, anthropomorphic features attenuated blame because people gave the car the benefit of the doubt, as if it were a person.”

Dehumanizing People

These results, says Waytz, do not apply only to autonomous vehicles. “Humanization [of technology] can also tell us about dehumanization of other people—treating them as mechanistic,” he explains. Waytz and Norton investigated this phenomenon directly, drawing a parallel between the emotional effects of losing one’s job to a machine and of losing it to a person of a nationality stereotyped as machine-like. “We asked people to reflect on whether their jobs were more rational or more emotional”—for instance, the difference between a financial analyst and a social worker—“and asked how it made them feel to consider that a robot would take it over,” Waytz explains.

Most participants were uncomfortable with the idea of having their jobs “botsourced,” but the researchers found two tactics that reduced that discomfort. One was framing the job as more rational or cognitively mechanistic. “People tend to be okay with the idea of robots having that kind of agency,” Waytz explains. “You could describe the job of performing surgery as requiring a lot of reason or a lot of compassion. It is probably both, but when you describe it as being more about rationality, people are less averse to the idea of a robot doing it.” The other was adding features to a machine or software that “make it seem capable of emotion,” whether physical, such as humanlike eyes, or nonvisual, such as a soothing voice.

Waytz and Norton then linked these findings to “a literature on dehumanization that looks at how different nationalities are stereotyped,” he says. According to this literature, Chinese, German, and British people are often perceived as more rational and rigid compared to Australian, Spanish, and Irish people, respectively. “What our participants said about their jobs being ‘botsourced,’ they also said about seeing their jobs outsourced to nationalities which they perceived as stereotypically ‘robotic,’” Waytz explains. The more “rational” a job, the less uncomfortable the participant was with the idea of it being outsourced to China, Germany, or Britain.

Responding to Disruption and Displacement

As the future inexorably unfurls into the present, advances in technology will continue to disrupt industries and displace workers. Waytz says his findings buttress economists’ projections that “knowledge work” will become newly vulnerable to automation. “The most susceptible jobs are those that require cold, rational, rote cognition,” Waytz says. “Less vulnerable are ones that require a social or emotional competence.” But he says that his research is not about divining whether this trend is good or bad. “Now we know a lot about what people think about robots and humans—and what capacities people consider to be uniquely human,” he says. Whether it is designing a safe, reliable autonomous car that people will trust or redefining the value of one’s own job against the shifting economic incentives of outsourcing and automation, “the important thing is to create a match between the nature of the job and the perceived capacities of the agent performing the job.”

Featured Faculty

Adam Waytz
Morris and Alice Kaplan Professor of Ethics & Decision in Management; Professor of Management and Organizations; Professor of Psychology, Weinberg College of Arts & Sciences (Courtesy)

About the Writer
John Pavlus is a writer and filmmaker focusing on science, technology, and design topics. He lives in Portland, Oregon.
About the Research

Waytz, Adam, and Michael I. Norton. In press. “Botsourcing and Outsourcing: Robot, British, Chinese, and German Workers Are for Thinking—Not Feeling—Jobs.” Emotion.

Waytz, Adam, Joy Heafner, and Nicholas Epley. 2014. “The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle.” Journal of Experimental Social Psychology. 52(May): 113–117.
