Machines are getting smart. Really smart. So what does this mean for us humans?
Short answer: the benefits are limited only by our imagination.
For a longer answer, here’s what our faculty have to say about machines as counselors, coaches, chauffeurs, and full-fledged intellectual partners.
Are therapist robots in our future?
In a recent study, Kellogg’s Eli Finkel and colleagues had participants share a difficult personal story with a robot named Travis. Sometimes Travis responded by nodding, swaying slightly to mimic breathing, and displaying supportive text, like “I completely understand what you have been through.” Other times Travis remained motionless and displayed flat lines like, “Okay, please continue.”
When Travis reacted by moving and displaying supportive text, participants rated it as more social and competent. They even leaned in and made better eye contact when speaking to the robot, both signals of warmth and openness.
In another study, participants felt better about themselves when Travis appeared emotionally attentive.
“We might not have to look too far in the future before robots might play an emotionally significant role in our lives,” says Finkel.
Therapy machines already exist, in the form of exoskeletal robots. These wearable robotic devices assist people in rehabilitation settings. Think of a mechanical pair of pants or a sleeve that someone wears to help them stand upright or reach for an object.
These machines, designed to work with rather than for humans, are programmed to push us to our limits without letting us fall over. In a sense, their work is not unlike that of a coach who must elicit top performances from her athletes while still keeping them safe.
We use machines all the time. That doesn’t mean we are ready to let them get behind the wheel. What would it take to make our interactions with robots less fraught?
Research by Kellogg’s Adam Waytz suggests a way forward. “Typically when you humanize technology, people tend to like it more,” says Waytz.
He and colleagues gauged people’s responses to two different self-driving car simulators. They equipped one with a humanlike voice, and gave it a name (Iris) and a gender (female); they left the other voiceless. As the vehicle mimicked steering and braking, passengers were less stressed by, and more trusting of, the machine that spoke to them.
David Ferrucci thinks we are at the beginning of a beautiful friendship. With computers, that is.
“One of our human frailties is we think we know what we need to know to make decisions. Do we? How do we know we know enough? What are we missing?” says Ferrucci, the lead scientist behind the development of IBM’s Watson, in a conversation with Kellogg’s Brian Uzzi.
Computers are uniquely capable of taking advantage of the plethora of data newly available to us, he says. They can collect it, filter it, analyze it—and soon enough, they will be able to explain it to us in a way we can understand.
In addition to helping us cope with huge amounts of data, this machine-as-thought-partner can help uncover biases that may be skewing our decision-making, ultimately helping us make smarter, clearer decisions.
What will it take to get us there? In a recent podcast, Ferrucci describes a future where humans and computers grow up together.
“I don’t mean literally grow up with you when you’re a baby or something,” he clarifies. “But it has to interact, evolve, it has to be part of the process—just the way you work with a team of people. Over time, you get your own language. You get your own common model of how the world works around you. You can speak about it efficiently and effectively.”
Speaking of good decisions—it turns out that your emotions rule.
Stock traders about to buy or sell would be smart to get a handle on their own moods before they pull the trigger. And there’s an algorithm ready to help them.
Uzzi, along with a team of colleagues, analyzed 886,000 trade-related decisions and 1,234,822 instant messages from 30 stock traders over a two-year period. The researchers created an algorithm that tagged each communication with a probable emotional state—unemotional, extremely emotional, or between the two—based on the words the trader used.
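The core idea is a classifier that buckets each message by how emotionally charged its wording is. Here is a minimal sketch of that approach in Python, with a hypothetical word lexicon and made-up thresholds; the study’s actual algorithm and word lists are not described here, so treat every name and number below as an illustrative assumption:

```python
# Hypothetical sketch of word-based emotion tagging.
# The lexicon and thresholds are invented for illustration, not taken from the study.
EMOTION_WORDS = {"furious", "thrilled", "panic", "terrible", "amazing", "hate", "love"}

def emotion_score(message: str) -> float:
    """Return the fraction of words in the message found in the emotion lexicon."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in EMOTION_WORDS for w in words) / len(words)

def tag_state(message: str, low: float = 0.05, high: float = 0.20) -> str:
    """Bucket a message as unemotional, intermediate, or extremely emotional."""
    score = emotion_score(message)
    if score < low:
        return "unemotional"
    if score > high:
        return "extremely emotional"
    return "intermediate"

print(tag_state("Buy 100 shares at the open."))          # no emotion words
print(tag_state("I hate this terrible panic selling!"))  # heavy emotion words
```

A real system would use a richer lexicon or a trained model rather than a fixed word list, but the pipeline is the same: score each message, then map scores onto discrete emotional states.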
The team found that traders made less lucrative trades when they were in either a highly emotional or an unemotional state.
“When they were at an intermediate level of emotion, somewhere between being cool-headed and being highly emotional, they made their best trades,” says Uzzi.
One eventual goal? A machine-learning program that could mine digital communications in real time to provide traders with feedback about their current emotional state.