Data Analytics Jul 6, 2015
Can Computers Make Us Better Thinkers?
IBM Watson creator David Ferrucci on the thought partnership at the heart of machine learning.
David Ferrucci was the lead scientist behind the development of IBM’s Watson project, a computer system that went on to beat the top human champions on the television game show Jeopardy!. In the words of Kellogg School professor Brian Uzzi, Ferrucci has “broken the sound barrier of how humans and machines think.”
Uzzi and Ferrucci recently discussed the human-machine partnerships at the heart of machine learning and computational social science, as part of the Kellogg School’s first Computational Social Science Summit. Ferrucci is now with the investment management firm Bridgewater Associates. Uzzi is the director of the Northwestern Institute on Complex Systems and faculty director for the Kellogg Architectures of Collaboration Initiative. The interview has been condensed and edited for clarity.
Uzzi: In the creation of Watson, did computational social science play a role?
Ferrucci: Watson was able to do what it did only because it was connecting to what humans are creating, and how they’re sharing information. Humans are communicating digitally. They’re writing down every thought they have. Not only are they communicating in different forms—they are also rating, ranking and regurgitating or re-describing in a variety of different ways. All that information that humans are creating for each other is the kind of information that Watson trains on.
Even though Watson couldn’t look up the Jeopardy! question in a database and find the answer, it could take the phrases, the words, the concepts expressing that question and try to figure out: In what other ways might that information be expressed? If Watson could do that, then it could say, “This is most likely the answer to this question, and here is why.”
Uzzi: If it hadn’t been for all of this crowd-sourcing of information from people all around the world, Watson wouldn’t have been possible.
Ferrucci: That’s right. But there are other things that enabled Watson to exist. One is just the computing power. The other is machine learning.
Uzzi: When I think about learning, it basically means to gain more knowledge. Learning could be changing my point of view. It could be finding causal relationships. It could be an aha moment, like I’ve learned something even though I have no understanding of it, and so forth. Where are machines on this continuum?
Ferrucci: If I show the computer a bunch of pictures of sailboats and a bunch of pictures of cars, and it sees all the pixels in those pictures, then the human tells it, “This is a sailboat, that’s a car. That’s not a sailboat, that’s not a car,” I don’t tell the computer how to recognize the differences. I just give it the examples. I give it the input features. The output features are: sailboat, car, sailboat, car. The computer can learn that relationship. It can discover the differentiating patterns in those images without me telling it, “Oh, you have to look for the sail. The sail is typically white.” When the machine learns the function from the inputs to the outputs, we think of that as machine learning.
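Ferrucci’s point — that the machine learns the function from inputs to outputs without being told which features matter — can be sketched with a toy supervised learner. This is a minimal illustration, not Watson’s method; the two-number “features” and the data are invented for the example.

```python
# Minimal sketch of supervised learning: a perceptron is shown labeled
# examples (feature vector, label) and discovers the separating pattern
# itself, rather than being told rules like "look for the sail."

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label), label +1 (sailboat) or -1 (car)."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score > 0 else -1
            if pred != y:  # adjust weights only when the guess is wrong
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "sailboat" if score > 0 else "car"

# Hypothetical pixel-derived features: [tall_white_region, wheel_like_shapes]
data = [([0.9, 0.1], 1), ([0.8, 0.2], 1),   # sailboats
        ([0.2, 0.9], -1), ([0.1, 0.8], -1)]  # cars
w, b = train_perceptron(data)
```

The human supplies only inputs and labels; the differentiating pattern lives in the learned weights.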
Uzzi: Let’s say the pattern to recognize is: if you are driving and you see a curve, you need to slow the car down. Let’s say some other part of the machine knows that when a surface is wet, it is slick. Could the machine then make the conclusion that a curve on a dry road and a curve on a wet road should be treated differently?
Ferrucci: The short answer is if it never experienced a curve that is wet, no.
How would a human make that connection? If they’ve had no experience with a wet road, they might not be able to make that connection either. If they had that experience and experience with the curve, and put the two together because they had a deeper logical understanding of what the forces were like, they could reason about it. How do they learn to reason? They need that logical model. They need that deeper understanding.
A human might learn that when they’re driving on a slick surface things change by actually having the experience and feeling what it’s like—or they might have learned it because someone explained it to them through language, through pictures and images and diagrams.
That deeper understanding: if it’s not there, the computer is not going to get that. How does it get there? One thing is I can literally tell the computer, “Don’t worry about getting that experience. I’m just going to tell you that relationship is true.”
“The more data we generate, the more smarts we’re going to need to figure out what the data is actually saying.” —David Ferrucci
Ferrucci: That’s right.
Uzzi: Then how does it get us any further than we already are on our own?
Ferrucci: We can all use the help. If a machine can do that faster and more completely, we’re all better off. It can help me see what the alternatives are and what the available evidence is for those alternatives, help me to weigh them, help me to think clearly about them.
Uzzi: We’re all very soon going to be able to have sensors on our bodies that can pick up on things that we never could process before: Am I emotionally activated talking to you, or am I bored? Do I eat too much when I’m around a certain locale? How do I make myself healthier and happier?
We’re collecting all this data but who’s going to be around to actually analyze it for us?
Ferrucci: The more data we generate, the more smarts we’re going to need to figure out what the data is actually saying. How do we separate the signal from the noise?
I think one of our human frailties is we think we know what we need to know to make decisions. Do we? How do we know we know enough? What are we missing?
There is a ton of information out there. I need something to filter it, funnel it, synthesize it, relate it to my situation, and explain it to me in a way that I can absorb it. That is the difference between just finding that function between the input factor and the output variable and really delivering an explanation for what’s going on that is consumable by me, the decision-maker.
Uzzi: Because what might feel right for you might not feel right for me.
What you’re talking about gives me goose bumps. It also scares me a little bit. I’m going to eat healthier because I’ll have feedback on when I eat poorly and when I eat well. I’ll exercise in a more efficient way. I could be a better father, giving the right kind of feedback, because I can observe what my children do as a result of it.
What if I find out things about myself that I don’t like? The information could be shocking, depressing. It could create anger. I might revolt against it.
Ferrucci: I would hope that our thought partnership with computers ends up making our thinking, our decision-making, our biases ultimately more transparent.
Uzzi: I love this term “thought partnership.” When people hear about Artificial Intelligence or they hear about machine learning, it feels very cold. It doesn’t have an attraction. Human beings are very much attracted to warmth in one another. Can you talk a little bit more about how you envision this thought partnership between human and machine?
Ferrucci: Forget about the machine for a moment. Think about people helping people. You have someone who’s a trusted friend, who knows you at a personal level, who is brilliant in their capacity to gather, collect and synthesize information to help you make better decisions for yourself, for your family and your job.
Whatever your values are, that person helps you achieve them with greater clarity of thought and clarity of purpose. That’s our thought partner. Who doesn’t want that?
Uzzi: Sounds amazing. It’s like I get my own personal board of directors, I get a teacher, and I get a close friend.
One of the ways you overcome an individual’s biases is you put them in a diverse group or a diverse network. That will expose them to multiple points of view. There is a phenomenon where everybody in the crowd is wrong but if you aggregate all those wrong answers, you’re actually getting as close as you possibly could to the right answer.
These are ways in which to try to advance human decision-making as well. Is there a sweet spot perhaps between the machine and these human systems, where you put two great things together and you get more than you could if it was just man and machine?
Ferrucci: Absolutely. The machine is learning from the crowd but then funneling that learning into the personal interaction.
It sounds abstract, but you see shadows of that today. Speech recognition has gotten so much better. It’s very contextual now. It’s learning the context in which this word most likely follows that word based on everything it sees: how language is used on the web, what the surrounding context looks like. It’s using your speech to learn better which sounds are followed by which other sounds, and which other words—through interacting with the crowd, understanding that, and mapping it.
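The contextual idea Ferrucci gestures at can be sketched as a tiny bigram model: count, from text, which word most often follows another, the way a speech recognizer uses context to pick between acoustically similar candidates. The corpus here is a made-up fragment; real systems train on vastly more data.

```python
# Minimal bigram sketch: learn from a toy corpus which word most likely
# follows a given word, then use that to predict the next word.
from collections import Counter, defaultdict

corpus = "the car turned the corner the car slowed on the wet road".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count how often nxt follows prev

def most_likely_next(word):
    """Return the word that most frequently followed `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]
```

In this corpus, “car” follows “the” more often than any other word, so the model prefers it as the continuation, the same statistical preference a recognizer uses to resolve ambiguous audio.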