Psychologists and behavioral economists have established that humans exhibit a number of fairly predictable biases in their decision making. For example, when offered a menu, people generally choose one of the first items on it or the very last item.
It would be easy to ascribe these tendencies to a simple quirk of evolution—a kind of mental tic encoded in our genes. But what if the cognitive biases we exhibit when making choices are not limited to us? What if these biases are basic emergent properties of any system forced to make decisions with limited mental resources?
That is one of the questions implicit in new work by Yuval Salant, an assistant professor of managerial economics and decision sciences at Kellogg. In order to get inside the “black box” of humanity’s irrationality when deciding between many options, Salant created a simplified mathematical model of how a machine—an “automaton”—would make choices.
Mimicking Human Behavior
Salant’s automatons are basically decision-making algorithms expressed mathematically rather than in code. Some of these automatons commit the same “mistakes” that humans do when making decisions. An automaton may exhibit a “primacy” effect, in which it tends to choose one of the first items on a list. An automaton may also exhibit a “recency” effect, in which it goes for the last item on a list. The ability of automatons to mimic human behavior, combined with the fact that they are built from mathematical first principles, means automatons can be used to formally analyze the fundamental tradeoffs that we face when making choices.
One of Salant’s most human-like automatons is based on a decision-making strategy known as satisficing. A satisficer is someone who establishes in advance what criteria an option must meet in order for it to become his or her choice. So for example, a satisficer at a restaurant might declare, “I’m in the mood for chicken,” along with a handful of other specifications, and the first item on the menu to satisfy all of them would be the one he or she chooses.
When decision-makers are satisficing, it is not hard to see why they would be more likely to choose an item that appears earlier on a list: they stop as soon as they find one that meets their pre-existing criteria, and they may never look at every item on the list at all. Contrast this strategy with rational decision making, a standard assumption in many economic models. It turns out that making a decision "rationally" (which in economic terms means analyzing every possible option) requires a great deal of cognitive horsepower.
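The satisficing strategy described above can be sketched in a few lines of code. This is only an informal illustration (the menu, the criteria, and the function names are hypothetical, not Salant's formal automaton), but it shows how the primacy effect falls out of the stopping rule: items later in the list are never even examined.

```python
def satisfice(options, criteria):
    """Return the first option meeting every criterion, scanning in order.

    Returns None if nothing on the list qualifies.
    """
    for option in options:
        if all(criterion(option) for criterion in criteria):
            return option  # stop immediately; later items are never examined
    return None


# A hypothetical menu: two dishes satisfy the diner's criterion,
# but the satisficer only ever sees the earlier one.
menu = ["beef stew", "chicken soup", "grilled chicken", "pasta primavera"]
wants_chicken = [lambda dish: "chicken" in dish]

print(satisfice(menu, wants_chicken))  # prints "chicken soup"
```

Because the loop exits at the first acceptable item, a dish's position on the menu matters as much as its quality, which is exactly the primacy bias.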
A rational decision-maker has to have available a memory “cell” or “state” for every single item on a list. “It becomes interesting when the number of items is large. If the number of items is 20 or 30, it becomes more and more difficult to maximize utility,” says Salant.
In computer science parlance, this means that rational decision making does not scale: as the number of items an automaton has to sort through increases, the problem quickly exceeds the capacity of any system with limited memory. Salant's mathematical analysis proves that any automaton with less capacity than rational behavior requires will exhibit some bias in its decision making. In other words, limited memory is a possible explanation for why we see particular biases in decision making.
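The memory gap between the two strategies can be made concrete with a rough sketch (an informal illustration under assumed names and utilities, not Salant's formal proof). The rational maximizer must be able to remember which of the candidates is best so far, so the number of distinct memory states it needs grows with the length of the list; the satisficer needs only a constant amount of state, essentially "still searching."

```python
def rational_choice(options, utility):
    """Examine every option and return the utility-maximizing one.

    Tracking 'which option is best so far' means distinguishing among
    up to n candidates, so the memory demand grows with list length.
    """
    best = options[0]
    for option in options[1:]:
        if utility(option) > utility(best):
            best = option
    return best


def satisficing_choice(options, threshold, utility):
    """Return the first option whose utility clears a fixed threshold.

    Only constant state is needed: the automaton is either still
    searching or done. If nothing qualifies, settle for the last item.
    """
    for option in options:
        if utility(option) >= threshold:
            return option
    return options[-1]


scores = [4, 9, 1, 7]
identity = lambda x: x
print(rational_choice(scores, identity))         # prints 9 (the true maximum)
print(satisficing_choice(scores, 3, identity))   # prints 4 (first "good enough" item)
```

The two functions return different answers on the same list: the maximizer finds the best item wherever it sits, while the cheaper satisficer settles early, reproducing the primacy bias at a fraction of the memory cost.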
It is not just theoretical constructs that have limited memory and processing power—humans also have a finite and measurable supply of “working memory.” That is why it is so tempting to declare that Salant has found something like a universal model of decision making, even though his model does not take into account other constraints like the shortage of time with which we all contend.
Of course, humans are neither perfectly rational decision-makers nor strict satisficers, no matter how Spock-like or impulsive any individual may seem. Salant’s most compellingly life-like automaton lies somewhere in between these two extremes. He calls it a “history-dependent satisficer.”
A history-dependent satisficer has some ability to remember what options it has seen before, and to modify its criteria accordingly. The more memory this automaton possesses, the closer it gets to the choices yielded by rational decision making. “The more states the automaton has, the better a decision it can make,” Salant says.
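One way to picture a history-dependent satisficer is a two-phase rule: observe a few options to learn what "good" looks like, then satisfice against that learned benchmark. This particular rule (a secretary-problem-style heuristic with hypothetical names) is an assumption for illustration, not Salant's exact automaton, but it captures the idea that remembering more of what it has seen lets the automaton set a smarter bar.

```python
def history_dependent_satisfice(options, utility, sample_size):
    """Satisfice against a threshold learned from the options seen so far.

    Phase 1: observe the first few options to set a benchmark.
    Phase 2: accept the first later option that matches or beats it;
    if none does, settle for the last option on the list.
    """
    # Remembering the sample requires more memory states than a
    # fixed-threshold satisficer, but yields better choices.
    benchmark = max(utility(option) for option in options[:sample_size])
    for option in options[sample_size:]:
        if utility(option) >= benchmark:
            return option
    return options[-1]


scores = [3, 7, 2, 9, 5]
print(history_dependent_satisfice(scores, lambda x: x, 2))  # prints 9
```

A larger sample (more memory devoted to history) tightens the benchmark and pushes the automaton's choices closer to the rational optimum, echoing Salant's point that more states mean better decisions.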
Psychologists have established that working memory correlates closely with IQ, so one way of looking at this research is to say that the smarter the decision-maker, the better he or she will be at finding the single best option, at least under the circumstances Salant modeled.
“I don’t want to make a conclusive statement, but recency and primacy biases are what the analysis predicts will be the behavior of decision-makers that behave optimally given that they don’t have enough cognitive resources,” Salant says.
The effects that Salant derives using mathematical analysis show up everywhere in the real world. Humans are more likely to vote for candidates who appear first on a ballot, more likely to click on options at the top of their computer screen, and more likely to give high scores to competitors who appear first and last in a competition. There was even a lawsuit in the 1980s in which the federal government forced American Airlines to stop putting its own flights above those of its competitors in American's computer reservation system. (American had discovered that travel agents were much more likely to book the first flight they saw, which made the manipulation a lucrative one.)
Building on this research, the next question Salant is thinking about is whether his mathematical model of a good-enough decision-maker can be usefully incorporated into existing economic models. If it can, it might help bring flawed human choices into models that would otherwise assume we all have the resources to be rational.