“Game theory,” says Ehud Kalai, a professor of managerial economics and decision sciences at the Kellogg School of Management, “is a mathematical tool to deal with interactions that involve incentives.” When they apply their skills to sociological, economic, or even biological situations, game theorists hope to gain an understanding of the underlying processes involved in those situations. And although game theory traces its history back to the 1920s, it is continually open to refinements in its meaning and application.
A significant strand of current research in the field involves the information available to individuals involved in games, an inclusive noun that refers to business deals, negotiations, and other enterprises as well as simple contests of mental skill. Following what has become a tradition in Kellogg’s Managerial Economics & Decision Sciences department, Kalai has made significant contributions to understanding the flow of so-called Bayesian games, in which the participants have what game theorists call “differential information.”
A recent paper coauthored by Kalai, Robert Weber, a professor of managerial economics and decision sciences at the Kellogg School, and Olivier Gossner of the Paris School of Economics and the London School of Economics focuses on two related aspects of differential information: information independence and common knowledge. The project outlined in the paper involves highly technical mathematical operations. According to Kalai, it contributes “a foundational part of game theory.”
Who Knows What?
When two or more individuals undertake any sort of negotiation or competition—for example, selling a house, bidding in an auction, or applying for one of the limited places in a popular college course—proponents of game theory regard them as opponents in a game. The ways in which opponents play a particular game depend critically on the amount of information each possesses—both their own information and the information that they think their opponent has.
To grasp the informational variables in play in game theory, imagine a simple game that matches the seller and potential buyer of a house. Some details about the sale are readily available; they include the asking price, the condition of the house, and the real estate taxes that it incurs. “Game theorists refer to this information as common knowledge variables,” Kalai explains. “Everybody knows the value of the variable, but also everybody knows that everybody knows it, and that everybody knows that everybody knows that everybody knows it, and so on.”
Another situation occurs when players in the game possess unique types of information. This might include how much the seller and the potential buyer like the house in idiosyncratic ways. Here, Kalai says, “the fact that he likes the house a lot does not have any influence on the probability of her liking the house, and vice versa. In such a situation we say that the types are drawn independently of each other, or simply that it is a game with independent types.”
A Condition of Information
In one part of their project, Kalai, Weber, and their European-based colleague studied a condition of information known as independence under common knowledge. This condition means, Kalai explains, that while the information available to the players may not be independent, “there is a common knowledge variable such that beyond this variable the information available to the players is independent of each other.” Put differently, such a variable “disassociates” the information of the players.
To illustrate that situation, Kalai and his colleagues imagine a game in which each player’s mood can affect the way he plays. “Since players care about the behavior of their opponents, it is important for them to know the moods of their opponents,” he says. “But unfortunately, each knows only his own mood.”
“With knowledge of the state of the sun, the players’ moods are independent.” — Kalai
Canny players will undertake a statistical analysis of the likelihood of opponents’ different moods. The analysis can be easy or difficult according to the circumstances.
The condition of independence under common knowledge deals with scenarios that are easy to analyze. Imagine that every player’s mood is equally likely to be happy or depressed if the day is cloudy, and 90 percent likely to be happy and hence just 10 percent likely to be depressed if it is sunny. Overall, the players’ moods are not independent. However, Kalai explains, “with knowledge of the state of the sun, the players’ moods are independent. For example, if a statistician who knows that it is sunny is asked about a player with an unknown mood, he would assign 90 to 10 chance to happy or depressed, regardless of any information about the mood of other players.” Thus, assuming that the state of the sun is common knowledge to all players, all have clear knowledge of the other players’ mood probabilities in every state, whether that be sunny or cloudy.
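The arithmetic behind the sun-and-mood example can be checked directly. The sketch below assumes, for illustration, a 50/50 prior on sunny versus cloudy weather (the article does not specify one) and uses the article’s mood probabilities: happy with probability 0.9 on a sunny day and 0.5 on a cloudy one.

```python
# Worked check of the sun/mood example with two players.
# Assumption for illustration: sunny and cloudy days are equally likely.
# From the article: P(happy | sunny) = 0.9, P(happy | cloudy) = 0.5.
p_sunny = 0.5
p_happy_given = {"sunny": 0.9, "cloudy": 0.5}
weathers = {"sunny": p_sunny, "cloudy": 1 - p_sunny}

# Marginal chance that a single player is happy when the weather is unknown.
p_happy = sum(pw * p_happy_given[w] for w, pw in weathers.items())  # ≈ 0.70

# Joint chance that BOTH players are happy: moods are drawn
# independently only once the weather is fixed, so we square
# the conditional probability within each weather state.
p_both_happy = sum(pw * p_happy_given[w] ** 2 for w, pw in weathers.items())  # ≈ 0.53

# Unconditionally, the joint does NOT factor into the product of marginals
# (≈ 0.53 vs ≈ 0.49), so the moods are correlated overall...
print(p_both_happy, p_happy ** 2)

# ...but conditioned on the common-knowledge variable "sunny",
# the joint factors exactly: 0.9 * 0.9.
print(p_happy_given["sunny"] ** 2)
```

Seeing a happy opponent raises the chance that the day is sunny, and hence that other players are happy too; fixing the weather removes that channel, which is exactly what “the state of the sun disassociates the moods” means.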
Now imagine that the mood controller is not the presence or absence of the sun but the level of ozone in the air the players are breathing. This is a more difficult scenario, and one that fails the condition of independence under common knowledge. The moods of the players are now independent, given the current ozone level. But while sunny or cloudy weather is common knowledge, the precise ozone level is not. “Even when a player knows the ozone level, he does not know that his opponents know it, and certainly he does not know what they assume about the knowledge of their opponents,” Kalai explains. “In other words, while the ozone level disassociates the mood probabilities, players have no common knowledge of these probabilities.”
Independence under Common Knowledge and Subjective Independence
Independence under common knowledge is not the only condition that studies of game theory have to consider, though. There is another, seemingly different mathematical condition known as subjective independence. Under this condition, each player, based on his own information, views the world as though the knowledge possessed by the other players is completely independent of each other. Mathematically, the condition is easy to state, but the mechanisms behind it were not well understood before the work of Kalai, Weber, and Gossner.
Applying the complex mathematics of game theory to the situation enabled Kalai—himself a Ph.D. mathematician—and his colleagues to gain a deeper understanding of the subjective independence of information. “A main message of our paper is that the hard-to-understand condition of subjective independence may be reinterpreted,” he says. “In games with only two players, this condition is meaningless, since the opponents of a player must be independent of each other; every player has only one opponent. But in games with three or more players, subjective independence is exactly the same as independence under common knowledge. In other words, in any situation in which every player thinks subjectively that his opponents are independent there must be some objective common-knowledge variable, such as the sun in my example, that disassociates the types of [information possessed by] all the players.” Put simply, Kalai, Weber, and Gossner show that independence under common knowledge—a well-understood part of game theory—is equivalent to subjective independence, just from a different mathematical point of view, so long as there are three or more players.
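In symbols, the equivalence can be sketched as follows (an informal rendering for readers who want the shape of the result, not the paper’s exact formulation). With players $1, \dots, n$ holding types $t_1, \dots, t_n$, subjective independence says that each player $i$ sees his opponents’ types as mutually independent given his own information:

```latex
\Pr(t_{-i} \mid t_i) \;=\; \prod_{j \neq i} \Pr(t_j \mid t_i)
\qquad \text{for every player } i .
```

Independence under common knowledge says instead that some common-knowledge variable $\omega$ disassociates all the types at once:

```latex
\Pr(t_1, \dots, t_n \mid \omega) \;=\; \prod_{j=1}^{n} \Pr(t_j \mid \omega) .
```

The result described above is that for $n \geq 3$ the first condition holds exactly when a variable $\omega$ satisfying the second exists; with $n = 2$ the first condition is vacuous, since each player has only one opponent.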
And the take-away message of the research: “In the process of discovering and proving the informational relationship,” Kalai continues, “we learned much more about disassociating variables, common knowledge variables, and interesting relationships between them.” To put it more pithily, he adds, “We’re laying a foundation for users of game theory who deal with and analyze strategic interactions.”
About the Writer
Peter Gwynne is a freelance writer based in Sandwich, Mass.
About the Research
Gossner, Olivier, Ehud Kalai, and Robert Weber. 2009. “Information Independence and Common Knowledge.” Econometrica 77: 1317–1328.