In game theory, a **Bayesian game** is one in which information about the characteristics of the other players (i.e. their payoffs) is incomplete. Following John C. Harsanyi's framework, a Bayesian game can be modelled by introducing Nature as a player. Nature assigns each player a random variable that takes values in that player's set of *types*, and associates a probability distribution or probability density function with those types (in the course of the game, Nature randomly **chooses** a type for each player according to the distribution over that player's type space). Harsanyi's approach allows games of incomplete information to be treated as games of imperfect information (in which the history of the game is not available to all players). The type of a player determines that player's payoff function, and the probability associated with a type is the probability that the player in question is of that type. In a Bayesian game, the incompleteness of information means that at least one player is unsure of the type (and so the payoff function) of another player.

Such games are called *Bayesian* because of the probabilistic analysis inherent in the game. Players have initial beliefs about the type of each player (where a belief is a probability distribution over the possible types for a player) and can update their beliefs according to Bayes' rule as play takes place, i.e. the belief a player holds about another player's type might change on the basis of the actions that player has taken. The players' lack of information and the modelling of beliefs mean that such games are also used to analyse imperfect information scenarios.

## Specification of games

The normal form representation of a non-Bayesian game with complete information is a specification of the strategy spaces and payoff functions of players. A strategy for a player is a complete plan of action that covers *every contingency of the game*, even if that contingency can never arise. The strategy space of a player is thus the set of all strategies available to that player. A payoff function is a function from the set of strategy profiles to the set of payoffs (normally the set of real numbers), where a strategy profile is a vector specifying a strategy for every player.

In a Bayesian game, it is necessary to specify the strategy spaces, type spaces, payoff functions and beliefs for every player. A strategy for a player is a complete plan of action that covers every contingency that might arise for every type that player might be. A strategy must specify not only the actions the player takes given his actual type, but also the actions he would take if he were of another type. Strategy spaces are defined as above. A type space for a player is simply the set of all possible *types* of that player. The beliefs of a player describe the uncertainty of that player about the types of the other players. Each belief is the probability of the other players having particular types, given the type of the player with that belief (i.e. the belief is p(types of other players | type of this player)). A payoff function is a 2-place function of strategy profiles and types: if a player has payoff function U(x,y) and type t, the payoff he receives is U(x*,t), where x* is the strategy profile played in the game (i.e. the vector of strategies played).
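The ingredients above can be sketched in code. The following is a minimal illustration, not a general library: the two-player game, its payoff function, the type names and the belief numbers are all hypothetical, chosen only to show how an expected payoff is computed by averaging over the opponent's types using the player's belief.

```python
# Minimal sketch of one player's expected-payoff computation in a
# two-player Bayesian game. All names and numbers are hypothetical.
# A strategy maps a player's type to an action; a belief is a
# conditional distribution p(opponent's type | own type).

def expected_payoff(payoff, own_type, own_action, opp_strategy, belief):
    """Expected payoff of own_action for a player of own_type,
    averaging over opponent types using the player's belief."""
    return sum(
        prob * payoff(own_action, opp_strategy[opp_type], own_type)
        for opp_type, prob in belief[own_type].items()
    )

# Hypothetical payoff: 2 for matching the opponent's action,
# plus a bonus of 1 if the player's own type is "strong".
def payoff(own_action, opp_action, own_type):
    return (2 if own_action == opp_action else 0) + (1 if own_type == "strong" else 0)

belief = {"strong": {"L": 0.5, "R": 0.5}, "weak": {"L": 0.9, "R": 0.1}}
opp_strategy = {"L": "left", "R": "right"}  # opponent's type-contingent strategy

print(expected_payoff(payoff, "strong", "left", opp_strategy, belief))  # 2.0
```

Here the "strong" player weighing action `left` gets payoff 3 against an opponent of type L (probability 0.5) and 1 against type R (probability 0.5), hence the expected payoff of 2.0.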

## A signalling example

Signalling games constitute an example of Bayesian games. In such a game, the informed party (the *agent*) knows their type, whereas the uninformed party (the *principal*) does not know the agent's type. In some such games, it is possible for the principal to deduce the agent's type from the actions the agent takes (in the form of a signal sent to the principal), in what is known as a *separating equilibrium*. A more specific example of a signalling game is a model of the job market. The players are the applicant (agent) and the employer (principal). There are two types of applicant, skilled and unskilled. The employer does not know which type the applicant is, but he does know that 90% of applicants are unskilled and 10% are skilled. The employer will offer the applicant a contract based on how productive he expects the applicant to be. Skilled workers are very productive (generating a large payoff for the employer) and unskilled workers are unproductive (generating a low payoff for the employer). The employer's payoff is thus determined by the skill of the applicant (if the applicant accepts a contract) and the wage paid.

The applicant's action space comprises two actions, take a university education or do not. It is less costly for the skilled worker to do so (because he does not pay extra tuition fees, finds classes less taxing, etc.). The employer's action space is the set of (say) natural numbers, which represents the wage of the applicant (the applicant's action space might be extended to include acceptance of a wage, in which case it would be more appropriate to talk of his strategy space). It might be possible for the employer to offer a wage that would compensate a skilled applicant sufficiently for acquiring a university education, but not an unskilled applicant, leading to a separating equilibrium where skilled applicants go to university and unskilled applicants do not, and skilled applicants (workers) command a high wage, whereas unskilled applicants (workers) receive a low wage.
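The separating condition described above can be checked numerically. The education costs and wage premium below are hypothetical numbers, not part of the original model: a separating equilibrium of this kind is possible when the wage premium for educated applicants covers the skilled applicant's cost of education but falls short of the unskilled applicant's cost.

```python
# Hypothetical numbers for the job-market signalling game. A wage premium
# supports separation when it compensates the skilled applicant for
# acquiring an education but not the unskilled applicant.

def separating_possible(cost_skilled, cost_unskilled, wage_premium):
    """True if skilled applicants gain from educating while unskilled do not."""
    return cost_skilled <= wage_premium < cost_unskilled

# Assume education costs the skilled applicant 4 and the unskilled 10.
print(separating_possible(4, 10, 6))   # True: only the skilled educate
print(separating_possible(4, 10, 12))  # False: unskilled would also educate
print(separating_possible(4, 10, 3))   # False: not even the skilled educate
```

If the premium is too high the unskilled imitate the skilled (pooling on education); if it is too low no one educates (pooling on no education); only the intermediate range separates the types.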

Crucially in the game sketched above, the employer chooses his action (the wage offered) according to his belief about how skilled the applicant is and this belief is determined, in part, by the signal sent by the applicant. The employer starts the game with an initial belief about the applicant's type (unskilled with 90% chance), but during the course of the game this belief may be updated (depending on the payoffs of the different types of applicants) to 0% unskilled if he observes a university education or 100% unskilled if he does not.
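The belief update described above is an application of Bayes' rule. The sketch below uses the 10%/90% prior from the example and assumes the separating strategy profile (only skilled applicants acquire education); the function and variable names are illustrative.

```python
# Bayes' rule update of the employer's belief about the applicant's type.
# Prior: 10% skilled. Under the separating strategy profile, only skilled
# applicants acquire education, so observing education is fully revealing.

def posterior(prior, likelihood, signal):
    """P(type | signal) from a prior over types and P(signal | type)."""
    joint = {t: prior[t] * likelihood[t][signal] for t in prior}
    total = sum(joint.values())
    return {t: joint[t] / total for t in joint}

prior = {"skilled": 0.1, "unskilled": 0.9}
# P(signal | type) under the assumed separating strategies:
likelihood = {"skilled":   {"edu": 1.0, "none": 0.0},
              "unskilled": {"edu": 0.0, "none": 1.0}}

print(posterior(prior, likelihood, "edu"))   # {'skilled': 1.0, 'unskilled': 0.0}
print(posterior(prior, likelihood, "none"))  # {'skilled': 0.0, 'unskilled': 1.0}
```

With less extreme likelihoods (e.g. if some unskilled applicants also educate), the same function yields an interior posterior rather than certainty.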

## Bayesian Nash equilibrium

In a non-Bayesian game, a strategy profile is a Nash equilibrium if every strategy in that profile is a best response to every other strategy in the profile, i.e. there is no strategy that a player could play that would yield a higher payoff, given all the strategies played by the other players. In a Bayesian game (where players are modeled as risk-neutral), rational players are seeking to maximize their expected payoff, given their beliefs about the other players (in the general case, where players may be risk averse or risk-loving, the assumption is that players are expected utility-maximizing). A Bayesian Nash equilibrium is defined as a strategy profile and *beliefs specified for each player about the types of the other players* that maximizes the expected payoff for each player given their beliefs about the other players' types and given the strategies played by the other players.
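The definition can be turned into a brute-force check for small finite games. Everything below is a hypothetical toy game constructed for illustration: two players, two types each, two actions, and a payoff in which each type simply has a preferred action.

```python
# Brute-force Bayesian Nash equilibrium check for a tiny two-player game.
# A profile assigns each type of each player an action; it is a BNE if no
# type of either player can raise its expected payoff (over the opponent's
# types, weighted by belief) through a unilateral deviation.

def is_bne(types, actions, belief, payoff, profile):
    for i in (0, 1):
        j = 1 - i
        for t in types[i]:
            def ev(a):
                # Expected payoff of action a for player i of type t.
                return sum(belief[i][t][s] * payoff(i, t, a, profile[j][s])
                           for s in types[j])
            if any(ev(a) > ev(profile[i][t]) for a in actions):
                return False
    return True

# Hypothetical game: each type's payoff depends only on its own action
# (type "A" prefers action 0, type "B" prefers action 1).
types = ({"A", "B"}, {"A", "B"})
actions = (0, 1)
belief = [{t: {"A": 0.5, "B": 0.5} for t in "AB"} for _ in range(2)]
payoff = lambda i, t, a, b: 1 if a == {"A": 0, "B": 1}[t] else 0

print(is_bne(types, actions, belief, payoff,
             ({"A": 0, "B": 1}, {"A": 0, "B": 1})))  # True
```

In this degenerate example the equilibrium is driven entirely by own types; in general the opponent's action (the argument `b`) matters, and the belief weights determine which deviations are profitable.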

This solution concept yields implausible equilibria in dynamic games when no further restrictions are placed on players' beliefs. This makes Bayesian Nash equilibrium a flawed tool for analysing dynamic games of incomplete information.

## Perfect Bayesian equilibrium

Bayesian Nash equilibrium results in some implausible equilibria in dynamic games, where players take turns sequentially rather than simultaneously. Implausible equilibria can arise because, in a dynamic game, players might reasonably revise their beliefs as the game progresses, and Bayesian Nash equilibrium provides no procedure for doing so. Implausible equilibria can also arise in the same way that implausible Nash equilibria arise in games of perfect and complete information, such as those relying on incredible threats and promises. In perfect and complete information games, such equilibria can be eliminated by applying subgame perfect Nash equilibrium. However, this solution concept is not always available in incomplete information games, because such games contain non-singleton information sets, and since a subgame must contain every node of any information set it intersects, sometimes the only subgame is the entire game, so that every Nash equilibrium is trivially subgame perfect. Even when a game does have more than one subgame, the inability of subgame perfection to cut through information sets can result in implausible equilibria not being eliminated.

To refine the equilibria generated by the Bayesian Nash solution concept or by subgame perfection, one can apply the **perfect Bayesian equilibrium** (PBE) solution concept. PBE is in the spirit of subgame perfection in that it demands that subsequent play be optimal. However, it places player beliefs on decision nodes, which enables moves in non-singleton information sets to be dealt with more satisfactorily.

So far in discussing Bayesian games, it has been assumed that information is perfect (or if imperfect, play is simultaneous). In examining dynamic games, however, it might be necessary to have the means to model imperfect information. PBE affords this means: players place beliefs on nodes occurring in their information sets, which means that the information set can be generated by nature (in the case of incomplete information) or by other players (in the case of imperfect information).

### Belief systems

The beliefs held by players in Bayesian games can be approached more rigorously in PBE. A belief system is an assignment of probabilities to every node in the game such that the sum of probabilities in any information set is 1. The beliefs of a player are exactly those probabilities of the nodes in all the information sets at which that player has the move (a player belief might be specified as a function from the union of his information sets to [0,1]). A belief system is *consistent* for a given strategy profile if and only if the probability assigned by the system to every node is computed as the probability of that node being reached given the strategy profile, i.e. by Bayes' rule.
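The consistency requirement can be illustrated for a single information set. The node names and reaching probabilities below are hypothetical; the point is only that consistent beliefs are the reaching probabilities under the strategy profile, renormalised within the information set, and that Bayes' rule fails when the set is reached with probability zero.

```python
# Consistent beliefs over the nodes of one information set: each node's
# belief is its probability of being reached under the strategy profile,
# renormalised so the beliefs in the set sum to 1 (Bayes' rule).

def consistent_beliefs(reach_prob):
    """Beliefs over an information set's nodes given reaching probabilities."""
    total = sum(reach_prob.values())
    if total == 0:
        return None  # off the equilibrium path: Bayes' rule is undefined
    return {node: p / total for node, p in reach_prob.items()}

# Hypothetical: nodes "u" and "d" of player 2's information set are reached
# with probabilities 0.2 and 0.3 under the assumed strategy profile.
print(consistent_beliefs({"u": 0.2, "d": 0.3}))  # {'u': 0.4, 'd': 0.6}
print(consistent_beliefs({"u": 0.0, "d": 0.0}))  # None (off the path)
```

The `None` branch corresponds to the 'wherever possible' clause discussed under the definition of PBE: on information sets reached with probability zero, any beliefs may be assigned.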

### Sequential rationality

The notion of sequential rationality is what determines the optimality of subsequent play in PBE. A strategy profile is sequentially rational at a particular information set for a particular *belief system* if and only if the expected payoff of the player whose information set it is (i.e. who has the move at that information set) is maximal given the strategies played by all the other players. A strategy profile is sequentially rational for a particular belief system if it satisfies the above for every information set.

### Definition

A perfect Bayesian equilibrium is a strategy profile and a belief system such that the strategies are sequentially rational given the belief system and the belief system is *consistent*, wherever possible, given the strategy profile.

It is necessary to stipulate the 'wherever possible' clause because some information sets might not be reached with a non-zero probability given the strategy profile and hence Bayes' rule cannot be employed to calculate the probability at the nodes in those sets. Such information sets are said to be *off the equilibrium path* and any beliefs can be assigned to them.

### An example

Information in the game on the left is imperfect since player 2 does not know what player 1 has done when he comes to play. If both players are rational, both know that both players are rational, and everything that is known by any player is known to be known by every player (i.e. player 1 knows that player 2 knows that player 1 is rational, and player 2 knows this, etc. *ad infinitum*: common knowledge), play in the game will be as follows according to perfect Bayesian equilibrium:

Player 2 cannot observe player 1's move. Player 1 would like to fool player 2 into thinking he has played *U* when he has actually played *D*, so that player 2 will play *D' * and player 1 will receive 3. In fact, in the second game there is a perfect Bayesian equilibrium where player 1 plays *D*, player 2 plays *U' *, and player 2 holds the belief that player 1 will definitely play *D* (i.e. player 2 places a probability of 1 on the node reached if player 1 plays *D*). In this equilibrium, every strategy is rational given the beliefs held, and every belief is consistent with the strategies played. In this case, the perfect Bayesian equilibrium is the only Nash equilibrium.

This page uses Creative Commons Licensed content from Wikipedia (view authors).