Social dilemmas are situations in which collective interests are at odds with private interests. They arise whenever people must choose between short-term selfish interests and the long-term interests of a group, organization, or society. Many of the most challenging issues, from the interpersonal to the intergroup, are at their core social dilemmas.

Definition

Social dilemmas describe situations in which the rational behaviour of an individual, defined in narrow economic terms, leads to suboptimal outcomes from the collective standpoint (Dawes, 1980; Kollock, 1998). Researchers frequently use the experimental games method to study social dilemmas in the laboratory. An experimental game is a situation in which participants choose between cooperative and non-cooperative alternatives, yielding consequences for themselves and others. These games are generally depicted with a pay-off matrix representing outcomes that participants value, such as money or lottery tickets. A social dilemma is thus a "conflict in which the most beneficial action for an individual will, if chosen by most people, have a harmful effect on everyone" (Aronson, Wilson, Akert, & Fehr, 2007), or vice versa.

Examples

Consider some everyday examples. As individuals, we are each better off when we make use of public services such as schools, hospitals, and recreational grounds without contributing to their maintenance. However, if everyone acted on this narrow self-interest, these resources would not be provided and everyone would be worse off.[1]

Types of social dilemmas

The literature on social dilemmas has historically revolved around three metaphorical stories: the Prisoner's Dilemma, the Public Goods Dilemma, and the Tragedy of the Commons (see Commons dilemma); each of these stories has been modelled as an experimental game.

The Prisoner's Dilemma Game was developed by mathematicians in the 1950s. The cover story involves two prisoners who are separately given the choice between testifying against the other (non-cooperation) or keeping silent (cooperation). The pay-offs are such that each of them is better off testifying against the other, yet if both pursue this strategy they are both worse off than if both had remained silent.
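
The dilemma structure can be made concrete with an illustrative pay-off matrix. The numbers below are hypothetical, chosen only to satisfy the standard ordering (temptation > reward > punishment > sucker's pay-off); the short Python sketch simply checks that defecting is individually better no matter what the partner does, while mutual defection is collectively worse than mutual cooperation.

```python
# Illustrative Prisoner's Dilemma pay-off matrix (values are hypothetical).
# Each entry maps (row player's move, column player's move) to
# (row player's pay-off, column player's pay-off).
C, D = "cooperate", "defect"

payoffs = {
    (C, C): (3, 3),   # mutual cooperation: both earn the "reward"
    (C, D): (0, 5),   # cooperator gets the "sucker" pay-off, defector the "temptation"
    (D, C): (5, 0),
    (D, D): (1, 1),   # mutual defection: both earn the "punishment"
}

# Defecting is individually better regardless of the partner's move...
assert payoffs[(D, C)][0] > payoffs[(C, C)][0]   # 5 > 3
assert payoffs[(D, D)][0] > payoffs[(C, D)][0]   # 1 > 0
# ...yet mutual defection leaves both worse off than mutual cooperation.
assert payoffs[(D, D)][0] < payoffs[(C, C)][0]   # 1 < 3
```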

The Public Goods Game has the same properties as the Prisoner's Dilemma Game but involves more than two individuals. A public good is a resource from which all may benefit, regardless of whether they contributed to providing it. For instance, people can enjoy city parks whether or not they contributed to their upkeep through local taxes. Public goods are non-excludable: once they are provided, nobody can be excluded from using them. As a result, there is a temptation to enjoy the good without making a contribution. Those who do so are called free-riders, and while it is rational to free-ride, if everyone does so the public good is not provided and all are worse off. Researchers mostly study two public goods games in the laboratory. Participants receive a monetary endowment and decide how much of it to invest in a private fund versus a group fund. Pay-offs are such that it is individually rational to invest in the private fund, yet all would be better off investing in the group fund because this yields a bonus. In the continuous game, the more the group invests in the group fund, the larger each member's share of the bonus. In the step-level game, members receive a share of the bonus only if total group investment exceeds a critical (step) level.
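
A minimal sketch of the two laboratory games described above, assuming a hypothetical endowment, multiplier, step level, and bonus (published experiments use a variety of parameter values). It shows why contributing is individually costly but collectively beneficial in the continuous game, and how the step-level variant pays the bonus only when total investment reaches the threshold.

```python
def continuous_public_good(contributions, endowment=10.0, multiplier=1.6):
    """Each player keeps what they did not contribute and receives an equal
    share of the group fund, which is multiplied (the 'bonus') before division."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

def step_level_public_good(contributions, endowment=10.0, step_level=30.0, bonus=15.0):
    """Players receive the bonus only if total contributions reach the step level;
    otherwise contributions are lost (one common variant of the game)."""
    provided = sum(contributions) >= step_level
    return [endowment - c + (bonus if provided else 0.0) for c in contributions]

# With a multiplier of 1.6 in a 4-person group, each contributed unit returns only
# 0.4 to the contributor (individually irrational) but 1.6 to the group as a whole.
print(continuous_public_good([10, 10, 10, 10]))  # everyone contributes: [16, 16, 16, 16]
print(continuous_public_good([0, 10, 10, 10]))   # the free-rider earns most: [22, 12, 12, 12]
```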

The Commons Dilemma Game is inspired by the metaphor of the Tragedy of the Commons. This story is about a group of herders who have open access to a common parcel of land on which their cows graze. It is in each herder's interest to put as many cows as possible onto the land, even if the common is damaged as a result: the herder receives all the benefits from the additional cows, while the damage to the common is shared by the entire group. Yet if all herders make this individually rational decision, the common's carrying capacity is exceeded, it yields less in total or may even be permanently destroyed, and all suffer. Compare this with the use of renewable resources such as water or fish: when water is used at a higher rate than reservoirs are replenished, or fish are caught faster than they can reproduce, we face a tragedy of the commons. The experimental commons game involves a common resource pool (filled with money or points) from which individuals harvest; it is individually rational to harvest as much as possible, but the resource collapses if people harvest more than the replenishment rate of the pool.
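
The harvesting logic can be sketched as a simple simulation. The pool size, carrying capacity, group size, regrowth rate, and harvest levels below are hypothetical; the point is only that the resource survives when total harvest stays below the replenishment rate and collapses when it does not.

```python
def simulate_commons(pool, capacity, harvest_per_person, n_people, growth_rate, rounds):
    """Minimal commons-dilemma sketch: each round the group harvests from a shared
    pool, then the remaining stock regrows at a fixed proportional rate, capped at
    the carrying capacity. Persistent over-harvesting drives the pool to collapse."""
    for r in range(rounds):
        total_harvest = min(pool, harvest_per_person * n_people)
        pool -= total_harvest
        pool = min(pool + pool * growth_rate, capacity)   # regrowth of what is left
        if pool <= 0:
            return f"pool collapsed after round {r + 1}"
    return f"pool sustained at {pool:.1f} units"

# Hypothetical numbers: a pool of 100 units, 5 harvesters, 10% regrowth per round.
print(simulate_commons(100, 100, 1.5, 5, 0.10, rounds=50))  # modest harvesting: sustainable
print(simulate_commons(100, 100, 4.0, 5, 0.10, rounds=50))  # greedy harvesting: collapse
```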

Theories of social dilemmas

Game Theory

See also: Game theory

Social dilemmas have attracted a great deal of interest in the social and behavioural sciences. Economists, biologists, psychologists, sociologists, and political scientists alike study when people act selfishly and when they cooperate in a social dilemma. The most influential theoretical approach is economic game theory (i.e., rational choice theory, expected utility theory). Game theory assumes that individuals are rational actors motivated to maximize their utilities, where utility is often narrowly defined in terms of economic self-interest. Game theory therefore predicts a non-cooperative outcome in a social dilemma. Although this is a useful starting premise, there are many circumstances in which people deviate from such individual rationality, demonstrating the limitations of economic game theory.

Evolutionary theories

Biological and evolutionary approaches provide useful complementary insights into decision-making in social dilemmas. According to selfish gene theory, individuals may pursue a seemingly irrational strategy of cooperation if it benefits the survival of their genes. The concept of inclusive fitness holds that cooperating with family members can pay because of shared genetic interests: it may be profitable for parents to help their offspring because doing so facilitates the survival of their genes. Reciprocity theories provide a different account of the evolution of cooperation. In repeated social dilemma games between the same individuals, cooperation may emerge because people can punish a partner for failing to cooperate, which encourages reciprocal cooperation. Reciprocity can explain why people cooperate in dyads, but what about larger groups? Evolutionary theories of indirect reciprocity and costly signalling may explain large-scale cooperation: when people can selectively choose partners to play games with, it pays to develop a cooperative reputation. By cooperating, people signal that they are kind and generous, which may make them attractive group members.

Psychological theories

Psychological models offer additional insights into social dilemmas by questioning the game theory assumption that individuals pursue only their narrow self-interest. Social interdependence theory suggests that people transform a given pay-off matrix into an effective matrix that is more consistent with their preferences; a prisoner's dilemma played with close kin, for example, transforms the pay-off matrix into one in which it is rational to cooperate. Attribution models offer further support for these transformations: whether individuals approach a social dilemma selfishly or cooperatively may depend on whether they believe people are naturally greedy or cooperative. Similarly, goal-expectation theory assumes that people will cooperate under two conditions: they must (1) have a cooperative goal, and (2) expect others to cooperate. Another psychological model, the appropriateness model, questions the game theory assumption that individuals rationally calculate their pay-offs; instead, many people base their decisions on what the people around them do and use simple heuristics, such as an equality rule, to decide whether or not to cooperate.

Factors promoting cooperation in social dilemmas

Studying the conditions under which people cooperate might lead to recommendations to solve social dilemmas in society. The literature distinguishes between three broad classes of solutions—motivational, strategic, and structural—which vary in whether they see actors as motivated purely by self-interest and in whether they change the rules of the social dilemma game.

Motivational solutions

Motivational solutions assume that people have other-regarding preferences. There is a considerable literature on social values showing that people have stable preferences for how much they value outcomes for themselves versus others. Research has concentrated on three social motives: (1) individualism, maximizing one's own outcomes regardless of others; (2) competition, maximizing one's own outcomes relative to others; and (3) cooperation, maximizing joint outcomes. The first two orientations are referred to as proself orientations and the third as a prosocial orientation. There is much support for the idea that prosocial and proself individuals behave differently when confronted with a social dilemma, in the laboratory as well as in the field.[citation needed] People with prosocial orientations weigh the moral implications of their decisions more heavily and see cooperation as the most preferable choice in a social dilemma. Under conditions of scarcity, such as a water shortage, prosocials harvest less from a common resource. Similarly, prosocials are more concerned about the environmental consequences of, for example, taking the car rather than public transport.[2]

Research on the development of social value orientations suggests an influence of factors such as family history (prosocials have more sisters), age (older people are more prosocial), culture (there are more individualists in Western cultures), gender (more women are prosocial), and even field of study (economics students are less prosocial).[citation needed] However, until we know more about the psychological mechanisms underlying these social value orientations, we lack a good basis for interventions.

Many people also have group-regarding preferences (social identity). People's group identification is a powerful predictor of their social dilemma behaviour: when people identify strongly with a group they contribute more to public goods and harvest less from common resources.[citation needed] Group identification has even more striking effects when there is intergroup competition. When social dilemmas involve two or more groups of players there is much less cooperation than when individuals play, yet intergroup competition also facilitates intragroup cooperation, especially among men. When a resource is depleting rapidly, people are more willing to compensate for selfish decisions by ingroup members than by outgroup members.[citation needed] Furthermore, the free-rider problem is much less pronounced when there is intergroup competition.[3] However, intergroup competition can be a double-edged sword: encouraging competition between groups might serve the temporary needs of ingroup members, but the social costs of intergroup conflict can be severe for both groups.[citation needed] It is not entirely clear why people cooperate more as part of a group. One possibility is that people become genuinely more altruistic; other possibilities are that people are more concerned about their ingroup reputation, or are more likely to expect returns from ingroup than from outgroup members. This needs further investigation.

Another factor that might affect the weight individuals assign to group outcomes is the possibility of communication. A robust finding in the social dilemma literature is that cooperation increases when people are given a chance to talk to each other.[citation needed] It has been quite a challenge to explain this effect. One motivational reason is that communication reinforces a sense of group identity.[4]

Another reason is that communication offers an opportunity for moral suasion, exposing people to arguments for doing what is morally right.[citation needed]

But there may be strategic considerations as well. First, communication gives group members a chance to make promises and explicit commitments about what they will do, although it is not clear whether many people stick to their promises to cooperate. Second, communication lets people gather information about what others do. In social dilemmas, however, this information can cut both ways: if I know that most people cooperate, I may be tempted to act selfishly.

Strategic solutions

A second category of solutions is primarily strategic. In repeated interactions, cooperation may emerge when people adopt a Tit for tat (TFT) strategy. TFT starts with a cooperative move, and each subsequent move mimics the partner's previous decision: if a partner does not cooperate, you copy this move until your partner starts cooperating again. Computer tournaments in which different strategies were pitted against each other showed TFT to be the most successful strategy in social dilemmas. TFT is a common strategy in real-world social dilemmas because it is nice but firm; think, for instance, of marriage contracts, rental agreements, and international trade policies that all use TFT tactics.

However, TFT is an unforgiving strategy, and in noisy real-world dilemmas a more forgiving strategy may do better. Such a strategy is known as Generous tit for tat (GTFT).[5] GTFT always reciprocates cooperation with cooperation and usually replies to defection with defection, but with some probability it will forgive a defection by the other player and cooperate. In a world of errors in action and perception, such a strategy can be a Nash equilibrium and evolutionarily stable. The more beneficial cooperation is, the more forgiving GTFT can be while still resisting invasion by defectors.
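
The two strategies just described are easy to state as code. The sketch below uses an illustrative forgiveness probability and a small chance of implementation error ("noise"), neither of which is a value from the literature; it shows how an error between strict TFT players echoes back and forth, whereas GTFT's occasional forgiveness can restore mutual cooperation.

```python
import random

def tit_for_tat(my_history, partner_history):
    """Cooperate on the first move, then copy the partner's previous move."""
    return "C" if not partner_history else partner_history[-1]

def generous_tit_for_tat(my_history, partner_history, forgiveness=0.3):
    """Like TFT, but forgives a defection (cooperates anyway) with some probability.
    The forgiveness probability here is illustrative only."""
    if not partner_history or partner_history[-1] == "C":
        return "C"
    return "C" if random.random() < forgiveness else "D"

def play(strategy_a, strategy_b, rounds=15, noise=0.05):
    """Repeated game in which an intended cooperation is occasionally flipped to a
    defection ('noise'), the setting in which GTFT tends to outperform strict TFT."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        # implementation errors: a cooperative move sometimes comes out as defection
        if move_a == "C" and random.random() < noise:
            move_a = "D"
        if move_b == "C" and random.random() < noise:
            move_b = "D"
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

print(play(tit_for_tat, tit_for_tat))                        # an error, if one occurs, echoes back and forth
print(play(generous_tit_for_tat, generous_tit_for_tat))      # forgiveness lets mutual cooperation recover
```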

Even when partners might not meet again, it can be strategically wise to cooperate. When people can selectively choose whom to interact with, it may pay to be seen as a cooperator. Research shows that cooperators create better opportunities for themselves than non-cooperators: they are selectively preferred as collaborative partners, romantic partners, and group leaders. This only occurs, however, when people's social dilemma choices are monitored by others. Public acts of altruism and cooperation, such as charitable giving, philanthropy, and bystander intervention, are probably manifestations of reputation-based cooperation.[citation needed]

Structural solutions

Structural solutions change the rules of the game, either by modifying the social dilemma or by removing the dilemma altogether. Not surprisingly, many studies have shown that cooperation rates rise as the relative pay-offs for cooperation increase.[citation needed] Field research on conservation behaviour has shown that selective incentives in the form of monetary rewards are effective in decreasing domestic water and electricity use.[citation needed] Furthermore, experimental studies show that cooperation is more likely if individuals can punish defectors.[citation needed]

Yet implementing reward and punishment systems can be problematic for several reasons. First, there are significant costs associated with creating and administering sanction systems: providing selective rewards and punishments requires institutions that monitor the activities of both cooperators and non-cooperators, which can be expensive to maintain. Second, these systems are themselves public goods, because one can enjoy the benefits of a sanctioning system without contributing to its existence. The police, army, and judicial system will fail to operate unless people are willing to pay taxes to support them, which raises the question of whether people are willing to contribute to such institutions. Experimental research suggests that low-trust individuals in particular are willing to invest money in punishment systems,[6] and a considerable proportion of people are quite willing to punish non-cooperators even when they do not profit personally. Some researchers even suggest that altruistic punishment is an evolved mechanism for human cooperation. A third limitation is that punishment and reward systems might undermine people's voluntary cooperative intentions. Some people get a "warm glow" from cooperating, and the provision of selective incentives might crowd out this intrinsic motivation. Similarly, the presence of a negative sanctioning system might undermine voluntary cooperation; research has found that punishment systems decrease the trust that people have in others. Thus, sanctioning is a delicate strategy.

Boundary structural solutions modify the structure of the social dilemma itself, and such strategies are often very effective. An often studied solution is the establishment of a leader or authority to manage a social dilemma.[citation needed] Experimental studies on commons dilemmas show that groups that overharvest are more willing to appoint a leader to look after the common resource. There is a preference for a democratically elected, prototypical leader with limited power, especially when people's group ties are strong;[7] when ties are weak, groups prefer a stronger leader with a coercive power base. The question remains whether authorities can be trusted to govern social dilemmas, and field research shows that legitimacy and fair procedures are extremely important for citizens' willingness to accept authorities.

Another structural solution is reducing group size. Cooperation generally declines as group size increases. In larger groups people often feel less responsible for the common good and believe, rightly or wrongly, that their contribution does not matter. Reducing the scale, for example by dividing a large-scale dilemma into smaller, more manageable parts, might therefore be an effective way to raise cooperation.

Another proposed boundary solution is to remove the social element from the dilemma by means of privatization. People are often better at managing a private resource than a resource shared with many others. However, it is not easy to privatize mobile resources such as fish, water, and clean air. Privatization also raises concerns about social justice, as not everyone may be able to obtain an equal share. Finally, privatization might erode people's intrinsic motivation to cooperate.[citation needed]

Conclusions

As social dilemmas in society become more pressing, there is an increasing need for policies that address them. It is encouraging that much social dilemma research is being applied to areas such as organizational welfare, public health, and local and global environmental change. The emphasis is shifting from pure laboratory research towards research that tests combinations of motivational, strategic, and structural solutions. It is also noteworthy that social dilemma research is an interdisciplinary field, with researchers from various behavioural sciences developing unifying theoretical frameworks (such as evolutionary theory) to study social dilemmas. For instance, a burgeoning neuroeconomics literature studies the brain correlates of decision-making in social dilemmas with neuroscience methods. Finally, social dilemma researchers are increasingly using dynamic experimental designs to see, for instance, what happens when people can voluntarily or involuntarily enter or exit a social dilemma, or play different social dilemmas at the same time within different groups.

References

  1. Van Vugt, M. (2009). Averting the Tragedy of the Commons: Using Social Psychological Science to Protect the Environment. Current Directions in Psychological Science 18 (3): 169–173.
  2. Van Vugt, M. (1995). Car versus public transportation? The role of social value orientations in a real-life social dilemma. Journal of Applied Social Psychology 25 (3): 358–378.
  3. De Cremer, D. (1999). Social identification effects in social dilemmas: A transformation of motives. European Journal of Social Psychology 29 (7): 871–893.
  4. Orbell, John M. (1988). Explaining discussion-induced cooperation. Journal of Personality and Social Psychology 54 (5): 811–819.
  5. Nowak, M. A. (1992). Tit for tat in heterogeneous populations. Nature 355 (6357): 250–253.
  6. Yamagishi, T. (1986). The provision of a sanctioning system as a public good. Journal of Personality and Social Psychology 51 (1): 110–116.
  7. Van Vugt, M. (1999). Leadership in social dilemmas: The effects of group identification on collective actions to provide public goods. Journal of Personality and Social Psychology 76 (4): 587–599.

Further reading

  • Axelrod, R. A. (1984). The evolution of cooperation, New York: Basic Books.
  • Batson, D. (2008). Altruism: Myth or Reality?. In-Mind Magazine 6.
  • Dawes, R. M. (1980). Social dilemmas. Annual Review of Psychology 31: 169–193.
  • Dawes, R. M. (2000). Social Dilemmas. International Journal of Psychology 35 (2): 111–116.
  • Kollock, P. (1998). Social dilemmas: Anatomy of cooperation. Annual Review of Sociology 24: 183–214.
  • Komorita, S. (1994). Social Dilemmas, Boulder, CO: Westview Press.
  • Messick, D. M. (1983). Solving social dilemmas: A review. Review of personality and social psychology, 11–44. Beverly Hills, CA: Sage.
  • Nowak, M. A. (1992). Tit for tat in heterogeneous populations. Nature 355 (6357): 250–253.
  • Palfrey, Thomas R. (1988). Private Incentives in Social Dilemmas: The Effects of Incomplete Information and Altruism. Journal of Public Economics 35 (3): 309–332.
  • Ridley, M. (1997). Origins of virtue, London: Penguin Classics.
  • Schneider, S. K. (1999). Three social dilemmas of workforce diversity in organizations: A social identity perspective. Human Relations 52 (11): 1445–1468.
  • Van Lange, P. A. M. (1997). Development of prosocial, individualistic, and competitive orientations: Theory and preliminary evidence. Journal of Personality and Social Psychology 73 (4): 733–746.
  • Van Vugt, M. (1999). Leadership in social dilemmas: The effects of group identification on collective actions to provide public goods. Journal of Personality and Social Psychology 76 (4): 587–599.
  • Weber, M. (2004). A conceptual review of social dilemmas: Applying a logic of appropriateness. Personality and Social Psychology Review 8: 281–307.
  • Yamagishi, T. (1986). The structural goal/expectation theory of cooperation in social dilemmas. Advances in group processes, 51–87. Greenwich, CT: JAI Press.


This page uses Creative Commons Licensed content from Wikipedia (view authors).