The evolution of cooperation can refer to:
- the study of how cooperation can emerge and persist (also known as cooperation theory) as elucidated by application of game theory,
- a 1981 paper by political scientist Robert Axelrod and evolutionary biologist W. D. Hamilton (Axelrod & Hamilton 1981) in the scientific literature, or
- a 1984 book by Axelrod (Axelrod 1984)[1] that expanded on the paper and popularized the study.
This article is an introduction to how game theory and computer modeling are illuminating certain aspects of moral and political philosophy, particularly the role of individuals in groups, the "biology of selfishness and altruism",[2] and how cooperation can be evolutionarily advantageous.
Operations research
The idea that human behavior can be usefully analyzed mathematically gained great credibility following the application of operations research in World War II to improve military operations. One famous example involved how the Royal Air Force hunted submarines in the Bay of Biscay.[3] It had seemed to make sense to patrol the areas where submarines were most frequently seen. Then it was pointed out that "seeing the most submarines" depended not only on the number of submarines present, but also on the number of eyes looking; i.e., patrol density. Making an allowance for patrol density showed that patrols were more efficient – that is, found more submarines per patrol – in other areas. Making appropriate adjustments increased the overall effectiveness.
Game theory
Accounts of the success of operations research during the war, publication in 1944 of John von Neumann and Oskar Morgenstern's Theory of Games and Economic Behavior (Von Neumann & Morgenstern 1944) on the use of game theory for developing and analyzing optimal strategies for military and other uses, and publication of John William's The Compleat Strategyst, a popular exposition of game theory,[4] led to a greater appreciation of mathematical analysis of human behavior.[5]
But game theory ran into a problem: it could not find a best strategy for a simple game called the "Prisoner's Dilemma" (PD), in which two players have the option to cooperate for mutual gain, but each also risks being suckered.
Prisoner's dilemma
The prisoner's dilemma game[6] (invented around 1950 by Merrill Flood and Melvin Dresher[7]) takes its name from the following scenario: you and a criminal associate have been busted. Fortunately for you, most of the evidence was shredded, so you are facing only a year in prison. But the prosecutor wants to nail someone, so he offers you a deal: if you squeal on your associate – which will result in his getting a five-year stretch – the prosecutor will see that six months is taken off your sentence. Which sounds good, until you learn your associate is being offered the same deal – which would get you five years.
So what do you do? The best that you and your associate can do together is to not squeal: that is, to cooperate (with each other, not the prosecutor!) in a mutual bond of silence, and do your year. But wait: if your associate cooperates (that sucker!), can you do better by squealing ("defecting") to get that six-month reduction? It's tempting, but then he's also tempted. And if you both squeal, oh no, it's four and a half years each. So perhaps you should cooperate – but wait, that's being a sucker yourself, as your associate will undoubtedly defect, and you won't even get the six months off. So what is the best strategy to minimize your incarceration (aside from going straight in the first place)?
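The scenario can be put into payoff form and checked mechanically. A minimal sketch in Python (the numbers follow the story above: one year if both stay silent, six months for a lone squealer, five years for the one squealed on, four and a half each if both squeal):

```python
# Payoffs from the story, as years in prison (lower is better).
# Moves: "C" = stay silent (cooperate with your partner), "D" = squeal (defect).
years = {
    ("C", "C"): (1.0, 1.0),   # both silent: one year each
    ("C", "D"): (5.0, 0.5),   # you silent, partner squeals: you get five years
    ("D", "C"): (0.5, 5.0),   # you squeal on a silent partner: six months
    ("D", "D"): (4.5, 4.5),   # both squeal: four and a half years each
}

def best_response(partner_move):
    """Return the move that minimizes your own sentence, given the partner's move."""
    return min("CD", key=lambda my_move: years[(my_move, partner_move)][0])

# Whatever the partner does, squealing is individually better...
print(best_response("C"), best_response("D"))   # D D
# ...yet mutual defection (4.5 years each) is worse than mutual silence (1 year each).
```

This is the dilemma in miniature: defection is the dominant move for each player, but mutual defection leaves both worse off than mutual cooperation.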
To cooperate, or not cooperate? This simple question (and the implicit question of whether to trust, or not), expressed in an extremely simple game, is a crucial issue across a broad range of life. Why shouldn't a shark eat the little fish that has just cleaned it of parasites: in any given exchange who would know? Fig wasps collectively limit the eggs they lay in fig trees (otherwise, the trees would suffer). But why shouldn't any one fig wasp cheat and leave a few more eggs than her rivals? At the level of human society, why shouldn't each of the villagers that share a common but finite resource try to exploit it more than the others?[8] At the core of these and myriad other examples is a conflict formally equivalent to the Prisoner's Dilemma. Yet sharks, fig wasps, and villagers all cooperate. It has been a vexatious problem in evolutionary studies to explain how such cooperation should evolve, let alone persist, in a world of self-maximizing egoists.
Darwinian context
Charles Darwin's theory of how evolution works ("By Means of Natural Selection"[9]) is explicitly competitive ("survival of the fittest"), Malthusian ("struggle for existence"), even gladiatorial ("nature, red in tooth and claw"). Species are pitted against species for shared resources, similar species with similar needs and niches even more so, and individuals within species most of all.[10] All this comes down to one factor: out-competing all rivals and predators in producing progeny.
Darwin's explanation of how preferential survival of the slightest benefits can lead to advanced forms is the most important explanatory principle in biology, and extremely powerful in many other fields. Such success has reinforced notions that life is in all respects a war of each against all, where every individual has to look out for himself, that your gain is my loss.
In such a struggle for existence altruism (voluntarily yielding a benefit to a non-relative) and even cooperation (working with another for a mutual benefit) seem so antithetical to self-interest as to be the very kind of behavior that should be selected against. Yet cooperation and seemingly even altruism have evolved and persist, and naturalists have been hard pressed to explain why.
Social Darwinism
The popularity of the evolution of cooperation – the reason it is not an obscure technical issue of interest to only a small number of specialists – is in part because it mirrors a larger issue where the realms of political philosophy, ethics, and biology intersect: the ancient issue of individual interests versus group interests. On one hand, the so-called "Social Darwinians" (roughly, those who would use the "survival of the fittest" of Darwinian evolution to justify the cutthroat competitiveness of laissez-faire capitalism[11]) declare that the world is an inherently competitive "dog eat dog" jungle, where every individual has to look out for himself. The philosopher Ayn Rand damned "altruism" and declared selfishness a virtue.[12] The Social Darwinists' view is derived from Charles Darwin's interpretation of evolution by natural selection, which is explicitly competitive ("survival of the fittest"), Malthusian ("struggle for existence"), even gladiatorial ("red in tooth and claw"), and permeated by the Victorian laissez-faire ethos of Darwin and his disciples (such as T. H. Huxley and Herbert Spencer). What they read into the theory was then read out by Social Darwinians as scientific justification for their social and economic views (such as poverty being a natural condition and social reform an unnatural meddling).[13]
Such views of evolution, competition, and the survival of the fittest are explicit in the ethos of modern capitalism, as epitomized by industrialist Andrew Carnegie in The Gospel of Wealth:
[W]hile the law [of competition] may be sometimes hard for the individual, it is best for the race, because it ensures the survival of the fittest in every department. We accept and welcome, therefore, as conditions to which we must accommodate ourselves, great inequality of environment; the concentration of business, industrial and commercial, in the hands of the few; and the law of competition between these, as being not only beneficial, but essential to the future progress of the race. (Carnegie 1900)
While the validity of extrapolating moral and political views from science is questionable, the significance of such views in modern society is undoubtable.
The social contract and morality
On the other hand, other philosophers have long observed that cooperation in the form of a "social contract" is necessary for human society, but saw no way of attaining that short of a coercive authority.
As Thomas Hobbes wrote in Leviathan:
[T]here must be some coercive power to compel men equally to the performance of their covenants by the terror of some punishment greater than the benefit they expect by the breach of their covenant.... (Hobbes 1651, p. 120)
[C]ovenants without the sword are but words.... (Hobbes 1651, p. 139)
And Jean Jacques Rousseau in The Social Contract:
[The social contract] can arise only where several persons come together: but, as the force and liberty of each man are the chief instruments of his self-preservation, how can he pledge them without harming his own interests, and neglecting the care he owes himself? (Rousseau 1762, p. 13)
In order then that the social compact may not be an empty formula, it tacitly includes the undertaking, which alone can give force to the rest, that whoever refuses to obey the general will shall be compelled to do so by the whole body. This means nothing less than that he will be forced to be free.... (Rousseau 1762, p. 18)
Even Herman Melville, in Moby-Dick, has the cannibal harpooner Queequeg explain why he saved the life of someone who had been jeering him:
"It's a mutual, joint-stock world, in all meridians. We cannibals must help these Christians." (Melville 1851, p. 96)
The original role of government is to provide the coercive power to enforce the social contract (and in commercial societies, contracts and covenants generally). Where government does not exist or cannot reach, it is often deemed the role of religion to promote prosocial and moral behavior, but this tends to depend on threats of hell-fire (what Hobbes called "the terror of some power"); such inducements seem more mystical than rational, and philosophers have been hard-pressed to explain why self-interest should yield to morality, why there should be any duty to be "good".[14]
Yet cooperation, and even altruism and morality, are prevalent, even in the absence of coercion, even though it seems that a properly self-regarding individual should reject all such social strictures and limitations. As early as 1890 the Russian naturalist Petr Kropotkin observed that the species that survived were those in which individuals cooperated, and that "mutual aid" (cooperation) was found at all levels of existence.[15] By the 1960s biologists and zoologists were noting many instances in the real "jungle" where real animals – presumably unfettered by conscience and not corrupted by altruistic liberals – and even microbes (see microbial cooperation) were cooperating.[16]
Darwin's theory of natural selection is a profoundly powerful explanation of how evolution works; its undoubted success strongly suggests an inherently antagonistic relationship between unrelated individuals. Yet cooperation is prevalent, seems beneficial, and even seems to be essential to human society. Explaining this seeming contradiction, and accommodating cooperation, and even altruism, within Darwinian theory is a central issue in the theory of cooperation.
Modern developments
Darwin's explanation of how evolution works is quite simple, but the implications of how it might explain complex phenomena are not at all obvious; it has taken over a century to elaborate (see modern synthesis).[17] Explaining how altruism – which by definition reduces personal fitness – can arise by natural selection is a particular problem, and the central theoretical problem of sociobiology.[18]
A possible explanation of altruism is provided by the theory of group selection (first suggested by Darwin himself while grappling with the issue of social insects[19]) which argues that natural selection can act on groups: groups that are more successful – for any reason, including learned behaviors – will benefit the individuals of the group, even if they are not related. It has had a powerful appeal, but has not been fully persuasive, in part because of difficulties regarding cheaters that participate in the group without contributing.[20]
Another explanation is provided by the genetic kinship theory of William D. Hamilton:[21] if a gene causes an individual to help other individuals that carry copies of that gene, then the gene has a net benefit even with the sacrifice of a few individuals. The classic example is the social insects, where the workers – which are sterile, and therefore incapable of passing on their genes – benefit the queen, who is essentially passing on copies of "their" genes. This is further elaborated in the "selfish gene" theory of Richard Dawkins, that the unit of evolution is not the individual organism, but the gene.[22] (As stated by Wilson: "the organism is only DNA's way of making more DNA."[23]) However, kinship selection works only where the individuals involved are closely related; it fails to explain the presence of altruism and cooperation between unrelated individuals, particularly across species.
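Hamilton's condition is not spelled out above, but it is conventionally summarized as Hamilton's rule: a gene for helping can spread when

```latex
rB > C
```

where $r$ is the genetic relatedness between actor and recipient, $B$ is the fitness benefit to the recipient, and $C$ is the fitness cost to the actor. Since $r = 1/2$ for full siblings, a sacrifice is favored only when it yields more than twice its cost in benefit to a sibling – which makes vivid why the mechanism fades quickly as relatedness drops toward zero.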
In a 1971 paper[24] Robert Trivers demonstrated how reciprocal altruism can evolve between unrelated individuals, even between individuals of entirely different species. And the relationship of the individuals involved is exactly analogous to the situation in a certain form of the Prisoner's Dilemma.[25] The key is that in the iterated Prisoner's Dilemma, or IPD, both parties can benefit from the exchange of many seemingly altruistic acts. As Trivers says, it "take[s] the altruism out of altruism."[26] The Randian premise that self-interest is paramount is largely unchallenged, but turned on its head by recognition of a broader, more profound view of what constitutes self-interest.
It does not matter why the individuals cooperate. The individuals may be prompted to the exchange of "altruistic" acts by entirely different genes, or no genes in particular, but both individuals (and their genomes) can benefit simply on the basis of a shared exchange. In particular, "the benefits of human altruism are to be seen as coming directly from reciprocity – not indirectly through non-altruistic group benefits".[27]
Trivers' theory is very powerful. Not only can it replace group selection, it also predicts various observed behavior, including moralistic aggression,[28] gratitude and sympathy, guilt and reparative altruism,[29] and development of abilities to detect and discriminate against subtle cheaters.
The benefits of such reciprocal altruism were dramatically demonstrated by a pair of tournaments held by Robert Axelrod around 1980.
Axelrod's tournaments
Axelrod initially solicited strategies from other game theorists to compete in the first tournament. Each strategy was paired with each other strategy for 200 iterations of a Prisoner's Dilemma game, and scored on the total points accumulated through the tournament. The winner was a very simple strategy submitted by Anatol Rapoport called "TIT FOR TAT" (TFT), which cooperates on the first move and subsequently echoes (reciprocates) what the other player did on the previous move. The results of the first tournament were analyzed and published, and a second tournament was held to see if anyone could find a better strategy. TIT FOR TAT won again. Axelrod analyzed the results and made some interesting discoveries about the nature of cooperation, which he describes in his book.[30]
In both actual tournaments and various replays the best performing strategies were nice:[31] that is, they were never the first to defect. Many of the competitors went to great lengths to gain an advantage over the "nice" (and usually simpler) strategies, but to no avail: tricky strategies fighting for a few points generally could not do as well as nice strategies working together. TFT (and other "nice" strategies generally) "won, not by doing better than the other player, but by eliciting cooperation [and] by promoting the mutual interest rather than by exploiting the other's weakness."[32]
Being "nice" can be beneficial, but it can also lead to being suckered. To obtain the benefit – or avoid exploitation – it is necessary to be provocable to both retaliation and forgiveness. When the other player defects, a nice strategy must immediately be provoked into retaliatory defection.[33] The same goes for forgiveness: return to cooperation as soon as the other player does. Overdoing the punishment risks escalation, and can lead to an "unending echo of alternating defections" that depresses the scores of both players.[34]
Most of the games that game theory had heretofore investigated were "zero-sum" – that is, the total rewards are fixed, and a player does well only at the expense of other players. But real life is not zero-sum: our best prospects are usually in cooperative efforts. In fact, TFT cannot score higher than its partner; at best it can only do "as well as". Yet it won the tournaments by consistently scoring a strong second place with a variety of partners.[35] Axelrod summarizes this as don't be envious;[36] in other words, don't strive for a payoff greater than the other player's.[37]
In any IPD game there is a certain maximum score each player can get by always cooperating. But some strategies try to find ways of getting a little more with an occasional defection (exploitation). This can work against some strategies that are less provocable or more forgiving than TIT FOR TAT, but generally they do poorly. "A common problem with these rules is that they used complex methods of making inferences about the other player [strategy] – and these inferences were wrong."[38] Against TFT (and "nice" strategies generally) one can do no better than to simply cooperate.[39] Axelrod calls this clarity. Or: don't be too clever.[40]
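The mechanics just described can be sketched in a few lines of code (a toy model, not Axelrod's actual tournament program; the conventional payoff values T=5, R=3, P=1, S=0 are assumed):

```python
# A minimal iterated Prisoner's Dilemma sketch with three strategies.
# Conventional payoffs: T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, other_history):
    # Nice (cooperates first), provocable, and forgiving: echo the last move seen.
    return other_history[-1] if other_history else "C"

def all_d(own_history, other_history):
    return "D"          # always defect

def all_c(own_history, other_history):
    return "C"          # always cooperate

def play(strat_a, strat_b, rounds=200):
    """Play two strategies against each other; return their total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa; score_b += pb
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))  # (600, 600): sustained mutual cooperation
print(play(tit_for_tat, all_d))        # (199, 204): suckered once, then retaliates
print(play(tit_for_tat, all_c))        # (600, 600): TFT never exploits
```

Note that TIT FOR TAT never out-scores its partner in any single pairing; its tournament success came from eliciting cooperation and accumulating good scores across many pairings.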
The success of any strategy depends on the nature of the particular strategies it encounters, which depends on the composition of the overall population. To better model the effects of reproductive success Axelrod also did an "ecological" tournament, where the prevalence of each type of strategy in each round was determined by that strategy's success in the previous round. The competition in each round becomes stronger as weaker performers are reduced and eliminated. The results were amazing: a handful of strategies – all "nice" – came to dominate the field.[41] In a sea of non-nice strategies the "nice" strategies – provided they were also provokable – did well enough with each other to offset the occasional exploitation. As cooperation became general the non-provocable strategies were exploited and eventually eliminated, whereupon the exploitive (non-cooperating) strategies were out-performed by the cooperative strategies.
In summary, success in an evolutionary "game" correlated with the following characteristics:
- Be nice: cooperate, never be the first to defect.
- Be provocable: return defection for defection, cooperation for cooperation.
- Don't be envious: be fair with your partner.
- Don't be too clever: or, don't try to be tricky.
Foundation of reciprocal cooperation
The lessons described above apply in environments that support cooperation, but whether cooperation is supported at all depends crucially on the probability (called ω [omega]) that the players will meet again,[42] also called the discount parameter or, poetically, the shadow of the future. When ω is low – that is, the players have a negligible chance of meeting again – each interaction is effectively a single-shot Prisoner's Dilemma game, and one might as well defect in all cases (a strategy called "ALL D"), because even if one cooperates there is no way to keep the other player from exploiting that. But in the iterated PD the value of repeated cooperative interactions can become greater than the benefit/risk of a single exploitation (which is all that a strategy like TFT will tolerate).
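Axelrod derives a threshold for ω: no strategy can invade a population of TIT FOR TAT players when ω is at least max((T−R)/(R−S), (T−R)/(T−P)), where T, R, P, S are the temptation, reward, punishment, and sucker payoffs. A quick check with the conventional illustrative values:

```python
# Axelrod's collective-stability condition for TIT FOR TAT: with
# temptation T, reward R, punishment P, and sucker payoff S, TFT
# cannot be invaded when the probability of meeting again satisfies
#   omega >= max((T - R) / (R - S), (T - R) / (T - P)).
def omega_threshold(T, R, P, S):
    return max((T - R) / (R - S), (T - R) / (T - P))

# With the conventional payoffs T=5, R=3, P=1, S=0:
print(omega_threshold(5, 3, 1, 0))   # 0.666...: once a rematch is at least
                                     # 2/3 likely, defecting against TFT no longer pays
```

Below that threshold the shadow of the future is too faint, and ALL D prevails; above it, reciprocal cooperation can sustain itself.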
Curiously, rationality and deliberate choice are not necessary, nor trust nor even consciousness,[43] as long as there is a pattern that benefits both players (e.g., increases fitness), and some probability of future interaction. Often the initial mutual cooperation is not even intentional, but having "discovered" a beneficial pattern both parties respond to it by continuing the conditions that maintain it.
This implies two requirements for the players, aside from whatever strategy they may adopt. First, they must be able to recognize other players, to avoid exploitation by cheaters. Second, they must be able to track their previous history with any given player, in order to be responsive to that player's strategy.[44]
Even when the discount parameter ω is high enough to permit reciprocal cooperation there is still a question of whether and how cooperation might start. One of Axelrod's findings is that when the existing population never offers cooperation nor reciprocates it – the case of ALL D – then no nice strategy can get established by isolated individuals; cooperation is strictly a sucker bet. (The "futility of isolated revolt".[45]) But another finding of great significance is that clusters of nice strategies can get established. Even a small group of individuals with nice strategies and infrequent interactions can do well enough on those interactions to make up for the low level of exploitation from non-nice strategies.[46]
Subsequent work
In 1984 Axelrod estimated that there were "hundreds of articles on the Prisoner's Dilemma cited in Psychological Abstracts",[47] and estimated that citations to The Evolution of Cooperation alone were "growing at the rate of over 300 per year".[48] To fully review this literature is infeasible. What follows are therefore only a few selected highlights.
Axelrod has a subsequent book, The Complexity of Cooperation,[49] which he considers a sequel to The Evolution of Cooperation. Other work on the evolution of cooperation has expanded to cover prosocial behavior generally,[50] and in religion,[51] other mechanisms for generating cooperation,[52] the IPD under different conditions and assumptions,[53] and the use of other games such as the Public Goods and Ultimatum games to explore deep-seated notions of fairness and fair play.[54] It has also been used to challenge the rational and self-regarding "economic man" model of economics,[55] and as a basis for replacing Darwinian sexual selection theory with a theory of social selection.[56]
Nice strategies are better able to invade if they have social structures or other means of increasing their interactions. Axelrod discusses this in chapter 8; in a later paper he, Rick Riolo, and Michael Cohen[57] use computer simulations to show cooperation rising among agents who have a negligible chance of future encounters but can recognize similarity of an arbitrary characteristic (such as a green beard).
When an IPD tournament introduces noise (errors or misunderstandings) TFT strategies can get trapped into a long string of retaliatory defections, thereby depressing their score. TFT also tolerates "ALL C" (always cooperate) strategies, which then give an opening to exploiters.[58] In 1992 Martin Nowak and Karl Sigmund demonstrated a strategy called Pavlov (or "win–stay, lose–shift") that does better in these circumstances.[59] Pavlov looks at its own prior move as well as the other player's move. If the payoff was R or P (see "Prisoner's Dilemma", above) it cooperates; if S or T it defects.
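The rule just described can be written directly (a sketch; "C" and "D" denote cooperate and defect, and histories are lists of prior moves):

```python
# "Pavlov" (win-stay, lose-shift) as stated above: look at your own last
# move as well as the other player's; cooperate after a payoff of R or P
# (i.e. whenever both players made the same move), defect after S or T.
def pavlov(own_history, other_history):
    if not own_history:
        return "C"                   # start nicely, like TIT FOR TAT
    return "C" if own_history[-1] == other_history[-1] else "D"

print(pavlov(["C"], ["C"]))   # C  (payoff was R: stay with cooperation)
print(pavlov(["D"], ["D"]))   # C  (payoff was P: shift back to cooperation)
print(pavlov(["C"], ["D"]))   # D  (payoff was S: shift to defection)
print(pavlov(["D"], ["C"]))   # D  (payoff was T: stay with defection)
```

The key difference from TIT FOR TAT shows in the second case: after mutual defection Pavlov offers cooperation again, so two Pavlov players recover from a single noise-induced error instead of echoing defections indefinitely.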
In a 2006 paper Nowak listed five mechanisms by which natural selection can lead to cooperation.[60] In addition to kin selection and direct reciprocity, he shows that:
- Indirect reciprocity is based on knowing the other player's reputation, which is the player's history with other players. Cooperation depends on a reliable history being projected from past partners to future partners.
- Network reciprocity relies on geographical or social factors to increase the interactions with nearer neighbors; it is essentially a virtual group.
- Group selection[61] assumes that groups with cooperators (even altruists) will be more successful as a whole, and this will tend to benefit all members.
The payoffs in the Prisoner's Dilemma game are fixed, but in real life defectors are often punished by cooperators. Where punishment is costly there is a second-order dilemma amongst cooperators, between those who pay the cost of enforcement and those who do not.[62] Other work has shown that individuals given a choice between joining a group that punishes free-riders and one that does not initially prefer the sanction-free group, but after several rounds they join the sanctioning group, seeing that sanctions secure a better payoff.[63]
In small populations or groups there is the possibility that indirect reciprocity (reputation) can interact with direct reciprocity (e.g. tit for tat) with neither strategy dominating the other.[64] The interactions between these strategies can give rise to dynamic social networks which exhibit some of the properties observed in empirical networks.[65]
And there is the very intriguing paper "The Coevolution of Parochial Altruism and War" by Jung-Kyoo Choi and Samuel Bowles. From their summary:
Altruism—benefiting fellow group members at a cost to oneself —and parochialism—hostility towards individuals not of one's own ethnic, racial, or other group—are common human behaviors. The intersection of the two—which we term "parochial altruism"—is puzzling from an evolutionary perspective because altruistic or parochial behavior reduces one's payoffs by comparison to what one would gain from eschewing these behaviors. But parochial altruism could have evolved if parochialism promoted intergroup hostilities and the combination of altruism and parochialism contributed to success in these conflicts.... [Neither] would have been viable singly, but by promoting group conflict they could have evolved jointly.[66]
They do not claim that humans have actually evolved in this way, but that computer simulations show how war could be promoted by the interaction of these behaviors.
Conclusion
When Richard Dawkins set out to "examine the biology of selfishness and altruism" in The Selfish Gene, he reinterpreted the basis of evolution, and therefore of altruism. He was "not advocating a morality based on evolution",[67] and even felt that "we must teach our children altruism, for we cannot expect it to be part of their biological nature."[68] But John Maynard Smith[69] was showing that behavior could be subject to evolution, Robert Trivers had shown that reciprocal altruism is strongly favored by natural selection to lead to complex systems of altruistic behavior (supporting Kropotkin's argument that cooperation is as much a factor of evolution as competition[70]), and Axelrod's dramatic results showed that in a very simple game the conditions for survival (be "nice", be provocable, promote the mutual interest) seem to be the essence of morality. While this does not yet amount to a science of morality, the game theoretic approach has clarified the conditions required for the evolution and persistence of cooperation, and shown how Darwinian natural selection can lead to complex behavior, including notions of morality, fairness, and justice. It is shown that the nature of self-interest is more profound than previously considered, and that behavior that seems altruistic may, in a broader view, be individually beneficial. Extensions of this work to morality[71] and the social contract[72] may yet resolve the old issue of individual interests versus group interests.
Recommended Reading
- Axelrod, Robert; Hamilton, William D. (27 March 1981), "The Evolution of Cooperation", Science 211: 1390–96, PMID 7466396, Bibcode: 1981Sci...211.1390A, http://www-personal.umich.edu/~axe/research/Axelrod%20and%20Hamilton%20EC%201981.pdf
- Axelrod, Robert (1984), The Evolution of Cooperation, Basic Books, ISBN 0-465-02122-0
- Axelrod, Robert (2006), The Evolution of Cooperation (Revised ed.), Perseus Books Group, ISBN 0-465-00564-0
- Axelrod, Robert (1997), The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration, Princeton University Press, ISBN 0-691-01567-8
- Dawkins, Richard ([1976] 1989), The Selfish Gene (2nd ed.), Oxford Univ. Press, ISBN 0-19-286092-5
- Gould, Stephen Jay (June 1997), "Kropotkin was no crackpot", Natural History 106: 12–21, http://www.marxists.org/subject/science/essays/kropotkin.htm
- Ridley, Matt (1996), The Origins of Virtue, Viking (Penguin Books), ISBN 0-670-86357-2
- Sigmund, Karl; Fehr, Ernest; Nowak, Martin A. (January 2002), "The Economics of Fair Play", Scientific American: 82–87, http://www.ped.fas.harvard.edu/people/faculty/publications_nowak/SciAm02.pdf
- Trivers, Robert L. (March 1971), "The Evolution of Reciprocal Altruism", Quarterly Review of Biology 46: 35–57, http://lis.epfl.ch/~markus/References/Trivers71.pdf
- Vogel, Gretchen (20 February 2004), "News Focus: The Evolution of the Golden Rule", Science 303 (5661): 1128–31, http://www.sciencemag.org/cgi/reprint/303/5661/1128.pdf
See also
- Co-operation (evolution)
Notes
- ↑ Axelrod's book was summarized in Douglas Hofstadter's May 1983 "Metamagical Themas" column in Scientific American (Hofstadter 1983; reprinted in his book, Hofstadter 1985); see also Richard Dawkins's summary in the second edition of The Selfish Gene (Dawkins 1989, ch. 12).
- ↑ Dawkins 1989, p. 1.
- ↑ Template:Harvs
- ↑ Template:Harvs
- ↑ See Poundstone (1992) for a good overview of the development of game theory.
- ↑ Technically, the prisoner's dilemma is any two-person "game" where the payoffs are ranked in a certain way. If the payoff ("reward") for mutual cooperation is R, for mutual defection is P, the sucker gets only S, and the temptation payoff (provided the other player is suckered into cooperating) is T, then the payoffs need to be ordered T > R > P > S, and satisfy R > (T+S)/2. (Axelrod 1984, pp. 8–10, 206–207).
- ↑ Axelrod 1984, p. 216 n. 2; Poundstone 1992.
- ↑ See Hardin (1968), "The Tragedy of the Commons".
- ↑ "By Means of Natural Selection" being the subtitle of his work, On the Origin of Species.
- ↑ Darwin 1859, pp. 75, 76, 320.
- ↑ Bowler 1984, pp. 94–99, 269–70.
- ↑ Rand 1961.
- ↑ Bowler 1984, pp. 94–99
- ↑ See Gauthier 1970 for a lively debate on morality and self-interest. Aristotle's comment on the effectiveness of philosophic argument: "For the many yield to compulsion more than to argument." (Nichomachean Ethics, Book X, 1180a15, Irwin translation)
- ↑ Kropotkin 1902, but originally published in the magazine Nineteenth Century starting in 1890.
- ↑ Axelrod 1984, p. 90; Trivers 1971.
- ↑ See Bowler (1984) generally.
- ↑ Wilson 1975.
- ↑ Darwin 1859, p. 237.
- ↑ Axelrod & Hamilton 1981; Trivers 1971, pp. 44, 48; Bowler 1984, p. 312; Dawkins 1989, pp. 7–10, 287, ch. 7 generally.
- ↑ Hamilton 1964.
- ↑ Dawkins 1989, p. 11.
- ↑ Wilson 1975, p. 3.
- ↑ Trivers 1971.
- ↑ Trivers 1971, pp. 38–39.
- ↑ Trivers 1971, p. 35.
- ↑ Trivers 1971, p. 47. More pointedly, Trivers also said (p. 48): "No concept of group advantage is necessary to explain the function of human altruistic behavior."
- ↑ To deter cheaters from exploiting altruists. And "in extreme cases, perhaps, to select directly against the unreciprocating individual by injuring, killing, or exiling him." (Trivers 1971, p. 49)
- ↑ Analogous to the situation in the IPD where, having once defected, a player voluntarily elects to cooperate, even in anticipation of being suckered, in order to return to a state of mutual cooperation. As Trivers says (p. 50): "It seems plausible ... that the emotion of guilt has been selected for in humans partly in order to motivate the cheater to compensate his misdeed and to behave reciprocally in the future...."
- ↑ Axelrod 1984.
- ↑ Axelrod 1984, p. 113.
- ↑ Axelrod 1984, p. 130.
- ↑ Axelrod 1984, pp. 62, 211.
- ↑ Axelrod 1984, p. 186.
- ↑ Axelrod 1984, p. 112.
- ↑ Axelrod 1984, pp. 110–113.
- ↑ Axelrod 1984, p. 25.
- ↑ Axelrod 1984, p. 120.
- ↑ Axelrod 1984, pp. 47, 118.
- ↑ Axelrod 1984, pp. 120+.
- ↑ Axelrod 1984, pp. 48–53.
- ↑ Axelrod 1984, p. 13.
- ↑ Axelrod 1984, pp. 18, 174.
- ↑ Axelrod 1984, p. 174.
- ↑ Axelrod 1984, p. 150.
- ↑ Axelrod 1984, pp. 63–68, 99.
- ↑ Axelrod 1984, p. 28.
- ↑ Axelrod 1984, p. 3.
- ↑ Axelrod 1997.
- ↑ Boyd 2006; Bowles 2006.
- ↑ Norenzayan & Shariff 2008.
- ↑ Nowak 2006.
- ↑ Axelrod & Dion 1988.
- ↑ Nowak, Page & Sigmund 2000; Sigmund, Fehr & Nowak 2002.
- ↑ Camerer & Fehr 2006.
- ↑ Roughgarden, Oishi & Akcay 2006.
- ↑ Riolo, Cohen & Axelrod 2001.
- ↑ Axelrod (1984, pp. 136–138) has some interesting comments on the need to suppress universal cooperators. See also a similar theme in Piers Anthony's novel Macroscope.
- ↑ Nowak & Sigmund 1992; see also Milinski 1993.
- ↑ Nowak 2006.
- ↑ Here group selection is invoked not as a mode of evolution in its own right, which is problematic (see Dawkins (1989), ch. 7), but as a mechanism for evolving cooperation.
- ↑ Hauert & others 2007.
- ↑ Gürerk, Irlenbusch & Rockenbach 2006.
- ↑ Phelps, S., Nevarez, G. & Howes, A., 2009. The effect of group size and frequency of encounter on the evolution of cooperation. In LNCS, Volume 5778, ECAL 2009, Advances in Artificial Life: Darwin meets Von Neumann. Budapest: Springer, pp. 37–44.
- ↑ Phelps, S., 2012. Emergence of social networks via direct and indirect reciprocity. Journal of Autonomous Agents and Multiagent Systems, DOI 10.1007/s10458-012-9207-8 (forthcoming).
- ↑ Choi & Bowles 2007, p. 636.
- ↑ Dawkins 1989, pp. 1 and 2.
- ↑ Dawkins 1989, p. 139.
- ↑ Kropotkin 1902. Why Kropotkin did not prevail is interesting – see Stephen Jay Gould's article "Kropotkin was no crackpot" (Gould 1997) – but beyond the scope of this article.
- ↑ Gauthier 1986.
- ↑ Kavka 1986.
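The payoff conditions given in the first note above can be checked mechanically. A minimal sketch in Python (the function name is for illustration; the example payoffs T=5, R=3, P=1, S=0 are the values Axelrod used in his tournaments):

```python
# Check whether a two-person game's payoffs form a prisoner's dilemma.
# T = temptation (defect against a cooperator), R = reward (mutual cooperation),
# P = punishment (mutual defection), S = sucker's payoff.

def is_prisoners_dilemma(T, R, P, S):
    """True iff T > R > P > S and R > (T + S) / 2 (Axelrod 1984, pp. 8-10)."""
    return T > R > P > S and R > (T + S) / 2

# Axelrod's tournament payoffs satisfy both conditions:
print(is_prisoners_dilemma(T=5, R=3, P=1, S=0))  # True: 5 > 3 > 1 > 0 and 3 > 2.5
# Lowering the reward breaks the second condition:
print(is_prisoners_dilemma(T=5, R=2, P=1, S=0))  # False: 2 is not > (5 + 0) / 2
```

The second condition, R > (T+S)/2, rules out games in which two players could do better by alternately exploiting each other than by cooperating every round.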
References[]
Most of these references are to the scientific literature, to establish the authority of various points in the article. A few references of lesser authority but greater accessibility are also included.
- Axelrod, Robert (1984), The Evolution of Cooperation, Basic Books, ISBN 0-465-02122-0
- Axelrod, Robert (1997), The Complexity of Cooperation: Agent-Based Models of Competition and Collaboration, Princeton University Press
- Axelrod, Robert (July 2000), "On Six Advances in Cooperation Theory", Analyse & Kritik 22: 130–151, http://www-personal.umich.edu/~axe/research/SixAdvances.pdf
- Axelrod, Robert (2006), The Evolution of Cooperation (Revised ed.), Perseus Books Group, ISBN 0-465-00564-0
- Axelrod, Robert; D'Ambrosio, Lisa (1996), Annotated Bibliography on the Evolution of Cooperation, http://www-personal.umich.edu/~axe/research/SixAdvances.pdf
- Axelrod, Robert; Dion, Douglas (9 December 1988), "The Further Evolution of Cooperation", Science 242 (4884): 1385–90, Bibcode: 1988Sci...242.1385A, http://www-personal.umich.edu/~axe/research/Axelrod%20Dion%20Further%20EC%20Science%201988.pdf
- Axelrod, Robert; Hamilton, William D. (27 March 1981), "The Evolution of Cooperation", Science 211: 1390–96, PMID 7466396, Bibcode: 1981Sci...211.1390A, http://www-personal.umich.edu/~axe/research/Axelrod%20and%20Hamilton%20EC%201981.pdf
- Binmore, Kenneth G. (1994), Game Theory and the Social Contract: Vol. 1, Playing Fair, MIT Press
- Binmore, Kenneth G. (1998a), Game Theory and the Social Contract: Vol. 2, Just Playing, MIT Press
- Binmore, Kenneth G. (1998b), Review of 'The Complexity of Cooperation', http://jasss.soc.surrey.ac.uk/1/1/review1.html
- Binmore, Kenneth G. (2004), "Reciprocity and the social contract", Politics, Philosophy & Economics 3: 5–6, http://mydocs.strands.de/MyDocs/06037/06037.pdf
- Bowler, Peter J. (1984), Evolution: The History of an Idea, Univ. of California Press, ISBN 0-520-04880-6
- Bowles, Samuel (8 December 2006), "Group Competition, Reproductive Leveling, and the Evolution of Human Altruism", Science 314: 1569–72, Bibcode: 2006Sci...314.1569B, http://www.santafe.edu/~bowles/GroupCompetition
- Bowles, Samuel; Choi, Jung-Kyoo; Hopfensitz, Astrid (2003), "The co-evolution of individual behaviors and social institutions", J. of Theoretical Biology 223: 135–147, http://www.santafe.edu/~jkchoi/jtb223_2.pdf
- Boyd, Robert (8 December 2006), "The Puzzle of Human Sociality", Science 314: 1555–56, http://xcelab.net/rm/wp-content/uploads/2008/10/boyd-evolution-human-cooperation-review.pdf
- Camerer, Colin F.; Fehr, Ernst (6 January 2006), "When Does 'Economic Man' Dominate Social Behavior?", Science 311: 47–52, Bibcode: 2006Sci...311...47C, http://www.hss.caltech.edu/~camerer/ScienceInteraction06.pdf
- Carnegie, Andrew (1900), The Gospel of Wealth, and Other Timely Essays
- Choi, Jung-Kyoo; Bowles, Samuel (26 October 2007), "The Coevolution of Parochial Altruism and War", Science 318: 636–40, Bibcode: 2007Sci...318..636C, http://www.sciencemag.org/cgi/reprint/318/5850/636.pdf
- Darwin, Charles ([1859] 1964), On the Origin of Species (A Facsimile of the First Edition ed.), Harvard Univ. Press
- Dawkins, Richard ([1976] 1989), The Selfish Gene (2nd ed.), Oxford Univ. Press, ISBN 0-19-286092-5
- Gauthier, David P. (1986), Morals by agreement, Oxford Univ. Press
- Gauthier, David P., ed. (1970), Morality and Rational Self-Interest, Prentice-Hall
- Gould, Stephen Jay (June 1997), "Kropotkin was no crackpot", Natural History 106: 12–21, http://www.marxists.org/subject/science/essays/kropotkin.htm
- Gürerk, Özgür; Irlenbusch, Bernd; Rockenbach, Bettina (7 April 2006), "The Competitive Advantage of Sanctioning Institutions", Science 312: 108–11, Bibcode: 2006Sci...312..108G, Archived from the original on 19 July 2011, http://web.archive.org/web/20110719060124/http://www.lrz.de/~u516262/webserver/webdata/guererketal2006_sanctioningmechanisms_science.pdf
- Hamilton, William D. (1963), "The Evolution of Altruistic Behavior", American Naturalist 97: 354–56, http://westgroup.biology.ed.ac.uk/teach/social/Hamilton_63.pdf
- Hamilton, William D. (1964), "The Genetical Evolution of Social Behavior", J. of Theoretical Biology 7: 1–16, 17–52, PMID 5875341, Archived from the original on 2009-12-29, http://web.archive.org/web/20091229084043/http://lis.epfl.ch/~markus/References/Hamilton64a.pdf
- Hardin, Garrett (13 December 1968), "The Tragedy of the Commons", Science 162 (3859): 1243–1248, PMID 5699198, Bibcode: 1968Sci...162.1243H, Archived from the original on 2005-03-28, http://web.archive.org/web/20050328172723/http://www.ldeo.columbia.edu/edu/dees/V1003/lectures/population/Tragedy%20of%20the%20Commons.pdf
- Hauert, Christoph; Traulsen, Arne; Brandt, Hannelore; Nowak, Martin A.; Sigmund, Karl (29 June 2007), "Via Freedom to Coercion: The Emergence of Costly Punishment", Science 316: 1905–07, Bibcode: 2007Sci...316.1905H, http://www.ped.fas.harvard.edu/people/faculty/publications_nowak/science07.pdf
- Henrich, Joseph (7 April 2006), "Cooperation, Punishment, and the Evolution of Human Institutions", Science 312: 60–61, Archived from the original on 29 June 2011, http://web.archive.org/web/20110629014507/http://www.sfu.ca/~wchane/sa304articles/Henrich.pdf
- Henrich, Joseph; and 13 others (23 June 2006), "Costly Punishment Across Human Societies", Science 312: 1767–70, PMID 16794075, Bibcode: 2006Sci...312.1767H, http://www2.psych.ubc.ca/~henrich/Website/Papers/Science/Henrichetal2006Science.pdf
- Hobbes, Thomas ([1651] 1958), Leviathan, Bobbs-Merrill [and others]
- Hofstadter, Douglas R. (May 1983), "Metamagical Themas: Computer Tournaments of the Prisoner's Dilemma Suggest How Cooperation Evolves", Scientific American 248: 16–26
- Hofstadter, Douglas R. (1985), "The Prisoner's Dilemma Computer Tournaments and the Evolution of Cooperation", Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books, pp. 715–730, ISBN 0-465-04540-5
- Kavka, Gregory S. (1986), Hobbesian moral and political theory, Princeton Univ. Press
- Kropotkin, Petr (1902), Mutual Aid: A Factor in Evolution
- Maynard Smith, John (1976), "Evolution and the Theory of Games", American Scientist 61: 41–45
- Maynard Smith, John (September 1978), "The Evolution of Behavior", Scientific American 239: 176–92
- Maynard Smith, John (1982), Evolution and the Theory of Games, Cambridge Univ. Press
- Melville, Herman ([1851] 1977), Moby-Dick, Bobbs-Merrill [and others]
- Milinski, Manfred (1 July 1993), "News and Views: Cooperation Wins and Stays", Nature 364 (6432): 12–13, Bibcode: 1993Natur.364...12M
- Morse, Phillip M.; Kimball, George E. (1951), Methods of Operations Research
- Morse, Phillip M.; Kimball, George E. (1956), "How to Hunt a Submarine", in Newman, James R., The World of Mathematics, 4, Simon and Schuster, pp. 2160–79
- Norenzayan, Ara; Shariff, Azim F. (3 October 2008), "The Origin and Evolution of Religious Prosociality", Science 322: 58–62, Bibcode: 2008Sci...322...58N, http://rifters.com/real/articles/Science_TheOriginandEvolutionofReligiousProsociality.pdf
- Nowak, Martin A (8 December 2006), "Five Rules for the Evolution of Cooperation", Science 314: 1560–63, PMID 17158317, PMC: 3279745, Bibcode: 2006Sci...314.1560N, http://www.ped.fas.harvard.edu/people/faculty/publications_nowak/Nowak_Science06.pdf
- Nowak, Martin A; Page, Karen M.; Sigmund, Karl (8 September 2000), "Fairness Versus Reason in the Ultimatum Game", Science 289 (5485): 1773–75, PMID 10976075, Bibcode: 2000Sci...289.1773N, http://www.ped.fas.harvard.edu/people/faculty/publications_nowak/Science00.pdf
- Nowak, Martin A.; Sigmund, Karl (16 January 1992), "Tit For Tat in Heterogeneous Populations", Nature 355 (6016): 250–253, Bibcode: 1992Natur.355..250N, http://www.ped.fas.harvard.edu/people/faculty/publications_nowak/Nature92b.pdf
- Nowak, Martin A.; Sigmund, Karl (1 July 1993), "A strategy of win-stay, lose-shift that outperforms tit for tat in Prisoner's Dilemma", Nature 364 (6432): 56–58, Bibcode: 1993Natur.364...56N, http://www.ped.fas.harvard.edu/people/faculty/publications_nowak/Nature93.pdf
- Poundstone, William (1992), Prisoner's Dilemma: John von Neumann, Game Theory and the Puzzle of the Bomb, Anchor Books, ISBN 0-385-41580-X
- de Quervain, D. J.-F.; Fischbacher, Urs; Treyer, Valerie; Schellhammer, Melanie; Schnyder, Ulrich; Buck, Alfred; Fehr, Ernst (24 August 2004), "The Neural Basis of Altruistic Punishment", Science 305: 1254, Bibcode: 2004Sci...305.1254D, http://www.sciencemag.org/cgi/reprint/305/5688/1254.pdf
- Rand, Ayn (1961), The Virtue of Selfishness: A New Concept of Egoism, The New American Library
- Rapoport, Anatol; Chammah, Albert M. (1965), Prisoner's Dilemma, Univ. of Michigan Press
- Riolo, Rick L.; Cohen, Michael D.; Axelrod, Robert (23 November 2001), "Evolution of cooperation without reciprocity", Nature 414 (6862): 441–43, Bibcode: 2001Natur.414..441R, http://www-personal.umich.edu/~axe/research/EC_wo_reciprocity.pdf
- Roughgarden, Joan; Oishi, Meeko; Akcay, Erol (17 February 2006), "Reproductive Social Behavior: Cooperative Games to Replace Sexual Selection", Science 311: 965–69, PMID 16484485, Bibcode: 2006Sci...311..965R, Archived from the original on 21 July 2011, http://web.archive.org/web/20110721215055/http://www.ecfs.org/projects/pchurch/AT%20BIOLOGY/PAPERS/ReplacingDarwinsSexualSelection.pdf
- Rousseau, Jean Jacques ([1762] 1950), The Social Contract, E. P. Dutton & Co. [and others]
- Sanfey, Alan G. (26 October 2007), "Social Decision-Making: Insights from Game Theory and Neuroscience", Science 318: 598–602, PMID 17962552, Bibcode: 2007Sci...318..598S
- Sigmund, Karl; Fehr, Ernst; Nowak, Martin A. (January 2002), "The Economics of Fair Play", Scientific American: 82–87, http://www.ped.fas.harvard.edu/people/faculty/publications_nowak/SciAm02.pdf
- Trivers, Robert L. (March 1971), "The Evolution of Reciprocal Altruism", Quarterly Review of Biology 46: 35–57, http://lis.epfl.ch/~markus/References/Trivers71.pdf
- Vogel, Gretchen (20 February 2004), "News Focus: The Evolution of the Golden Rule", Science 303 (5661): 1128–31, http://www.sciencemag.org/cgi/reprint/303/5661/1128.pdf
- Von Neumann, John; Morgenstern, Oskar (1944), Theory of Games and Economic Behavior, Princeton Univ. Press
- Wade, Nicholas (20 March 2007), "Scientist Finds the Beginnings of Morality in Primitive Behavior", New York Times: D3, http://www.nytimes.com/2007/09/18/science/18mora.html?pagewanted=1
- Williams, John D. (1954), The Compleat Strategyst, RAND Corp.
- Williams, John D. (1966), The Compleat Strategyst: being a primer on the theory of games of strategy (2nd ed.), McGraw-Hill Book Co.
- Wilson, Edward O. ([1975] 2000), Sociobiology: The New Synthesis (25th Anniversary ed.), Harvard Univ. Press, ISBN 0-674-00235-0
This page uses Creative Commons Licensed content from Wikipedia (view authors). |