Extortioners are only temporarily successful

A game theory experiment demonstrates that people allow themselves to be extorted only to a limited extent, contradicting models that predict the success of extortionate social conduct

May 29, 2014

Nice people are not always good people. Those who have always felt this intuitively can feel vindicated by the results of a recently published theoretical study: for the so-called Prisoner’s Dilemma, there are shrewd strategies a player can employ to force an opponent to cooperate and then systematically exploit that very cooperativeness. But how successful is such exploitative behaviour in real life? Scientists from the Max Planck Institute for Evolutionary Biology in Plön have now developed an experiment to study this question in detail. It turns out that extortioners are only successful in the short term: many people do not allow themselves to be permanently exploited.

Prisoner's dilemma in the laboratory: egoists are particularly successful when they meet others who are willing to cooperate.

The Prisoner’s Dilemma is a popular experiment among game theoreticians, which they use to study human social conduct. It involves two players, each of whom has no knowledge about the other, deciding either to cooperate or to behave selfishly. If both act selfishly, neither gets much out of it. If they both cooperate, they benefit together. However, the biggest profit is made by the one who decides to be selfish when the other is willing to cooperate.
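The payoff structure described above can be written down directly. The concrete numbers below are the conventional textbook values (T=5, R=3, P=1, S=0), not figures from the article; any values with the same ordering would do.

```python
# Illustrative Prisoner's Dilemma payoffs: T (temptation), R (reward),
# P (punishment), S (sucker's payoff). These exact numbers are an
# assumption -- the article does not state concrete values.
T, R, P, S = 5, 3, 1, 0

# PAYOFF[(my_move, their_move)] -> my payoff; "C" = cooperate, "D" = defect
PAYOFF = {
    ("C", "C"): R,  # both cooperate: both benefit
    ("C", "D"): S,  # I cooperate, they defect: I get almost nothing
    ("D", "C"): T,  # I defect against a cooperator: the biggest profit
    ("D", "D"): P,  # both act selfishly: neither gets much out of it
}

# The defining inequalities of the dilemma:
assert T > R > P > S       # unilateral defection pays best ...
assert 2 * R > T + S       # ... but mutual cooperation beats taking turns exploiting
```

The ordering T > R > P > S is what makes defection individually tempting even though mutual cooperation is jointly better.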

If the players only come up against each other once, the selfish approach is the more successful. But given that we normally encounter people more than once in life, it is more realistic to have each pair play several rounds against each other. In that case, cooperation yields the best result. Of all the game strategies tested to date, the “win-stay, lose-shift” strategy was long considered the most successful. This is a cooperative strategy in which players repeat their behaviour from the previous round if it was successful and switch only if it was not. For the past 20 years, the maxim for game theoreticians has therefore been: an unfair player cannot win; cooperation has the advantage over selfishness. In real life, however, things often look different.
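The win-stay, lose-shift rule is simple enough to sketch in a few lines. Here a “win” is taken to mean receiving one of the two high payoffs (R or T), using the conventional values T=5, R=3, P=1, S=0 as an assumption; the article itself gives no numbers.

```python
# Sketch of "win-stay, lose-shift" with assumed textbook payoffs.
T, R, P, S = 5, 3, 1, 0
PAYOFF = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def wsls(my_last, their_last):
    """Next move of a win-stay, lose-shift player."""
    won = PAYOFF[(my_last, their_last)] in (R, T)    # "win" = high payoff
    if won:
        return my_last                               # stay with what worked
    return "D" if my_last == "C" else "C"            # shift after a loss

# Two WSLS players restore cooperation two rounds after a one-off defection:
state = ("C", "D")         # player 2 defects by mistake
history = [state]
for _ in range(3):
    state = (wsls(state[0], state[1]), wsls(state[1], state[0]))
    history.append(state)
# history: [("C","D"), ("D","D"), ("C","C"), ("C","C")]
```

The trace shows why the strategy is considered cooperative: after an accidental defection both players defect once, both then receive the low payoff P, and both shift back to cooperation together.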

American scientists recently discovered an even more successful, though morally objectionable strategy. In it, a player does cooperate with their opponent, but only so that they can seek to benefit themselves at the right moment. “Such a player cooperates on occasion, quite deliberately, thereby misleading their opponent to cooperate with greater frequency. The opponent is literally forced to cooperate, only for the other player to strike when the time is right and seek their own advantage,” explained Christian Hilbe, who is now researching at Harvard.
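An extortionate strategy of the kind described in the quote can be sketched as a “memory-one” strategy: the extortioner cooperates with a probability that depends only on the previous round’s outcome, with the probabilities tuned so that the extortioner’s surplus over the punishment payoff P is always a fixed multiple chi of the opponent’s surplus. The payoffs (T=5, R=3, P=1, S=0) and the extortion factor chi=3 below are illustrative assumptions, not values from the article.

```python
# Sketch of an extortionate memory-one strategy (zero-determinant form),
# with assumed payoffs T=5, R=3, P=1, S=0 and extortion factor chi=3.
import random

T, R, P, S = 5, 3, 1, 0
chi, phi = 3, 1 / 26   # phi chosen small enough that all probabilities lie in [0, 1]

# Probability that the extortioner cooperates, given last round's
# outcome (extortioner_move, opponent_move):
p_coop = {
    ("C", "C"): 1 + phi * ((R - P) - chi * (R - P)),   # = 11/13
    ("C", "D"): 1 + phi * ((S - P) - chi * (T - P)),   # = 1/2
    ("D", "C"): phi * ((T - P) - chi * (S - P)),       # = 7/26
    ("D", "D"): 0.0,                                   # never forgive mutual defection
}

payoff = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

rng = random.Random(0)
state = ("C", "C")
pay_x = pay_y = 0.0
N = 200_000
for _ in range(N):
    x = "C" if rng.random() < p_coop[state] else "D"
    y = "C"                        # here: an opponent who always cooperates
    px, py = payoff[(x, y)]
    pay_x += px
    pay_y += py
    state = (x, y)

sx, sy = pay_x / N, pay_y / N      # long-run average payoffs
# The extortion relation  sx - P = chi * (sy - P)  holds by construction,
# so the more the opponent cooperates, the more the extortioner gains.
```

Against the fully compliant opponent simulated here, the extortioner averages roughly 3.7 per round while the opponent gets only about 1.9; yet the opponent can only improve their own payoff by cooperating even more, which is exactly the coercion Hilbe describes.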

Extortionate strategies were initially only the result of theoretical computations; the Max Planck researchers went on to put them to the test. The Plön-based scientists devised an experiment in which subjects played 60 rounds of the Prisoner’s Dilemma against an unknown opponent, unaware that the opponent was a computer program pursuing various game strategies.

The scientists’ findings showed that a player who exploits the other’s willingness to cooperate is, in actual fact, much more successful. “A player like this lures the other one in and then takes the payoff,” explains Hilbe. What makes it so deceitful: the opponent has no option but to enter into it and cooperate if they are to get any benefit out of it at all. “It’s downright extortion: you’re forced to cooperate more and more if you want to have any chance at all of increasing your own profit,” explains Manfred Milinski from the Max Planck Institute for Evolutionary Biology. “Many of the test persons who took part in our experiments came out of the experience extremely frustrated and developed real feelings of hatred towards their unknown opponent. Of course, what they didn’t know was that their opponent was a computer.”

The extortioner’s approach is calculated to include just enough cooperation to keep the opponent from refusing to comply altogether. Sooner or later, though, most people notice that the other player is acting in bad faith and stop cooperating. They forgo a small profit of their own, thereby depriving the extortioner of a bigger one. The extortioner is punished, which, in reality, would likely make them give in; the mindless computer strategy, however, never does.

Extortioners damage themselves in the long run: calculated across all rounds of the experiment, the profit achieved through extortionate strategies is relatively small. Generosity is more successful. If the virtual player did not attempt to cheat its human opponent, allowing them some success instead, the end result was always more profitable for the computer, too. “We actually appear to have the right strategies up our sleeve for spotting exploitative behaviour after a while and reacting to it,” said Milinski. “We lose a bit to exploiters, but then we punish them as a means of disciplining them. We do need to be on our guard, even with nice people.”
