

Mapping to Reality.

As my contribution to this discussion is over two years in the making, I'm going to step back before zeroing in on particulars.

For starters, both the Prisoner's Dilemma and the Platonia Dilemma are but two idealized games in all of game theory. They are idealized in that they assume the players are all completely rational (or Super-Rational) with substantiated beliefs about what the other players might do. Strictly speaking, neither is zero-sum; in a true zero-sum game the players' interests are diametrically opposed, whereas in the PD mutual cooperation leaves both players better off than mutual defection. Still, the one-shot PD has mutual defection as its Nash Equilibrium, while its iterated version rewards conditionally cooperative strategies like Tit-for-Tat, which famously won Axelrod's tournaments. Okay, moving on...
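A quick simulation makes the one-shot/iterated contrast concrete. This is only a sketch using conventional illustrative payoffs (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for temptation and sucker); the strategy names and numbers are my own, not from the original discussion:

```python
C, D = "cooperate", "defect"

# Payoff matrix: PAYOFF[(row_move, col_move)] = (row_payoff, col_payoff)
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else C

def always_defect(opponent_history):
    return D

def play(strat_a, strat_b, rounds=100):
    """Total payoffs for each strategy over repeated rounds."""
    hist_a, hist_b = [], []  # each strategy sees the *other's* history
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(tit_for_tat, always_defect))    # (99, 104): TFT loses only the first round
print(play(always_defect, always_defect))  # (100, 100): the "rational" rut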

Keep it in the Game
Mapping these games to morality is, to put it bluntly, just plain incorrect. As games, the decisions of each player should be confined to the rules of the game; you don't give up the ball in football or give up your queen in chess because it's the "right" thing to do. No, the goal of each game is simply to win, and you bound your decisions by the rules of the game. Morality has nothing to do with it; it's a game! The only moral defection is in breaking the game's rules to achieve victory, or in making decisions based upon external influences (such as throwing the fight because you've been bribed to lose). If a player wants to win the game for reasons beyond the scope of the game's rules, he is not in fact playing the same game. In other words, if you don't want to win, you aren't even playing.

If you want the PD to represent morality, I'd suggest you substitute "Good" and "Bad" (with areas of grey in between) for the dollar amounts in the matrix. There is then no moral high ground to justify a player's less-than-optimal play; the implementation details of what is Good and Bad are safely compartmentalized.

Real Life is not Zero-Sum
In most real-life interactions, each player benefits to some degree; it's just a matter of how much. You might haggle for a better price on your new bazooka, but in the end both you and the arms dealer get what you want. Morality maps onto non-zero-sum games with far less confusion than onto zero-sum ones, because it merely acts as a scalar modifier to each player's relative payoff. Since everyone still wins, it's easier to justify not winning as much--or in this case, paying just a bit more than you'd like for that bazooka. It is mostly in sports and intellectual quibbles that we see the zero-sum situation. I'd go so far as to say it doesn't exist even in war, as war often has no clear winner or loser.

Real Life is not Super-Rational
The Nash Equilibrium for the one-shot PD is for every player to defect. For players to reach this equilibrium, they need to play based on the assumption that everyone else will make the best play. However, each player knows at a gut level that the others' best play may not be rational (in other words, it may be based on some moral high ground, as mentioned above), and thus adjusts his play accordingly. This has led to the development of behavioral game theory, which concentrates on interactions between real people rather than Super-Rational ones. The value of the Nash Equilibrium as an idealized outcome is simply to bound our expectations.
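The best-response logic behind that equilibrium can be checked mechanically. A minimal sketch, again with illustrative payoff numbers of my own choosing: a profile is a Nash Equilibrium exactly when neither player can gain by unilaterally switching moves.

```python
C, D = "cooperate", "defect"

# Entries are (row player's payoff, column player's payoff).
PAYOFF = {
    (C, C): (3, 3),  # mutual cooperation
    (C, D): (0, 5),  # sucker vs. temptation
    (D, C): (5, 0),
    (D, D): (1, 1),  # mutual defection
}

def is_nash(row, col):
    """True if neither player gains by unilaterally deviating."""
    r, c = PAYOFF[(row, col)]
    for alt in (C, D):
        if PAYOFF[(alt, col)][0] > r:  # row player switches
            return False
        if PAYOFF[(row, alt)][1] > c:  # column player switches
            return False
    return True

equilibria = [profile for profile in PAYOFF if is_nash(*profile)]
print(equilibria)  # only (defect, defect) survives the check
```

Every other profile fails because at least one player would profit from switching to defection; that is the sense in which the equilibrium bounds our expectations.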

There is no One-Shot Game
I truly don't believe there is much to contest in what I've said thus far; I think it is in line with what both Aziz and Keith have stated. In fact, I apologize if it merely sounds like I'm re-hashing what's already been covered. Nonetheless, the main point I wanted to make is the following: I don't think the one-shot game exists in real life.

"Oh, but it does. I interact with strangers all the time." Yes, we all interact with strangers, but do you approach each interaction fresh and without predispositions? No, and in fact this is the premise of my NPC Theory (which could probably use some revisions). Aziz says that Stranger's conglomerate strategy is Random (personal correspondence), but I disagree. I believe that we all have pretty good ideas of how interactions with Stranger will go. Think of it as a class hierarchy, where, say, Stranger, Friend, Family, and so on all inherit from the common Person class (or, as I put it in my NPC theory, each occupies a different role space).

John is a new Person we encounter in real life. He could be considered an instance of Stranger, and thus has some default properties based not only upon our entire lifetime of experience with Stranger, but also our observation of Stranger's interactions with other Persons. We might know nothing about John in particular, but we nonetheless have a framework within which to make decisions. Thus, while my interaction with John might be a one-shot case, my interaction with Stranger is not; my play is influenced by Stranger's past behavior, and John's play is influenced by his past experiences with other Strangers.
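The class-hierarchy analogy can be sketched directly in code. The trust values and the `opening_move` rule below are purely my own illustrative guesses, not part of NPC Theory:

```python
class Person:
    """Base role: default expectations for interacting with anyone."""
    default_trust = 0.5  # illustrative prior, not a measured value

    def opening_move(self):
        return "cooperate" if self.default_trust >= 0.5 else "defect"

class Stranger(Person):
    # Priors built from a lifetime of dealings with strangers,
    # including watching how strangers treat other people.
    default_trust = 0.6

class Friend(Person):
    default_trust = 0.9

class Family(Person):
    default_trust = 0.95

# "John" is new to us, but he arrives as an instance of Stranger,
# so his default properties come from the Stranger role, not from nothing.
john = Stranger()
print(john.opening_move())  # "cooperate"
```

The point the sketch captures is that `john` is never a blank object: even a first meeting inherits defaults from the role, which is why the interaction is not truly one-shot.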

Granted, the properties of Stranger will vary depending on time and place. A Stranger in feudal Japan has different properties than a Stranger in modern New York. However, we know this, and unless we have been suddenly whisked away to an alien place or have utterly un-plastic minds, we have a good idea of how to interact with Stranger. When we don't, well, that's what we call culture shock.

In summary, I don't see one-shot payoff matrices as relevant to morality, but I'd definitely be interested to see how game theory as a whole could be.

permalink | posted by Phlegm | 8.12.2006 | 0 comments