INDEX.
Welcome to Super-rational, a group discussion in the spirit of Enlightenment values and in celebration of the faculty of Reason. This blog is structured as an extended conversation, where each post is formulated as a response to the previous ones. This post, the INDEX, lists each post in chronological order so that the integrity of this conversation is maintained for a new reader.

- The Platonia Dilemma by Aziz
- Enlightened self-interest by araven
- I choose to defect by Aziz
- I choose to cooperate by Keith
- Right and wrong by Aziz
- Payoff and right by Keith
- Mapping to reality by Phlegm
- I choose to cooperate - because I can by Matoko
- Super? Super-rational by sourat
In addition, the right sidebar features a del.icio.us-powered list of links that pertain to Reason and the Enlightenment. Anyone may submit a link to this feed by registering a del.icio.us account and then tagging the link with "enlightenment" and "for:azizhp".
I choose to cooperate--because I can.
Hofstadter was disappointed in his results; he says, "Cooperate was the answer I was hoping to receive from everyone." In the problem definition for the single-shot Platonia Dilemma, he deliberately selected individuals who he believed would be superrational. Not a random sample.

The answer closest to what I would have done was Scott Buresh's. Hofstadter says,
...he treated his own brain as a simulation of other people's brains and ran the simulation enough to get a sense of what a typical person would do.

The results of Buresh's simulation (or Einsteinian thought experiment) parallel Hofstadter's sampling results. In Buresh's simulation, the typical player chose to cooperate roughly one third of the time; in Hofstadter's actual sample, two thirds of the chosen players chose to defect, Buresh included. This aligns with my own hypothesis: that reason is non-deterministic.
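Buresh's introspective sampling is easy to mimic in code. Here is a minimal sketch (my own construction, not Buresh's actual procedure), assuming 20 players who each cooperate independently with probability 1/3, with the pairwise payoffs from Hofstadter's letter quoted in the first post below:

```python
import random

# A sketch of Buresh-style sampling (hypothetical parameters, my own
# construction): 20 players each cooperate with probability 1/3, and
# payoffs follow the pairwise matrix from Hofstadter's letter
# (C/C: $3 each; D vs C: $5/$0; D/D: $1 each).

def play_round(n_players=20, p_cooperate=1/3, seed=None):
    rng = random.Random(seed)
    moves = ['C' if rng.random() < p_cooperate else 'D' for _ in range(n_players)]
    c = moves.count('C')                  # cooperators this round
    d = n_players - c                     # defectors this round
    payoffs = [3 * (c - 1) if m == 'C'    # $3 from each *other* cooperator
               else 5 * c + (d - 1)       # $5 per cooperator, $1 per other defector
               for m in moves]
    return moves, payoffs

moves, payoffs = play_round(seed=42)
print(moves.count('D'), "defectors; distinct payoffs:", sorted(set(payoffs)))
```

Run it a few times: whenever both moves appear, the defectors out-earn the cooperators, exactly as in Hofstadter's actual sample.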
Phlegm is right: it is not possible to think of the single-shot PD as a black box, bereft of I/O. Aziz says he can choose to cooperate based on the mores and social values he has learned (input). Brenner (from Hofstadter's original sample) cooperated, and says he feared being labelled a defector in a published journal (output).
I do not, however, believe that this is a "moral" choice. Morals can contribute to the iterated PD. (Phlegm, you should read Maynard Smith on this.) But for the single-shot case, it is more about survival versus altruism.
Now, at this point I could launch into a long-winded discussion of the evolution of cooperation, but Aziz has strictly restricted me to the single-shot case.
So in closing, I say: I choose to cooperate--because I can. ;)
Mapping to Reality.
As my contribution to this is over two years in waiting, I'm going to step back before zoning in on particulars.

For starters, both the Prisoner's Dilemma and the Platonia Dilemma are but two idealized games among all of game theory. They are idealized in that they assume the players are all completely rational (or Super-Rational) with substantiated beliefs about what the other players might do, and adversarial in that each player's temptation to defect pits his interests against the others'. (Strictly speaking, neither game is zero-sum, since mutual cooperation pays everyone more than mutual defection, but the temptation gives them their adversarial flavor.) Their one-shot versions favor defection as the Nash Equilibrium, while their iterated versions favor Tit-for-Tat. Okay, moving on...
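To make that one-shot/iterated contrast concrete, here is a minimal sketch (my own illustration, using the pairwise payoff values from Hofstadter's letter quoted in the first post below): a defector can squeeze a few extra dollars out of Tit-for-Tat once, but mutual Tit-for-Tat earns far more over time.

```python
# A minimal sketch (illustration only) of why iteration favors Tit-for-Tat,
# using the pairwise payoffs quoted later in this blog:
# C/C -> 3 each, C/D -> 0/5, D/D -> 1 each.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def iterate(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each strategy sees the other's past moves
        pa, pb = PAYOFF[(a, b)]
        score_a += pa; score_b += pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

print(iterate(tit_for_tat, tit_for_tat))    # (300, 300): steady mutual cooperation
print(iterate(tit_for_tat, always_defect))  # (99, 104): defection "wins" but earns far less
```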
Keep it in the Game
Mapping these games to morality is, to put it bluntly, just plain incorrect. As games, the decisions of each player should be confined to the rules of the game; you don't give up the ball in football or give up your queen in chess because it's the "right" thing to do. No, the goal of each game is simply to win, and you bound your decisions by the rules of the game. Morality has nothing to do with it; it's a game! The only moral defection is in breaking the game's rules to achieve victory, or in making decisions based upon external influences (such as throwing the fight because you've been bribed to lose). If a player wants to win the game for reasons beyond the scope of the game's rules, he is not in fact playing the same game. In other words, if you don't want to win, you aren't even playing.
If you want the PD to represent morality, I'd suggest you substitute "Good" and "Bad" (with areas of grey between) for the dollar amounts in the matrix. There is then no moral high-ground to justify a player's less-than-optimal play; the implementation details of what is Good and Bad are safely compartmentalized.
Real Life is not Zero-Sum
In most real-life interactions, each player benefits to some degree; it's just a matter of how much. You might haggle for a better price for your new bazooka, but in the end both you and the arms dealer get what you want. It is easier to see how morality maps onto non-zero-sum games than onto zero-sum ones, because it merely acts as a scalar modifier to the relative payoffs of each player. Since everyone still wins, it's easier to justify not winning as much--or in this case, paying just a bit more than you'd like for that bazooka. It is mostly in sports and intellectual quibbles that we see the zero-sum situation. I'd go so far as to say it doesn't even exist in war, as war often has no clear winner or loser.
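A toy illustration of the "scalar modifier" idea, with all numbers invented for the haggling example above:

```python
# Toy illustration (numbers invented): morality enters a non-zero-sum deal
# as a modifier on each party's effective payoff, not as a rule that flips
# a win into a loss.

def effective_payoff(monetary_gain, moral_value, moral_weight=0.5):
    """Subjective utility = money gained plus how much doing right is worth to you."""
    return monetary_gain + moral_weight * moral_value

# The buyer pays $15 more than hoped but still values the bazooka at $25
# over the price paid; paying a fair price also carries a small moral bonus.
buyer = effective_payoff(monetary_gain=25, moral_value=10)
dealer = effective_payoff(monetary_gain=15, moral_value=0)
print(buyer, dealer)   # 30.0 15.0: everyone still wins, just by different margins
```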
Real Life is not Super-Rational
The Nash equilibrium for the one-shot PD is for every player to defect. For players to reach this equilibrium, they need to play based on the assumption that everyone else will make the best play. However, each player knows at a gut level that each player's best play may not be rational (in other words, it may be based on some moral high-ground, as mentioned above), and thus adjusts his play accordingly. This has led to the development of behavioral game theory, which concentrates on interactions between real people rather than Super-Rational people. The value of the Nash Equilibrium as an idealized outcome is simply to bound our expectations.
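A quick dominance check makes the equilibrium explicit (a sketch; the payoff values are the standard ones quoted in the first post below):

```python
# Row player's payoff for (my move, their move); standard values quoted
# elsewhere in this blog.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

for theirs in ('C', 'D'):
    best = max(('C', 'D'), key=lambda mine: PAYOFF[(mine, theirs)])
    print(f"If they play {theirs}, my best response is {best}")
# Both lines print D: defection strictly dominates, so (D, D) is the unique
# Nash equilibrium even though (C, C) would pay both players more.
```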
There is no One-Shot Game
I truly don't believe there is much to contest in what I've said thus far; I think it is in line with what both Aziz and Keith have stated. In fact, I apologize if it merely sounds like I'm re-hashing what's already been covered. Nonetheless, the main point I wanted to make is the following: I don't think the one-shot game exists in real life.
"Oh, but it does. I interact with strangers all the time." Yes, we all interact with stangers, but do you approach each interaction fresh and without predispositions? No, and in fact this is the premise of my NPC Theory (which could probably use some revisions). Aziz says that Stranger's conglomerate strategy is Random (personal correspondence), but I disagree. I believe that we all have pretty good ideas how interactions with Stranger will go. Think of it as a class heirarchy, where, say, Stranger, Friend, Family, and so on all inherit from the common Person class (or as I put it in my NPC theory, each occupies a different role space).
John is a new Person we encounter in real life. He could be considered an instance of Stranger, and thus has some default properties based not only upon our entire lifetime of experience with Stranger, but also our observation of Stranger's interactions with other Persons. We might know nothing about John in particular, but we nonetheless have a framework within which to make decisions. Thus, while my interaction with John might be a one-shot case, my interaction with Stranger is not; my play is influenced by Stranger's past behavior, and John's play is influenced by his past experiences with other Strangers.
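A sketch of that hierarchy in Python (class names and trust values are mine, purely illustrative, not a literal spec of NPC Theory):

```python
class Person:
    default_trust = 0.5                  # baseline prior for any human

    def __init__(self, name):
        self.name = name
        self.trust = self.default_trust  # new instances start from the role's defaults

class Stranger(Person):
    default_trust = 0.4                  # shaped by a lifetime of dealings with Strangers

class Friend(Person):
    default_trust = 0.9

john = Stranger("John")                  # we know nothing about John in particular...
print(john.name, john.trust)             # ...yet we meet him with Stranger's priors: 0.4
```

The design mirrors the argument: john is constructed with Stranger's learned defaults, so even a "one-shot" encounter begins from accumulated history rather than from random.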
Granted, the properties of Stranger will vary depending on time and place. A Stranger in feudal Japan has different properties than a Stranger in modern New York. However, we know this, and unless we have been suddenly whisked away to an alien place or have utterly un-plastic minds, we have a good idea of how to interact with Stranger. When we don't, well, that's what we call culture shock.
In summary, I don't see one-shot, adversarial payoff matrices as relevant to morality, but I'd definitely be interested to see how game theory as a whole could be.
Payoff and Right.
Aziz writes:

I may have read Keith wrongly, but it seems that he explains Hofstadter's colleagues' defection as a simple function of humans being flawed. Namely, the "right" thing to do was to cooperate, and they were "wrong" to defect. This view is of course founded on the a priori assumption that cooperation still is the "right" course of action, and any deviation from that course is the result of a flawed analysis.
I did couch a potential explanation of their defection in those terms, but not because I was a priori associating cooperation with 'right' and defection with 'wrong'. Hofstadter says the point of the Platonia Dilemma game is to maximize one's individual return. According to the Platonia pay-off matrix, if everyone cooperates, each will receive the most money ($57). That maximal monetary return is the point of favoring cooperation in this analysis, regardless of whether we consider cooperation the 'right thing to do.' (I cooperated because I thought it was the 'right thing to do,' but Aziz appears to conflate my motives with my analysis of Hofstadter's colleagues.)
As an individual entering into the Platonia Dilemma, I must seek to maximize my return. I recognize that if I cooperate AND everyone else cooperates, I'll get the maximal return possible. However, I have to judge the likelihood that everyone else will cooperate, because my pay-off is dependent on the choices of others. Hofstadter thought that each actor would recognize that all the other actors are rational decision-makers, and cooperate based on trust that the others would as well. This meta-analysis constitutes 'superrationality'.
What I was pointing out in my post is that we must consider our own limited rationality as an assumption in our reasoning. When we realize that it's likely that not everyone will see things the same way, that someone will distrust others and therefore defect in their own self-interest, we end up with the results that Hofstadter got: lots of defections. These individuals were acting in their own self-interest based on the assumption that not everyone would cooperate. That is, they were rejecting the conjunctive assumption 'AND everyone else cooperates.' This is not a flawed analysis; it's a perfectly valid one. I'll grant that my post may have made it sound flawed, because I was talking about the "mental defect" of humans as limited creatures. However, that's just one ground for believing the assumption that 'someone's going to defect' and therefore rationally choosing to defect oneself in order to maximize return. There could be many other grounds for rejecting the assumption that others will cooperate. Since these grounds exist and will be exercised by humans in real-life situations, we do best to avoid single-shot Prisoner's Dilemma AND Platonia Dilemma situations.
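The point can be put in expected-value terms. A sketch (the formula is mine): with 19 other players each cooperating independently with probability p, under the letter's pairwise payoffs, defection yields a higher expected return for every p. Note that independence is exactly the assumption superrationality rejects; grant any doubt at all about universal like-mindedness and 'D' becomes the self-interested play.

```python
# Expected return in the 20-player game (a sketch, formula mine), assuming
# each of the 19 others cooperates independently with probability p.

def expected_return(my_move, p, others=19):
    if my_move == 'C':
        return others * 3 * p                  # $3 from each cooperator, $0 from defectors
    return others * (5 * p + 1 * (1 - p))      # $5 per cooperator, $1 per defector

for p in (1.0, 0.9, 0.5, 0.0):
    print(f"p={p}: C pays {expected_return('C', p):.1f}, D pays {expected_return('D', p):.1f}")
# Even at p=1.0, D pays 95 to C's 57: once the others' choices are treated
# as independent of mine, defection dominates at every level of trust.
```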
In any event, I don't think Aziz and I are in fundamental conflict on any of that. However, I still am confused about Aziz's railing against identifying the Prisoner's Dilemma with morality:
This is why I stated that morality and the PD are incompatible concepts. Morality is a set of values, the PD is a game based on a payoff matrix. The payoff matrix is not a moral matrix, it is usually rigidly defined according to some quantitative metric: a set number of dollars, for example (in the Platonia Dilemma). Of course the qualitative moral implications are related to the quantitative payoff, but that relationship is often quite complex.
Who's trying to say that the Prisoner's Dilemma == morality? All I've heard in the literature is the very plausible argument from Axelrod et al. that situations arising in natural selection that can be modeled by the Iterated Prisoner's Dilemma give rise to seemingly altruistic behavior that, in humans, could explain in naturalistic terms the origin and evolutionary success of morality.
I like Keith's argument that we need to find a way to rephrase social problems so as to avoid the Prisoner's Dilemma entirely, but I think this is largely an impossible task, aside from ruinously forceful intrusion of the government into personal liberties. And even in that scenario, what the government decides is "right" is not necessarily so. You can never escape the Henkin paradox when dealing with issues such as Right and Wrong in social policy.
While I agree with the last sentence at least to the extent that we’ll rarely have unanimity on the right course of action, I disagree with Aziz’s pessimism that any government interventions must be ruinous to personal liberty and with the implication that we should do nothing even if the preponderance of evidence indicates we should.
In fact, the Prisoner's Dilemma shows us how to take appropriate action in a minimally interventionist way. The EPA's SO2 emissions trading scheme is a great example. Here's a pollutant that causes acid rain, a social negative only partially felt by the actor (the energy company) causing the problem by burning coal to generate electricity for sale (the pay-off that's all theirs, creating an incentive to pump more SO2 into the atmosphere). The EPA could've banned SO2, but that would've been (in the short term, for certain companies and localities) economically disastrous. So they instituted caps and started an emissions-credit trading market that changed the pay-off matrix such that emissions would be reduced without micromanaging regulation for each utility.
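A sketch of the mechanism (all numbers invented; this is not the EPA's actual pricing): impose a sufficient per-defection cost and each player's best response flips from defect to cooperate, with no further micromanagement needed.

```python
# Hypothetical numbers, not the EPA's: a per-defection fine re-shapes the
# standard matrix so that cooperating (abating) becomes the best response.

BASE = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
EMISSION_FINE = 3   # invented cost imposed on every defection

def payoff(mine, theirs):
    return BASE[(mine, theirs)] - (EMISSION_FINE if mine == 'D' else 0)

for theirs in ('C', 'D'):
    best = max(('C', 'D'), key=lambda mine: payoff(mine, theirs))
    print(f"Against {theirs}, the best response is now {best}")
# Any fine larger than the temptation gap (5 - 3 = 2) flips the equilibrium
# from mutual defection to mutual cooperation.
```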
Of course, we're all used to and appreciative of much more intrusive regulation on a personal level already. Enforcing property rights is one way the government right now avoids situations where the pay-off matrix in a single-shot interaction favors 'defection', as in stealing and plundering. We've accepted the restriction on personal liberty not to take another's property, and it's served us well to have government enforce such regulations strictly.
The bottom line is that the intervention must be proportional to the harm. The real problem with government intervention is in the disagreements in assessing harm and therefore the level of regulation required to avoid it.
Right and wrong.
As Keith noted, the single-shot PD's self-interest matrix is biased towards defection. Note that tit-for-tat is only an optimal strategy for the iterated PD, and so it is not really applicable to the single-shot case, since the interaction may never progress to "tat."

I may have read Keith wrongly, but it seems that he explains Hofstadter's colleagues' defection as a simple function of humans being flawed. Namely, the "right" thing to do was to cooperate, and they were "wrong" to defect. This view is of course founded on the a priori assumption that cooperation still is the "right" course of action, and any deviation from that course is the result of a flawed analysis. However, as Hofstadter himself points out, there is an analogy to the Henkin sentence "This sentence is true." It is equally defensible on the grounds of logic alone to believe the sentence to be "right" or "wrong."
This is why I stated that morality and the PD are incompatible concepts. Morality is a set of values, the PD is a game based on a payoff matrix. The payoff matrix is not a moral matrix, it is usually rigidly defined according to some quantitative metric: a set number of dollars, for example (in the Platonia Dilemma). Of course the qualitative moral implications are related to the quantitative payoff, but that relationship is often quite complex.
Take Kim's Wal-Mart example. You could make a moral argument that supporting that store helps fund social causes that negatively affect society. You could make an equally compelling moral argument that the money you save, partly used to fund social causes that benefit society, has more effect. My yearly donation of $15 to the Sierra Club has a real impact in terms of lobbyist salaries and influencing policy, for example, whereas since there are always people willing to defect and shop at Wal-Mart, the net effect of not shopping there is almost negligible (and Kim ends up having less money in her pocket anyway).
Does that mean that you should shop at Wal-Mart even if you believe in the social causes at risk from that store's methods of doing business? Not at all - see the graphic from the Business Week article (free registration required).
What is misleading in the Costco example, however, is the selective metrics used to make the argument. Yes, Costco's profits per employee and per square foot are higher than Wal-Mart's, but what if you compare stock prices (COST vs. WMT)?
By which metrics do we determine the winner of the payoff matrix, then? Clearly, the "winner" is subjective. Wal-Mart "defects" by paying employees less, Costco cooperates by paying them more, and yet both claim victory. One consumer shops at Wal-Mart and uses a portion of the savings to donate to liberal organizations, another shops at Costco and pays more. Both will claim moral high ground. You could take the argument to an extreme and ask, why do we spend even one dollar on the space program or the NEA when we could be ending world hunger with the same monies?
Overall, these gray areas are inescapable. This is the real reason that Hofstadter's colleagues - including Axelrod himself - were perfectly justified in choosing to defect. Not because they were wrong, but because they were also right. Right and Wrong simply don't map cleanly onto the payoff matrix space.
I like Keith's argument that we need to find a way to rephrase social problems so as to avoid the Prisoner's Dilemma entirely, but I think this is largely an impossible task, aside from ruinously forceful intrusion of the government into personal liberties. And even in that scenario, what the government decides is "right" is not necessarily so. You can never escape the Henkin paradox when dealing with issues such as Right and Wrong in social policy.
It is tempting to try to relate these kinds of issues to simple, one-dimensional morality issues like "do not steal" and "share", but in real life it is precisely the more complex, gray-area situations, the ones not so easily characterized, that we most need to solve. In that sense, Keith's wish is granted, since the PD is simply inapplicable.
I choose to cooperate.
I do so not because I think it'll necessarily bring me the greatest benefit in any given instance but because it's the right thing to do and I was taught to do the right thing.

I must grant the possibility (to Kim & Nigel) that if one were to psychoanalyze me, I may only be "doing the right thing" when it doesn't appear to be in my self-interest because I am subconsciously performing a hedonistic calculus: one that puts "feeling superior" for having done the right thing on the positive side of the balance, to compensate for whatever exterior loss I may be facing. In that case I'm still acting purely in my own self-interest, just with my own personal, internal, subjective pay-off matrix. It's quite hard to say.
It's also possible that I'm choosing to cooperate not based on any extensively reasoned basis at all. As Daniel Dennett argues compellingly in Darwin's Dangerous Idea, Herbert Simon's method of "satisficing" is the only practical way for a limited being such as a human to come to a decision in the first place. Reasoning through all the possibilities is, for most purposes, impossible. There's simply not enough time to calculate all the effects of a particular line of conduct. Humans have therefore developed a lot of shortcuts and "conversation stoppers" to end discussion and choose a path. As Aziz says, (human) reason is not logic. Suboptimal as they may be, our shortcut, satisficed decisions are "close enough" and better than the paralysis-by-analysis that gives us no decision at all. (Sabertooth tiger approaching. Hmmm. If I were to climb a tree I could escape being eaten, which would be good. Which tree? Well, that one's too small. That one's too slippery. If only I had a large club with me. Perhaps I could fashion one out of some nearby debris. What if we invented a sort of poking device with a long stick and a sharpened rock? Eaten! Should've just RUN!)
Morality, a set of prescriptive rules handed down from generation to generation, is one of those satisficing intellectual shortcuts, a conversation stopper that keeps us making fairly successful decisions without laboring over them each time. It's what gives me that sense of "right" that impels me to cooperate. However, if we accept the Darwinian framework, the fact that I was taught that cooperating was "the right thing" is in need of some explanation. For survival purposes, one would expect individuals always to act only in their own self-interest, which would on the surface imply non-cooperation. However, as Robert Axelrod has so elegantly shown using the iterated prisoner's dilemma (IPD), cooperation is an evolutionarily stable strategy that explains the existence of "altruism" on the very basis of self-interest. It is, as Kim says, the long view of "enlightened" self-interest.
In the single-shot PD, however, the opposite is true and self-interest favors defection. But in the everyday course of life we more often encounter variations of the IPD, so we should be more inclined to act in accord with the "tit-for-tat" strategy of cooperating but punishing defections than with the purely selfish calculus of the one-shot pay-off matrix. One of the rules of that tit-for-tat strategy is to cooperate first, only defecting to punish past defections. Perhaps that's another reason why I choose to cooperate. While I may consciously know that I would be better off financially defecting in a one-shot PD situation, do I really know it's a one-shot PD? Even if I'm guaranteed never to meet the same actors in the same situation again, who's to say that I won't face off against someone watching this match in the future? People have long memories and don't often forget cheating. In an IPD world, grudges have real survival benefit.
The reason Hofstadter's colleagues disappointed him in his Platonia Dilemma experiment is that, unfortunately, we don't live in Plato's perfect world of forms. People are not perfect. We are not (all) superrational. We make decisions by satisficing. As Aziz says, it's entirely possible for even rational thinkers to be wrong. This mental defect (a side effect of being limited beings, really) is a basic assumption that must be included in any discussion regarding human action. When we do include this assumption and recognize that not everyone will reason the same way we do, we end up with the "reverberant doubt" that causes defection in the Platonia Dilemma: Someone's going to defect. Someone's going to cheat! Fully understanding the implications of that assumption reveals the foolishness of both the trickle-down economics that Kim rails against and the expectation that any other group will consistently "do the right thing" when it's not in their self-interest (economic or otherwise) to do so. (Kim & I being the superrational exceptions to the rule. Or more likely, just exceptions based on our personalities, or the good job our parents did teaching us right from wrong, or some other factor having nothing to do with Hofstadter's notion of superrationality.) But regardless of how many "good apples" cooperate, someone's going to cheat.
Given that, the answer to the conflict between morality and self-interest is not to fight human nature but to change the rules of the game. Aziz is correct when he says the demands of the PD and morality are often in conflict.* In particular, they are in conflict in the single-shot PD scenario. So, we need to avoid single-shot PDs whenever possible. We must avoid "tragedy of the commons" situations (whose pay-off matrix is identical to the PD's and results in the same kind of defection) by imposing external regulations or costs that change the matrix to favor cooperation. This is how we protect the "do-gooders" from the less scrupulous "bad apples" who act only in their short-term self-interest. We invent strategies that maximize cooperation for non-zero-sum benefit. We carefully constrain zero-sum interactions. We find the defectors and punish them.
Of course, that leaves open the question of what we should do when we find ourselves in a single-shot PD situation. How do we act when the two are in direct conflict and we don't have recourse to external structural changes that alter the pay-off? How do we choose between "doing the right thing" and acting in our own self-interest? Between morality and the PD pay-off matrix?
What do we do? We satisfice. Aziz defects, knowing the cruel logic of the PD. Kim & I cooperate, assuming either the hedonic benefit of the high moral ground or slavishly following our internalized morality or allowing for the more pragmatic possibility there will be another match to play and what now appears a single-round PD is in fact an IPD in a larger context.
Given the particular situation and the particular pay-off, I expect that any of us may flip-flop our decision for one (satisficed and limited) reason or another.
*Postscript: While I agree that in the single-shot PD, the PD pay-off and the demands of morality are in conflict, that's not the case in the IPD. I'm not sure what Aziz is referring to when he finds "attempts to unify morality and PD-strategy to be deeply flawed." The analyses I've read from Axelrod (cited by Aziz), reiterated and expanded on by Robert Wright (The Moral Animal, Nonzero), are explanations of the appearance of cooperation and morality based on the survival advantage granted by such win-win behavior in the commonly occurring IPD scenario. I've not heard of anything that equates defection in the single-shot PD with the moral choice of "doing the right thing," which is what Aziz seems to imply when he calls them orthogonal decision-making strategies.
I also would quibble with Aziz's assertion that the demands of morality are subjective. While I can certainly agree that the body of rules handed down from generation to generation is subjective in its purported moral content, there is a sense in which, to the extent those rules confer survival benefit, support the growth of life, and open up potential, they have "objective" value. I believe that the naturalization of ethics through the IPD points a way to a non-subjective morality, but that's a matter for a different blog.
I choose to defect.
Kim writes below that "Unquestionably in these one-shot Prisoner's Dilemma situations, cooperation is better." - but I can't bring myself to agree. The case of the chocolate donut is a misleading analogy, because ultimately the value of the donut is purely subjective. We aren't talking about competing for an astronaut slot on Space Station Alpha or the last ticket out of Saigon circa 1975. It's just a donut. The vast majority of Prisoner's Dilemma situations that we come across are of similarly less-than-weighty import, and I think that there is a clear reason to defect rather than cooperate.

I don't think you can unify the demands of morality (which are also subjective) with the simple strategy of playing a PD game. You have to choose. The PD almost always is in conflict with morality, and which choice you make is highly dependent on the desired outcome.
I find most of the attempts to unify morality and PD-strategy to be deeply flawed. These are orthogonal decision-making strategies.
The Platonia Dilemma.
I consider myself a reasonably intelligent guy, with rather strong opinions, and a sufficient amount of self-criticism. Note the presence of moderating adjectives in every sub-phrase - reasonably, rather, sufficient. Like a moth to flame, therefore, I am always drawn to writing by those I admire for the same qualities, only more so (without the adjectives). One of these is Kim, a dear ultra-liberal friend from college, still one of my closest (and always will be). Kim is formidably intelligent, with blazingly strong opinions, and fanatical self-criticism. I admire her greatly, because she tends to write things like this:

I've gotten used to hearing Republican claims and assuming that the claimers are just lying scum out for their own advantage, or people who haven't considered the issues thoroughly. I think most are, but some might just be superrational people who see an economic model that could be workable in a platonic world, just as my very liberal view of people acting decently and working hard just because it's the right thing to do may not play so well in the real world...but it requires my adherence anyway. Might make for more interesting discussions with my right-leaning friends, and non-friends. It's interesting to see that I've been long arguing against voodoo economics based on its inapplicability in the real world (actual rich people are the ones best at never letting a dime escape their clutches, so what exactly would "trickle"?) and getting very angry when people argue that in the real world, lots of people will refuse to do the right thing environmentally or otherwise, so good people get the short end of the stick. I argue "so what," be ethical anyway, and deny that in the real world MOST people are evil. Well, I guess the superrational Reaganomics fans think that surely MOST rich people aren't evil, and enough of them would take their ill-gotten capital gains and do good with them. Interesting.
The reference to "super-rational" is from Douglas Hofstadter's book, Metamagical Themas, in relation to the Prisoner's Dilemma[1]. A super-rational (SR) player is defined recursively as one who assumes the other players are also superrational, and chooses to cooperate in order to maximize gain. Unlike the "iterated" PD, where the best strategy is "TIT FOR TAT"[1], Hofstadter explored the "one-shot" PD, and tried to rationalize a strategy that favors cooperation rather than defection (p.730):
I found that I could not accept the seemingly flawless logical conclusion that says a rational player in a noniterated situation will always defect. In turning this over in my mind and trying to articulate my objections clearly, I found myself inventing variation after variation after variation on the basic situation.
This led Hofstadter to come up with what he called the "Platonia Dilemma", based on the following payoff matrix[2]:
          C          D
C      (3,3)      (0,5)
D      (5,0)      (1,1)

(Rows are your move, columns the other player's; each cell lists your payoff first.)
and he then sent out a letter to 20 of his friends and acquaintances, chosen for their rationality and familiarity with the PD concept, including notables such as Martin Gardner and Bob Axelrod, whose own research (summarized in his amazing book, The Evolution of Cooperation) proved the superiority of the TIT FOR TAT strategy in the iterated PD case. The letter read (in part):
Each of you is to give me a single letter: "C" or "D", standing for "cooperate" or "defect". This will be used as your move in a Prisoner's Dilemma with each of the nineteen other players. The payoff matrix I am using is [see above].
Thus if everyone sends in "C", everyone will get $57, while if everyone sends in "D", everyone will get $19. You can't lose! And of course, anyone who sends in "D" will get at least as much as everyone else will. If, for example, 11 people send in "C" and 9 send in "D", then the 11 C-ers will get $3 apiece from each of the other C-ers (making $30) and zero from the D-ers. The D-ers, by contrast, will pick up $5 apiece from each of the C-ers, making $55, and $1 from each of the other D-ers, making $8, for a grand total of $63. No matter what the distribution is, D-ers always do better than C-ers. Of course, the more C-ers there are, the better everyone will do!
By the way, I should make it clear that in making your choice, you should not aim to be the winner, but simply to get as much money for yourself as possible. Thus you should be happier to get $30 (say, as a result of saying "C" along with 10 others, even though the D-ers get more than you) than to get $19 (by saying "D" along with everybody else, so nobody "beats" you).
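The letter's arithmetic is easy to verify. A minimal sketch (mine):

```python
# Verifying the arithmetic in Hofstadter's letter: 20 players; a C-er earns
# $3 per *other* cooperator; a D-er earns $5 per cooperator and $1 per other
# defector.

def payout(my_move, n_cooperators, n_players=20):
    if my_move == 'C':
        return 3 * (n_cooperators - 1)   # n_cooperators includes myself here
    return 5 * n_cooperators + (n_players - n_cooperators - 1)

print(payout('C', 20), payout('D', 0))   # 57 19 -> all-C vs all-D, as stated
print(payout('C', 11), payout('D', 11))  # 30 63 -> the letter's 11 C / 9 D example
print(payout('C', 6), payout('D', 6))    # 15 43 -> the actual outcome reported below
```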
Hofstadter set this PD up with a clear subtext that cooperation is preferred. Note that the payoff matrix does reward defectors, but only if there are very few. An SR thinker would presumably choose to cooperate to maximize the probability of a large payoff. His expectation was that most would choose to cooperate (p.746):
Any number of ideal rational thinkers faced with the same situation and undergoing similar throes of reasoning agony will necessarily come up with the identical answer eventually, so long as reasoning alone is the ultimate justification for their conclusion. Otherwise reasoning would be subjective, not objective as arithmetic is. A conclusion reached by reasoning would [then] be a matter of preference, not necessity. Now some people may believe this of reasoning, but rational thinkers understand that a valid argument must be universally compelling, otherwise it is simply not a valid argument.
If you'll grant this, then you are 90 percent of the way. All you need ask now is, "since we are going to submit the same letter, which one would be more logical? That is, which world is better for the individual rational thinker: one with all C's or one with all D's?" The answer is immediate: "I get $57 if we all cooperate, $19 if we all defect. Clearly I prefer $57, hence cooperating is preferred by this rational thinker. Since I am typical, cooperating must be preferred by all rational thinkers. So, I'll cooperate."
Italics are his emphasis, underlines mine. A clear flaw is the assumption that all players are SR. I have underlined the parts of his argument where this assumption is explicit. Another clear flaw is the assumption that the "throes of reasoning agony" will be correct - it is entirely possible for a rational thinker to simply be wrong. This can be due to flaws in logic, omission or ignorance of key facts, or flawed assumptions.[3]
When Hofstadter tallied up the responses, he found (much to his chagrin) that there were 14 defectors (each earning $43) and only 6 cooperators (each earning $15). This, despite the fact that he had unconsciously (?) "biased" the sample of participants by selecting his own friends and acquaintances based on his evaluation of their "rationality" - even people like Bob Axelrod and Martin Gardner, who are intimately familiar with the PD and game theory (both of whom chose to defect, BTW). Hofstadter writes:
It has disturbed me how vehemently and staunchly my clear-headed friends have been able to defend their decisions to defect. They seem to be able to digest my argument about superrationality, to mull it over, to begrudge some curious kind of validity to it, but ultimately to feel on a gut level that it is wrong, and to reject it. This has led me to consider the notion that my faith in the superrational argument might be similar to a self-fulfilling prophecy or self-supporting claim, something like being absolutely convinced beyond a shadow of a doubt that the Henkin sentence "This sentence is true" actually must be true - when, of course, it is equally defensible to believe it to be false. The sentence is undecidable; its truth value is stable, whichever way you wish it to go (in this way, it is the diametric opposite of the Epimenides sentence "This sentence is false", whose truth value flips faster than the tip of a happy pup's tail). One difference, though, between the Prisoner's Dilemma and oddball self-referential sentences is that whereas your beliefs about such sentences' truth values usually have inconsequential consequences, with the Prisoner's Dilemma, it's quite another matter.
Again, italics are his emphasis, underlines mine. The key to understanding why seemingly "rational" thinkers could take the same facts and arrive at different results is that reason is not logic. The mechanics and machinery of our intellect are ultimately Gödelian - no system of analytical statements and rules and formal descriptors can ever fully model it. In fact, Hofstadter himself in the same book devotes much time to emphasizing this: that ultimately intellect must be an emergent and statistical property - even creating a wonderful symbolic model called "The Careenium" (a great pun which I won't spoil. Read the book :) to illustrate it.
This is why self-examination is essential. Thought and analysis are ultimately founded upon "gut instinct" as much as pure facts and figures and logic - we cannot escape it. This is also an argument for diversity: by combining the analyses of two people who arrive at different conclusions from different facts, we are able to better triangulate the reality which underlies all of existence, towards which we must all grope, half-blinded, when alone.
Self-examination of the kind that Kim engages in so directly and willingly is essential to improving ourselves and the world. And the lessons of the PD are one such route to that goal. Ultimately, though, we do have to apply reason as we understand it, not as we think others do.
[1] Steven Den Beste had a good article on TIT FOR TAT in the iterated PD (which does NOT apply to the one-shot PD, of course).
[2] I assume the reader is familiar with the basic concept of the PD, the "payoff matrix" representation, as well as the terms "cooperate" and "defect" in that context. If not, I highly recommend Hofstadter's book (Metamagical Themas) or good ol' Google.
[3] This paragraph is self-referential, in classic Hofstadter tradition. :)