
Return to Super-Rational


I choose to cooperate.

I do so not because I think it will necessarily bring me the greatest benefit in any given instance, but because it's the right thing to do, and I was taught to do the right thing.

I must grant the possibility (to Kim & Nigel) that if one were to psychoanalyze me, I may only be "doing the right thing," even when it doesn't appear to be in my self-interest, because I'm subconsciously performing a hedonistic calculus that puts "feeling superior" for having done the right thing on the positive side of the ledger, compensating for whatever external loss I may be facing. In that case, I'm still acting purely in my own self-interest, just with my own personal, internal, subjective pay-off matrix. It's quite hard to say.

It's also possible that I'm choosing to cooperate not based on any extensively reasoned basis at all. As Daniel Dennett argues compellingly in Darwin's Dangerous Idea, Herbert Simon's method of "satisficing" is the only practical way for a limited being such as a human to come to a decision in the first place. Reasoning through all the possibilities is, for most purposes, impossible. There's simply not enough time to calculate all the effects of a particular line of conduct. Humans have therefore developed a lot of shortcuts and "conversation stoppers" to end discussion and choose a path. As Aziz says, (human) reason is not logic. Suboptimal as it may be, our shortcut, satisficed decisions are "close enough" and better than the paralysis by analysis that gives us no decision at all. (Sabertooth tiger approaching. Hmmm. If I were to climb a tree I could escape being eaten which would be good? Which tree? Well, that one's too small. That one's too slippery. If only I had a large club with me. Perhaps I could fashion one out of some nearby debris. What if we invented a sort of poking device with a long stick and a sharpened rock? Eaten! Should’ve just RUN!)

Morality, a set of prescriptive rules handed down from generation to generation, is one of those satisficing intellectual shortcuts, a conversation stopper that keeps us making fairly successful decisions without laboring over them each time. It's what gives me that sense of "right" that impels me to cooperate. However, if we accept the Darwinian framework, the fact that I was taught that cooperating was "the right thing" needs some explanation. For survival purposes, one would expect individuals to always act only in their own self-interest, which would on the surface imply non-cooperation. However, as Robert Axelrod has so elegantly shown using the iterated prisoner's dilemma (IPD), cooperation is an evolutionarily stable strategy that explains the existence of "altruism" on the very basis of self-interest. It is, as Kim says, the long view of "enlightened" self-interest.

In the single-shot PD, however, the opposite is true and self-interest favors defection. In the everyday course of life, though, we more often encounter variations of the IPD, so we should be more inclined to act in accord with the "tit-for-tat" strategy of cooperating but punishing defections than with the purely selfish calculus of the one-shot pay-off matrix. One of the rules of that tit-for-tat strategy is to cooperate first, defecting only to punish past defections. Perhaps that's another reason why I choose to cooperate. While I may consciously know that I would be better off financially defecting in a one-shot PD situation, do I really know it's a one-shot PD? Even if I'm guaranteed never to meet the same actors in the same situation again, who's to say that I won't face off against someone watching this match in the future? People have long memories and don't often forget cheating. In an IPD world, grudges have useful survival benefit.
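
The one-shot versus iterated contrast can be sketched in a few lines of Python. This is a minimal illustration using the standard payoff values from Axelrod's analysis (3 for mutual cooperation, 5 for the temptation to defect, 1 for mutual defection, 0 for the sucker's payoff); the strategy functions and round counts are my own illustrative choices, not anything from Axelrod's actual tournaments.

```python
# Minimal prisoner's dilemma sketch with the standard payoff values.
PAYOFF = {  # (my move, their move) -> my points
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first; thereafter, echo the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Return total scores for a and b over an iterated match."""
    a_hist, b_hist = [], []  # each side's record of the opponent's moves
    a_score = b_score = 0
    for _ in range(rounds):
        a_move, b_move = strategy_a(a_hist), strategy_b(b_hist)
        a_score += PAYOFF[(a_move, b_move)]
        b_score += PAYOFF[(b_move, a_move)]
        a_hist.append(b_move)
        b_hist.append(a_move)
    return a_score, b_score

# Single shot: defection exploits a cooperator (5 > 3).
print(play(always_defect, tit_for_tat, 1))      # (5, 0)
# Iterated: mutual tit-for-tat far outscores mutual defection.
print(play(tit_for_tat, tit_for_tat, 100))      # (300, 300)
print(play(always_defect, always_defect, 100))  # (100, 100)
```

Note that tit-for-tat never "wins" an individual match against a defector; it prospers across a population because pairs of cooperators rack up 3 points a round while pairs of defectors scrape by on 1.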

The reason Hofstadter's colleagues disappointed him in his Platonia Dilemma experiment is that, unfortunately, we don't live in Plato's perfect world of forms. People are not perfect. We are not (all) superrational. We make decisions by satisficing. As Aziz says, it's entirely possible for even rational thinkers to be wrong. This mental defect (a side effect of being limited beings, really) is a basic assumption that must be included in any discussion regarding human action. When we do include this assumption and recognize that not everyone will reason the same way we do, we end up with the "reverberant doubt" that causes defection in the Platonia Dilemma: Someone's going to defect. Someone's going to cheat! Fully understanding the implications of that assumption reveals the foolishness of both the trickle-down economics that Kim rails against and the expectation that any other group will consistently "do the right thing" when it's not in their self-interest (economic or otherwise) to do so. (Kim & I being the superrational exceptions to the rule. Or more likely, just the exceptions to the rule based on our personalities or the good job our parents did teaching us right from wrong or some other factor that has nothing to do with Hofstadter's notion of superrationality.) But regardless of how many "good apples" cooperate, someone's going to cheat.
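
The "someone's going to cheat" intuition is just compounding probability. As a hypothetical illustration (the 5% figure is invented, not anything from Hofstadter): if each of n players defects independently with probability p, the chance that at least one defects is 1 - (1 - p)^n, which races toward certainty as the group grows.

```python
# Hypothetical numbers, just to illustrate "someone's going to cheat":
# each of n players defects independently with small probability p.
def someone_defects(n, p):
    """Probability that at least one of n players defects."""
    return 1 - (1 - p) ** n

for n in (5, 20, 100):
    print(n, round(someone_defects(n, p=0.05), 2))
# 5 0.23
# 20 0.64
# 100 0.99
```

Even a modest 5% chance of defection per person makes a defector nearly certain in a group of a hundred, which is exactly the reverberant doubt that unravels the Platonia Dilemma.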

Given that, the answer to the conflict between morality and self-interest is not to fight human nature but to change the rules of the game. Aziz is correct when he says the demands of the PD and morality are often in conflict.* In particular, they are in conflict in the single-shot PD scenario. So, we need to avoid single-shot PDs whenever possible. We must avoid "tragedy of the commons" situations (whose pay-off matrix is identical to the PD and results in the same kind of defection) by imposing external regulations or costs that change the matrix to favor cooperation. This is how we protect the "do-gooders" from the less scrupulous "bad apples" who act only in their short-term self-interest. We invent strategies that maximize cooperation for non-zero-sum benefit. We carefully constrain zero-sum interactions. We find the defectors and punish them.
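
Here is a toy sketch of what "changing the rules of the game" means in pay-off terms: impose an external fine on defection and check when cooperation becomes the strictly dominant move. The payoff numbers are the standard PD values; the fine mechanism is my own illustration, not a policy proposal.

```python
# Standard PD payoffs: (my move, their move) -> my points.
BASE = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def dominant_move(fine):
    """Return 'C' or 'D' if one move is strictly better against both
    opponent moves once defection carries an external fine, else None."""
    def payoff(me, them):
        p = BASE[(me, them)]
        return p - fine if me == "D" else p
    if all(payoff("D", them) > payoff("C", them) for them in "CD"):
        return "D"
    if all(payoff("C", them) > payoff("D", them) for them in "CD"):
        return "C"
    return None  # neither move strictly dominates

print(dominant_move(0))  # 'D' -- the unregulated commons
print(dominant_move(3))  # 'C' -- a stiff enough penalty flips the matrix
```

With these numbers, any fine greater than 2 (the gap between the temptation payoff 5 and the reward 3) makes cooperation strictly dominant, which is the structural fix the paragraph above describes: leave human nature alone and rewrite the matrix instead.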

Of course, that leaves open the question of what we should do when we find ourselves in a single-shot PD situation. How do we act when the two are in direct conflict and we have no recourse to external structural changes that would alter the pay-off? How do we choose between "doing the right thing" and acting in our own self-interest? Between morality and the PD pay-off matrix?

What do we do? We satisfice. Aziz defects, knowing the cruel logic of the PD. Kim & I cooperate, whether claiming the hedonic benefit of the moral high ground, slavishly following our internalized morality, or allowing for the more pragmatic possibility that there will be another match to play and that what now appears to be a single-round PD is in fact an IPD in a larger context.

Given the particular situation and the particular pay-off, I expect that any of us may flip-flop our decision for one (satisficed and limited) reason or another.

*Postscript: While I agree that in the single-shot PD, the PD pay-off and the demands of morality are in conflict, that's not the case in the IPD. I'm not sure what Aziz is referring to when he finds "attempts to unify morality and PD-strategy to be deeply flawed." The analyses I've read from Axelrod (cited by Aziz), reiterated and expanded on by Robert Wright (The Moral Animal, Nonzero), are explanations of the appearance of cooperation and morality based on the survival advantage granted by such win-win behavior in the commonly occurring IPD scenario. I've not heard of anything that equates defection in the single-shot PD with the moral choice of "doing the right thing," which is what Aziz seems to imply when he calls them orthogonal decision-making strategies.

I also would quibble with Aziz's assertion that the demands of morality are subjective. While I can certainly agree that the body of rules handed down from generation to generation is subjective in its purported moral content, there is a sense in which, to the extent those rules confer survival benefit, support the growth of life, and open up potential, they have "objective" value. I believe that the naturalization of ethics through the IPD points a way to a non-subjective morality, but that's a matter for a different blog.

permalink | posted by Keith Gillette | 4.05.2004 |
