
Enlightened self-interest.

It seems to me that Hofstadter was missing a piece, or else that decision-making of this kind is built on iterative layers of rationality. I used to debate with my good friend Nigel about why people choose to do "good." His theory was that doing good results in either material or mental benefits to the do-gooder, so the behavior is inherently selfish; in his view, all behavior is selfish at some level. My take then was that people choose to do good out of adherence to an ethical or moral structure. (Let me pause and define: for the sake of argument, by "good" here I mean "selfless," and the acts in question may or may not actually result in good to others, but the do-gooder must believe that they will, and that belief must be the motivation for the act.)

What I now believe is that adherence to an ethical or moral structure, and choosing to act selflessly in conformance therewith, is just a long-view "enlightened self-interest." On the other hand, I believe as well that some people adhere to their ethics and morals, benefiting others far more than themselves, even without the slightest hope that others will do the same. Some people are willing to "prime the pump" of social behavior for their whole lives, and get shafted as a result. So perhaps those people are engaging in species/genetic enlightened self-interest by defining "self" as broader than the individual human in question.

It's difficult to know if I'm conveying what I'm thinking, so I'll use an illustration. I walk into a donut shop and choose between a glazed and a chocolate glazed donut. If there are ethical implications to that choice, they're difficult to discern. If, instead, I'm neutral as to the choice, but the person behind me is telling her friend that she's been waiting all day to get a chocolate donut, and I see that there is only one left, I have a choice: either benefit her by choosing the other donut, or harm her without benefiting myself. That one is easy. If, instead, I'm partial to the chocolate myself, I have a real choice. Do I satisfy my desire at the expense of the person who got there after me, or do I leave her the donut anyway? No OBVIOUS benefit to me can accrue, since she probably won't even know I've done her a favor. By Nigel's way of thinking, someone whose social bent informs their ethics would leave the chocolate donut for the following reasons: 1. very little harm to self, moderate benefit to the other; 2. a pleasant feeling and a boost to self-worth for doing something "good." I see a third reason: 3. do unto others as you would have them do unto you. A very old formulation, even an "old saw" that we teach to children as a rule to live by...but maybe there's more to it than that.

Unquestionably, in these one-shot Prisoner's Dilemma situations, cooperation is better, at least once you grant Hofstadter's premise that the other players are reasoning exactly as you are, so that the real choice is between everyone cooperating and everyone defecting (a toy payoff table below shows what I mean). In real life, cooperation is better too. The truth lies in a bunch of tired old sayings. The ones about one person making a difference, or every good movement starting with one person: those aren't just optimistic patter, they're true. In every case of a good idea that caught on, someone had to start it. Each of us has hundreds or more choices to make every day, and many of them have an impact on other people. In most of those cases, the choice is between doing good for others and not doing good for others. Many times doing good for others means an apparent direct harm to self and a difficult-to-discern long-term benefit, and often those long-term benefits are bad bets.
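For concreteness, here's a minimal sketch of the payoff structure I have in mind, in Python. The particular numbers (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for the lopsided outcomes) are the usual textbook values and are only my illustration, not anything taken from Hofstadter.

```python
# A toy one-shot Prisoner's Dilemma. The payoff numbers are assumed,
# standard illustrative values, not quoted from Hofstadter's column.
# payoffs[(my_move, other_move)] = (my_score, other_score)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # both do reasonably well
    ("cooperate", "defect"):    (0, 5),  # I get shafted
    ("defect",    "cooperate"): (5, 0),  # I exploit the other player
    ("defect",    "defect"):    (1, 1),  # both do poorly
}

# The usual "rational" argument: whatever the other player does,
# defecting pays me more, so defection dominates.
for other in ("cooperate", "defect"):
    print(f"if the other player {other}s:",
          "cooperate ->", payoffs[("cooperate", other)][0],
          "| defect ->", payoffs[("defect", other)][0])

# The superrational reading: if everyone reasons identically, the only
# reachable outcomes are the two diagonal ones, and cooperation wins.
print("everyone cooperates:", payoffs[("cooperate", "cooperate")])
print("everyone defects:   ", payoffs[("defect", "defect")])
```

Defection "wins" line by line, but once everyone reasons alike, the only outcomes actually on offer are (3, 3) and (1, 1), which is the whole point of the donut, the lunch tab, and the lawn-watering examples that follow.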

Tradeoffs are present in every area of life. Currently, as a nation, we're engaged in a debate over security and freedom, because those things are only marginally compatible. Some of us see no point in security if our freedom is sacrificed; others see compromises worth making on the less-used freedoms. This works the same way. It's easy to make choices that eschew direct benefit when the return from indirect benefit is likely to be high. In other words, if I belong to a small group of friends, and we like to go out to lunch a lot, and we each pick up the tab once in a while, it's nice: it feels good and friendly, lets us have a free lunch those times when we're feeling poor, lets us feel generous those times we're feeling well-off, and saves us all the hassle of splitting the bills. In a group like that, defection is unlikely, and everyone feels they're probably coming out just fine in the end. In larger groups, defection becomes more likely. In vast groups like cities, you get a bunch of people watering their yards in a drought because A. "other people are, why should my lawn suffer?" or B. the net effect is large but the individual effect is minuscule: "what can it hurt? if we're all going to go thirsty, my ten gallons of lawn water won't make a difference."

So when defection is common, and the group is large and somewhat faceless, the decision seems to come down to arbitrarily adhering to an ethic with no apparent self-benefit or choosing the direct self-benefit. If I shop at Wal-mart, I support several conservative causes and several conservatives who I believe are abhorrent. I can choose to go to a local store, benefit a local retailer, support my local economy, and not support school vouchers. I pay a little more to do that, and I may experience a little inconvenience (maybe several stops instead of one). The ethic is obvious here, but it's still very tempting to "defect" and join the masses at Wal-mart. My handful of money doesn't make much difference, but a LOT of people's handfuls of money make a big difference.

So the decision is between two "layers" of rationality, and/or the belief that ethical behavior deserves adherence because, once you know the "right" thing to do, "do unto others as you would have them do unto you" is ultimately a "winning" strategy, at least as good as the one that tries to grab every direct benefit. Does it produce the greatest return in the "real world"? That's hard to say. Each decision is unique, and each person must make each decision on an individual basis. The decisions that leave me shaking my head (eschewing a very large direct benefit, or gambling on a long-term benefit that's a losing bet) are difficult to make, but when I manage to "do the right thing" anyway, there are some comforting things to consider.

I'm a little superstitious. When I flip through the channels on TV, I will sometimes turn off something that attracted my attention simply because it's SO bad (thank you, Fox) that I'm appalled it would be on at all. I might be interested in watching it, but I superstitiously believe that if I turn it off, it's more likely that OTHER people are also turning it off at the same time, as if I could shift the statistics all by myself. If I turn something off, then there are probably a bunch of people similar to me who are also turning it off, not because I influenced them, but simply because if one person does something and a lot of people are in a position to do the same thing, more than one person is probably doing it. It's along the lines of the way politicians tally letters and emails: if one person writes a letter, the politician assumes that some arbitrary number of others felt the same way but just didn't write.

Choosing not to participate in something "everyone does" because it appears to have negative social value is a reasonable decision, even if it involves direct harm and little likely indirect benefit. White plantation owners who freed their slaves before the Civil War suffered enormous financial and social harm, but they did the right thing with little likelihood of indirect benefit. That was a large decision; our lives are mostly composed of small ones. How much easier, then, to laugh, give up the direct benefit (the "rational" choice to defect), and just do the thing we believe would most benefit society and ourselves if everyone else made the same decision? Why play chicken? Why live our lives in an arms race, trying to acquire the most or to prevent anyone else from acquiring more? How does that benefit us? In the final analysis, Nigel may have hit the nail on the head: why not do good for the mental benefit it brings just KNOWING you've done good? Maybe that's enough to compensate for the loss of some other benefit in most cases.

posted by araven | 7.07.2003
