Saturday, July 9, 2011

Is ignorance the worst evil?

If we care about the common good, then reason will clearly tell us what moral code to follow: it will tell us to follow the rule utilitarian moral code. But if we have no concern for the common good, then reason cannot tell us to follow this moral code (or any other).
Those are the concluding words of a paper in which John C. Harsanyi, one of the greatest economists and philosophers ever, summarized his groundbreaking game-theoretic research in moral philosophy. It probably was not his intention, but those words, I think, sum up the reason why all of the moral codes we have ever evolved are essentially irrational. A rational code of ethics, whether utilitarian or of some other type, is a luxury that we as a species simply could not afford, because reason can guide moral choices only in situations where everyone involved "cares about the common good". For beings who all want to be good but cannot reach a consensus as to exactly what kind of actions make good things happen, rule utilitarianism is indeed the answer. In this type of world, the worst evil is caused by a lack of knowledge of how various possible rules constraining individual behavior affect that behavior in the long run. Religious dogmas are a good example of such ignorance, as are almost all other moral systems of the deontological variety (i.e. those positing that following moral rules is a value for its own sake).

In Harsanyi's world, moral rules evolve not because people think they are an unqualified good, but because too much utilitarianism can be a bad thing. If I gain more utility from using your car than you lose by my stealing it, then simple-minded utilitarianism says it's OK for me to steal your car. But this leads to less utility in the long run than we'd have in a world where it's not OK to steal a car from someone just because you think you'd enjoy it more than the owner. Hence we should have a moral rule saying it's not OK to steal.
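To make the car example concrete, here is a toy calculation (all utility numbers are invented purely for illustration; only the structure of the argument matters):

```python
# Toy numbers for the car-theft example above.
thief_gain = 10        # utility the thief gets from using the car
owner_loss = 8         # utility the owner loses

# Act utilitarianism evaluates the single act by its net utility:
one_theft = thief_gain - owner_loss          # +2, so the act looks "OK"

# Rule utilitarianism evaluates the *rule* that permits such acts.
# Suppose that once theft is generally permitted, everyone must spend
# utility on locks, guards, and mistrust:
population = 1000
insecurity_cost = 5    # invented per-person cost of insecure ownership

rule_outcome = one_theft - population * insecurity_cost

print(f"net utility of the single theft: {one_theft:+}")          # +2
print(f"net utility once the rule is general: {rule_outcome:+}")  # -4998
# The act is utility-positive, but the rule licensing it is hugely
# utility-negative -- hence the rule: it's not OK to steal.
```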

Such concerns, however, are not the main reason why moral rules evolved in our world. In our world, the worst evil is the result not of ignorance but of deceitful malice. In our world, there are those who not only have no concern for the common good, but who actively seek to hurt others while pretending to be good. Harsanyi's world is a coordination game, or at worst a stag hunt, whereas the real world is a prisoner's dilemma. In a repeated prisoner's dilemma, bright-line rules are the only way for a society of cooperators to protect itself from cheaters, even though bright-line rules have dire side effects such as unforgiving dogmatism and lynch-mob mentality. In our situation, those consequences are a price worth paying. Our ethics are indeed irrational, but not because we lack insight; rather, because the worst enemies of our society are not "moral imbeciles" but moral predators.
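The contrast between the two kinds of game can be made explicit. The sketch below uses conventional textbook payoffs (not anything taken from Harsanyi's paper) to show why cooperation is self-sustaining in a stag hunt but not in a prisoner's dilemma:

```python
# Row player's payoffs in two 2x2 games; by symmetry, the column
# player's payoffs are the transpose. C = cooperate, D = defect.
STAG_HUNT = {("C", "C"): 4, ("C", "D"): 0,
             ("D", "C"): 3, ("D", "D"): 3}

PRISONERS_DILEMMA = {("C", "C"): 3, ("C", "D"): 0,
                     ("D", "C"): 5, ("D", "D"): 1}

def best_reply(payoffs, other_move):
    """The move that maximizes the row player's payoff against other_move."""
    return max("CD", key=lambda my_move: payoffs[(my_move, other_move)])

for name, game in [("stag hunt", STAG_HUNT),
                   ("prisoner's dilemma", PRISONERS_DILEMMA)]:
    print(f"{name}: best reply to C is {best_reply(game, 'C')}, "
          f"best reply to D is {best_reply(game, 'D')}")

# stag hunt: the best reply to C is C, so mutual cooperation is a
#   stable equilibrium; the only problem is coordinating on it.
# prisoner's dilemma: the best reply to both C and D is D, so
#   defection dominates and cooperation must be enforced from outside.
```

In a stag hunt, once everyone expects cooperation, no one has an incentive to deviate; in a prisoner's dilemma, cooperation has to be propped up by something external to the game, which is where bright-line rules come in.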

4 comments:

  1. Okay, so just to follow what you say explicitly. You say:

    "For beings who all want to be good but cannot reach a consensus as to exactly what kind of actions make good things happen, rule utilitarianism is indeed the answer."

    So, those beings referenced above are the same ones who do not reach a consensus on the common good, right?

    In the rule-utilitarian world, the worst kind of evil is essentially short-term interests that have a disastrous long-term impact, right?

    Okay, so far I think I follow. Now, you say directly after the above:

    "In Harsanyi's world, moral rules evolve not because people think they are an unqualified good, but because too much utilitarianism can be a bad thing."

    In Harsanyi's world, then, as opposed to a rule-utilitarian world, what is bad isn't the long-term outcomes but, instead, the short- or medium-term results of utility-producing acts (like stealing a car).

    Okay, so far so good, I think. Except that next you say: in "our" world, those concerns aren't the main reason why morality evolved. So, it must be that "our" world is different from H's world and different from a rule-utilitarian world? Why?

  2. So perverse beings who know that they live largely in a utilitarian world can predict other people's behavior to a large extent, and can therefore further their ends more than they otherwise could (i.e., if all others were perverse). The trick is that we, who are non-perverse and knowledgeable about selfish utilitarians with whom we can sympathize, do not know whom we are dealing with unless or until they reveal behavior to us that doesn't fit one of the structures above, in which case we can potentially see them as perverse. BUT, and here's my main concern: how do we know that we don't simply lack the information to understand their behavior from a utilitarian perspective? Maybe we're missing half the picture, and that's why it doesn't make sense?

  3. Okay, I thought you might say that. Here's my objection: if we treat people like they're predatory, then we have no basis for how to treat them at all (because they have no basis for their acts!) until or unless we learn each predator's particular fetish. So I'm very worried that once we start treating everyone as predatory, we ourselves become predatory. Is this a moral distinction rather than an evolutionary one? Can you elaborate, please?

  4. "I'm very worried that once we start treating everyone as predatory, we become ourselves predatory."

    Not necessarily, although it depends, of course, on what we mean by being predatory. Regarding someone as predatory doesn't have to mean we have no basis for how to treat them. I may have given the impression that I thought moral predators (as opposed to "selfish utilitarians") were irrational. That's not the case. Predators are certainly rational, given their, to put it mildly, unorthodox preferences. The thing is that, when they are allowed to act on those preferences unchecked, their behavior is extremely destructive, so much so that it will derail any attempt at establishing a reasonable social contract.

    But they are rational, and as such they respond to incentives, such as credible threats of punishment. So if a society evolves a norm of hypervigilant social punishment, predators can be curbed. And if the threat of punishment is credible, such a society can actually be peaceful: since those who are tempted to transgress norms know that their transgression will be punished, they don't transgress, and no punishment is necessary.

    But it does require a strong norm of social punishment. That is, cooperators must feel obligated to punish not just those who broke a promise made to them personally, but all those who broke any kind of promise to anyone, and also those who at any point failed to punish promise-breakers. In a utilitarian world, a tit-for-tat strategy (I am nice to those who are nice to me and nasty to those who are nasty to me) would be stable. In the presence of predators, tit-for-tat is not enough: everyone needs to monitor people's behavior not only towards themselves but towards everyone else. Or, equivalently, everyone needs to act as if they believed that following moral laws, such as keeping promises, is a good of infinite value. If such a norm evolves, predators' behavior can actually be curbed, and without much (if any) violence. A toy simulation of the difference is sketched below. Of course, you may be arguing that there's something predatory about social punishment itself, in which case I agree.
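    Here is a minimal simulation sketch of that difference (the payoffs and strategies are invented for illustration, not taken from any particular model): tit-for-tat retaliates only for defections against itself, so a predator can collect the temptation payoff from every fresh partner, whereas a shared-reputation norm of social punishment cuts the predator off after its first transgression.

    ```python
    # Toy repeated prisoner's dilemma. A predator (always defect)
    # meets a sequence of tit-for-tat cooperators in turn.
    T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker

    PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
              ("D", "C"): (T, S), ("D", "D"): (P, P)}

    def predator_payoff(n_partners, rounds, social_punishment):
        """Predator's total payoff against n_partners tit-for-tat players.

        With social_punishment=True, partners share reputation: once
        the predator has wronged anyone, everyone defects on sight.
        """
        total = 0
        known_bad = False
        for _ in range(n_partners):
            partner_move = "D" if (social_punishment and known_bad) else "C"
            for _ in range(rounds):
                my_pay, _ = PAYOFF[("D", partner_move)]
                total += my_pay
                partner_move = "D"   # tit-for-tat retaliates next round
                known_bad = True     # word spreads (if anyone is listening)
        return total

    print("vs private tit-for-tat:   ", predator_payoff(10, 5, False))  # 90
    print("vs social-punishment norm:", predator_payoff(10, 5, True))   # 54
    # Without the norm, the predator exploits each new partner once;
    # with the norm, only the very first partner is ever exploited.
    ```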
