Wednesday, April 27, 2011

Asking too much of morality


Andrew Sullivan has a pretty interesting response to Sam Harris' claim that we should not rule out torture as an ethical choice in certain extreme circumstances.

To me, I look at this sort of position--that there are some "ticking time-bomb" scenarios in which torture should be sanctioned--and I start to ask myself if we're asking more of morality than it can give us. After all, human society is a big, complex, organic system. These kinds of systems defy description by analytically rigorous and consistent first principles--you see this all the time in our efforts to model or duplicate things with computers: the more organic something is, the harder it is to duplicate. Figuring out chess moves and modeling proteins come easily, but things like natural language and even just simple walking turn out to be the difficult problems. With this in mind, it seems weird to me to expect that there would be an analytic, derived-from-first-principles ethical system that perfectly provides a solution to every possible human choice and dilemma. In other words: even granting that there is one true morality, it's strange to think that this morality is complete in the sense used to describe search algorithms (an algorithm is complete if it is guaranteed to find a solution whenever one exists).

What this means in practice is what we also find to be pretty obviously true: though moral principles guide us through life in general, they don't perfectly apply to every situation, and sometimes remain maddeningly silent on the best course of action. We tend to think of a "moral dilemma" as something we can find our way out of by applying some code of morality ever more judiciously, but it seems to me that what causes the dilemma in the first place is the absence of clearly applicable morals. When we're in a dilemma--like the ticking time-bomb scenario--we're beyond morality's ability to help us. There is no "morally right" choice--there's just muddling through.

Note that what I'm saying here isn't that morality is relative or anything like that. It's as universal as you like. It's just that it's incomplete. We're used to the idea that in every choice, our morality can tell us which option is least bad--that morality always shows us the way out. But maybe there are times when there is no way out. There are times when you are--for lack of a better word--fucked. There are times, for example, when you are President and morality tells you that it is wrong to incinerate thousands upon thousands of innocent Japanese people, and yet you do it anyway. Morality tells you to do A, and yet no one in their right mind would do A. Does this mean the moral code that told you to do A can't be right? Only if you assume that the right moral code must be complete. But if you do away with that assumption, another answer suggests itself: you had been ethically checkmated. There was no way out. You were simply fucked.

If this sounds like defeatism, I think things make a little more sense when we consider the crucial role that wisdom must play in exercising morality. Sometimes you end up in a dilemma--an ethical cul-de-sac--out of bad luck. Ask any hard-boiled noir detective--he'll tell you all about it. But sometimes you end up there because of your own foolishness and immorality. (Or maybe it's a little of both.) You can't just run amok and expect to be able to consult your little book of moral rules whenever you have to make a decision. You need to have the wisdom and foresight to avoid getting trapped in those cul-de-sacs in the first place. If you're in a ticking time-bomb situation, the real question isn't what the moral thing to do is--morals don't matter, you're going to torture the guy no matter what--the real question is, how did we get here? This guy's about to nuke a city. Where was the nuclear non-proliferation strategy that would have stopped this from happening? What about other security measures? Why is this guy so pissed off that he wants to blow up a city?

To me, the real sign that a code of morality is correct is that when you zoom out to the macro level and consider the events leading up to these impossible, no-way-out quandaries, it turns out that morality and wisdom work in harmony. It turns out that, if we had only adhered more closely to our code of morals, it would have led, in the long run, to wiser choices, and we would have avoided the ethical cul-de-sac altogether.

A moral code giving you the wrong answer in a highly specific, constrained, and contingent scenario doesn't necessarily mean that it isn't the right code; it could just mean that, through some combination of immorality, foolishness, and plain bad luck, it's already too late and there's no way out. You muddle through as best you can and at the end someone mutters in your ear, "Forget it, Jake. It's Chinatown."