Wednesday, April 27, 2011

Asking too much of morality


Andrew Sullivan has a pretty interesting response to Sam Harris' claim that we should not rule out torture as an ethical choice in certain extreme circumstances.

When I look at this sort of position--that there are some "ticking-time-bomb" scenarios in which torture should be sanctioned--I start to ask myself whether we're asking more of morality than it can give us. After all, human society is a big, complex, organic system. These kinds of systems defy description by analytically rigorous and consistent first principles--you see this all the time in our efforts to model or duplicate things with computers: the more organic something is, the harder it is to duplicate. Figuring out chess moves and modeling proteins come relatively easily, but things like natural language and even just simple walking turn out to be the difficult problems. With this in mind, it seems weird to me that we would expect an analytic, derived-from-first-principles ethical system that provides a solution to every possible human choice and dilemma. In other words: even granting that there is one true morality, it's weird to think that this morality is complete in the sense used to describe search algorithms (an algorithm is complete if it's guaranteed to find a solution whenever one exists).
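The search-algorithm sense of "complete" can be made concrete with a toy sketch (the infinite-binary-tree state space and function names here are purely illustrative, not from any particular source): breadth-first search is complete on this space--any reachable goal is eventually found--while a naive depth-first search can chase one infinite branch forever and never reach a goal sitting one step from where it started.

```python
from collections import deque

def neighbors(n):
    # Infinite binary tree over the positive integers:
    # node n has children 2n and 2n+1, forever.
    return [2 * n, 2 * n + 1]

def dfs(start, goal, max_steps=1000):
    # Depth-first search that always follows the first child.
    # NOT complete: on an infinite tree it dives down one branch
    # (1, 2, 4, 8, ...) and, within any step budget, never backs up
    # to try a goal waiting on a sibling branch.
    stack, steps = [start], 0
    while stack and steps < max_steps:
        node = stack.pop()
        steps += 1
        if node == goal:
            return node
        stack.extend(reversed(neighbors(node)))
    return None  # gave up, even though the goal may be reachable

def bfs(start, goal, max_steps=1000):
    # Breadth-first search IS complete here: every node at finite
    # depth gets expanded after finitely many steps, so any
    # reachable goal is found.
    queue, steps = deque([start]), 0
    while queue and steps < max_steps:
        node = queue.popleft()
        steps += 1
        if node == goal:
            return node
        queue.extend(neighbors(node))
    return None
```

Here `dfs(1, 3)` comes up empty even though 3 is a direct child of the start node, while `bfs(1, 3)` finds it immediately: the answer exists, but one procedure simply can't get to it--which is the sense in which a system of rules can be correct and still incomplete.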

What this means in practice is what we also find to be pretty obviously true: though moral principles guide us through life in general, they don't perfectly apply to every situation, and sometimes remain maddeningly silent on the best course of action. We tend to think of a "moral dilemma" as something we can find our way out of by applying some code of morality ever more judiciously, but it seems to me what causes the dilemma in the first place is the absence of clearly applicable morals. When we're in a dilemma--like the ticking-time-bomb scenario--we're beyond morality's ability to help us. There is no "morally right" choice--there's just muddling through.

Note that what I'm saying here isn't that morality is relative or anything like that. It's as universal as you like. It's just that it's incomplete. We're used to the idea that for every choice, our morality can tell us which one is least bad--that morality always shows us the way out. But maybe there are times when there is no way out. There are times when you are--for lack of a better word--fucked. There are times, for example, when you are President and morality tells you that it is wrong to incinerate thousands upon thousands of innocent Japanese people, and yet you do it anyway. Morality tells you to do A, and yet no one in their right mind would do A. Does this mean the moral code that told you to do A can't be right? Only if you assume that the right moral code must be complete. But if you do away with that, another answer suggests itself: you had been ethically checkmated. There was no way out. You were simply fucked.

If this sounds like defeatism, I think things make a little more sense when we think about the crucial role that wisdom must play in exercising morality. Sometimes you end up in a dilemma--an ethical cul-de-sac--out of bad luck. Ask any hard-boiled noir detective; he'll tell you all about it. But sometimes you end up there because of your own foolishness and immorality. (Or maybe it's a little of both.) You can't just run amok and expect to be able to consult your little book of moral rules whenever you have to make a decision. You need to have the wisdom and foresight to avoid getting trapped in those cul-de-sacs in the first place. If you're in a ticking-time-bomb situation, the real question isn't what the moral thing to do is--morals don't matter, you're going to torture the guy no matter what--the real question is, how did we get here? This guy's about to nuke a city. Where was the nuclear nonproliferation strategy that would have stopped this from happening? What about other security measures? Why is this guy so pissed off that he wants to blow up a city?

To me the real sign that a code of morality is correct is that when you zoom out to the macro level and consider the events leading up to these impossible, no-way-out quandaries, it turns out that morality and wisdom work in harmony. It turns out that, if we had only adhered more closely to our code of morals, it would have led, in the long run, to wiser choices, and we would have avoided the ethical cul-de-sac altogether.

A moral code giving you the wrong answer in a highly specific, constrained, and contingent scenario doesn't necessarily mean that it isn't the right code; it could just mean that, through some combination of immorality, foolishness, and plain bad luck, it's already too late and there's no way out. You muddle through as best you can and at the end someone mutters in your ear, "Forget it, Jake. It's Chinatown."

1 comment:

Alex said...

This is awesome. Way to unify.

I remember a conversation I had with a friend of mine a few years ago, regarding ticking-time-bomb torture. We were discussing the legality, not the morality, but there's some crossover. My opinion was that torture should never be legal, but that if the situation really demanded it, someone in the right position should just do it anyway. If the situation were really so dire, shouldn't there be at least one person willing to risk the consequences to their own life? The idea to me was that the legal code should embody the principles that we stand by, but not literally define the bounds of our actions. That's one of the advantages of living under a lenient, humanistic government: we leave room for the fallibility of the laws themselves.

Anyway, he thought that this was bizarre, to consciously build a set of rules into our government that would `enforce' the wrong thing in certain explicit circumstances. But I can't believe I never thought of the analogy to incompleteness. That seems so obvious now. Morals are basically a system of axioms from which right/wrong statements are meant to be inferable. If anything, a moral system seems to be an augmented sort of mathematical system, one that requires the ability to make essentially any logical claim, as the basis of the normative claims. But already here, we're screwed! Who knows what kinds of crazy unprovable moral statements you can construct, now that we have all these additional primitives.

Admittedly, that's probably not the kind of thing you're talking about. Whether to torture in a scenario probably isn't some subtly constructed self-referential statement incapable of proof under any set of moral axioms. Even still! It's an interesting way to think about morality. Like you said, it's not necessarily the case that morality is relative - you might think that there's a `true' set of axioms - but just that it doesn't, and couldn't, cover everything. Which basically amounts to saying, if you believe that those axioms are correct, that certain situations really don't have any moral value at all.