Interlocutor: Interesting defense of Double-Effect (although it hardly follows from something’s being innate that it’s true), but the principle is completely irrelevant here. It’s true that destroying the planet would be an action, whereas allowing the planet to evolve would be a “letting-happen.” But this is an action that wouldn’t harm anybody! Double-Effect only comes up when the action under consideration entails a consequence that would be impermissible if intended directly—an action that causes harm. Taking action that prevents harm without causing harm ought to be a moral no-brainer.
But if I blow up the planet, I’ll prevent the occurrence of a lot of good! Think of the frolicking squirrels that never will be, the great art, true love, ice cream (surely they’d invent ice cream!).
OK, enough! Here’s where I’ve landed. I have a set of strong intuitions, none of which I’m prepared to give up, but that appear to entail a judgment about a hypothetical case that I cannot accept. What do I do with that? There’s a name for the situation: I’m experiencing “moral dumbfounding.”
Moral Dumbfounding and Philosophical Humility
by Louise Antony
Interlocutor: Oh come on. It wouldn’t be OK if you did it on your own, but having—what, 100?—other people agreeing with you would make it OK?
The Star Wars Consideration. I am not Darth Vader; I’m on Team Leia.
But I probably won’t stop trying to figure this out, and neither should they (mutatis mutandis). That’s philosophy, and it’s for everyone.
What John’s ingenious puzzle (thanks, John, I suppose) has shown me is that I need to have the humility to recognize that, in this case, I have not found that truth, and that I may not ever find it. And it has also shown me that I need to be more generous to people who are dumbfounded by cases where I happen to have clear and consistent intuitions. This includes students who don’t know what to make of the aforementioned violinists and trolleys, but also fellow jury members, political opponents, vaccine skeptics—you know, other people.
And yet, surprisingly, I found myself quite reluctant to say that I would destroy it. Hmm. What was going on?
Interlocutor: Well, two things. First: we can stipulate that the killing of any living thing counts as harming that thing, but then it would be a separate step to say that all harming is morally wrong. And you, Louise, don’t accept that step! You take antibiotics when the doctor tells you to, and you don’t worry about the “harm” you’re doing to the bacteria!
Second: you are coming dangerously close to endorsing a view that you really don’t want to endorse, viz., that there is some inherent purpose in the universe as a whole. You don’t believe in purposes that are unconnected to creatures who have purposes. Of course, some will say that the “purposes” in question are determined by God. That’s OK for them, but you don’t believe in God. But come to think of it, it’s not OK for them, either: the trouble with God’s purposes is that it is not always obvious what they are; blowing up the planet could be exactly what God would like me to do. This same epistemological problem—which one would have to solve in order to solve the moral problem—besets any naturalistic translation of talk of “purposes.” Discovering that there is naturalistic value in the universe doesn’t tell us the specific value of that planet’s being there. So this is all going nowhere.
Interlocutor: All right, now we’ve come down to it. Despite your protestations that you don’t think it’s morally permissible to “balance” the goods of this Earthly life against the suffering of sentient creatures, it’s now emerging that you do think that the existence of some goods might somehow warrant the suffering their realization would entail.
Let’s get back, then, to the basis of your original intuition: What bothers you about many theodicies is the utilitarian assumption at their hearts: that the suffering of X could be justified by the good it provides for Y. Pace Derek Parfit, you don’t think that interpersonal tradeoffs between suffering and happiness are morally equivalent to intrapersonal tradeoffs across time. Moreover, the moral acceptability of tradeoffs of either kind often depends upon the fact that the agents in question have agreed to the trade. I agree to suffer reading through that last draft of your paper on conceptual engineering for the sake of your philosophical development—whether rational or not, that’s my choice. But the once-frolicking, now-terrified squirrel stuck in the jaws of a ravenous fox (whose own frolicking had just hours before provided me with enormous entertainment—video upon request) has hardly agreed to the arrangement.
Interlocutor: You’ve got to be kidding.
It struck me that, given the principle implicit in my view of theodicies, it was pretty obvious I ought to say that I was not only permitted, but even obliged to destroy the planet. After all, I had committed myself to the view that the amount and intensity of suffering that has transpired on Earth is sufficient to have made the creation of the world morally wrong; oughtn’t I to prevent such suffering if I were able to?
The following is a guest post* by Louise Antony, Professor of Philosophy at the University of Massachusetts, Amherst. It is the fourth in a series of weekly guest posts by different authors at Daily Nous this summer.
Suppose that you knew of a planet, very much like Earth, that was in the very earliest stages of the evolution of life, with maybe just some microbes, but with no lifeforms even close to sentient. Suppose, further, that it was possible for you to figure out—from its extreme similarity to Earth at the same age—that in all probability, sentient life would evolve, much as it did on Earth. Finally, suppose that you were in a position to destroy the planet before any further evolution occurred. Would you be morally permitted to do it? Would you do it?
A few weeks ago, at an excellent conference on the epistemology of religion at Rutgers University, I had a terrific conversation with John Pittard. I had mentioned to him, in an offhand way, that my overall reaction to best-of-all-possible-worlds theodicies was this: if the creation of our universe really necessitated the amount of suffering experienced by sentient creatures on this earth, then God would have had a moral obligation not to create it at all. John then posed the following case to me:
Maybe I’m being too parochial about what constitutes “harm.” I have been assuming that the existence of harm depends on there being sentient creatures whose desires or interests are thwarted in some way. But maybe I’m confusing “harm” with the experience of harm. I don’t think that mosquitoes feel pain, and that’s why I have no compunction about swatting them. But surely it harms a mosquito to kill it.
Something to do with the “doing vs. letting happen” distinction? It’s one thing to not create a planet; it’s a different thing to destroy one. Now I know that it’s controversial whether this distinction is morally important. Consequentialists generally argue that it’s not, and try to explain away the intuition that it is in terms of human squeamishness about “up-close and personal” interactions. I don’t buy those explanations. Experiments by Katya Saunders with young children indicate that kids mark the doing/allowing-to-happen difference, and its moral significance, in situations where there’s no physical contact with anybody. And John Mikhail has presented evidence that the Doctrine of Double-Effect is part of a humanly innate “moral grammar.”
(PS: Thanks, for real, to John Pittard for truly edifying conversation about this and other matters. He is not responsible for anything in the above that is stupid or wrong.)
I bothered John for most of the rest of the conference, musing aloud about this. (I’m sure he regretted ever bringing it up.) What I came up with was absolutely nothing that philosophically justified my hesitation. What the exercise taught me was a lesson in intellectual humility and, just possibly, a new sympathy for the kind of dumbfounding I suspect many of our students experience when faced with a thought-experiment that they cannot rationally critique, but that seems to carry them in a deeply wrong direction.
So, with your indulgence, let me set out some of the factors that popped up when I looked within, along with the reactions of interlocutors, inner and outer.
This just seems like too much responsibility. I’d feel better about destroying the planet if there were some sort of collective decision about it.
Let’s back up—do I really have all the pertinent facts here? How can I be sure that life on this planet will evolve? Or that sentient creatures, if and when they turn up, will experience suffering? And also, how do I know that the explosion wouldn’t hurt other sentient life somewhere else in the universe?
Interlocutor: So now you are engaging in what you disdainfully call “philosophy-avoidance” when your students do it—you are refusing to accept the stipulations of the thought-experiment instead of grappling with the philosophical challenge it poses. Remember this the next time you bring up anemic violinists or out-of-control trolleys.
Philosophers who have discussed this condition disagree over what to say about it: some think that the moral answers in these cases are obvious and that those of us who cannot accept them suffer from a psychological shortcoming that we should get fixed. Others think that moral dumbfounding shows that there is no objective answer to the moral problems that occasion it (or that there are no objective answers to any moral problems). I don’t like either of those strategies. I’m a moral realist; I think the truth is out there.