Less Utilitarian Than Thou
I sometimes identify (and get identified by others) as utilitarian or consequentialist. It’s a fair descriptor. I think about morality in terms of how to decrease suffering / fulfill preferences / other stuff which is surprisingly hard to specify.
Sometimes utilitarianism is conceptualized as “being willing to do bad things for the greater good”, so it always surprises me how much less willing I am to do this than most people. Here are some things that many non-utilitarians believe are okay, but which I’m against or at least skeptical of¹:
- Banning “misinformation” or “hateful speech”. This violates the usual moral rule of free speech, to serve the supposed greater good of preventing the spread of bad ideas.
- Forcibly separating children from their families and confining them in a space they’re not allowed to leave (i.e., mandatory public schooling). This violates the usual moral rules against separating families and imprisoning innocent people, to serve the supposed greater good of enculturating or educating the kids.
- Spinning a narrative that plays fast and loose with the truth, in order to avoid “panic” or empowering “the wrong people” - for example, trying to play down concerns about COVID because that might incite mobs to attack Chinese people. This violates the usual moral rule against deception, to serve the supposed greater good of preventing the panic.
- Holding protests that block traffic, damage property, or harass people. These violate the usual moral rule against inconveniencing people, to serve the supposed greater good of raising awareness of a cause.
- Shaming, insulting, and doxxing people on the “wrong side” of an issue. This violates the usual moral rule against bullying, to serve the supposed greater good of discouraging people from taking the “wrong side” of an issue.
These all seem like bright-line cases of violating a sacred principle for the greater good, but for some reason the people worried about “utilitarianism” and “the greater good” never talk about them.
Meanwhile, when I get called a “utilitarian”, it’s most often for wanting policies like these:
- Letting people get paid to donate their organs to solve the organ shortage.
- Supporting people who want to earn more money (ethically and legally) and donate it to charity.
- Allowing (voluntary) genetic engineering and embryo selection to prevent genetic disease.
- Slashing pharmaceutical regulations that kill more people than they help.
- Focusing more on preventing existential risks that could kill billions of people.
I can sort of see why people think these have a vibe of “greater good” reasoning around them. Voluntary organ donation is a slippery slope to coerced organ donation; earning money ethically to give to charity is a slippery slope to earning it unethically; genetic engineering isn’t necessarily unethical, but it’s at least creepy. Still, these are at best vaguely connected to the idea of violating ethical rules for the greater good - which makes them much less bad than the first list, whose items break bright-line rules and directly use greater-good reasoning.
So why do people think of utilitarians as uniquely willing to do evil for the greater good, and of normal people practicing normal popular politics (like the items on the first list) as not willing to do that?
I think people are repulsed by the idea of calculating things about morality - mixing the sacred (human lives) with the profane (math). If you do this, sometimes people will look for a legible explanation for their discomfort, and they’ll seize on “doing an evil thing for the greater good”: even if the thing isn’t especially evil, trying to achieve a greater good at all seems like a near occasion of sin.
The normal popular-politics actions are mostly about manipulating a narrative, promoting an ideology, or suppressing dissent². This all feels so normal to people (who might themselves want to promote an ideology, or who are at least used to other people wanting to) that it isn’t scary, and it doesn’t feel like the dreaded “doing an evil thing for the greater good”. It’s not especially moral, or especially calculated, so people let it pass - even though, if you forced them to consider the question explicitly, they would say that saving lives is a more compelling goal than manipulating a narrative is.
1. There’s a sense in which all policies sacrifice something for the greater good. Being pro-gun-control sacrifices the right to bear arms for the greater good of fewer deaths; being anti-gun-control sacrifices some lives to protect the right to bear arms. Being pro-life sacrifices women’s health and convenience for the greater good of saving babies; being pro-choice sacrifices embryos for women’s health and convenience. I don’t find this sense very compelling, because nobody previously decided that (e.g.) the right to bear arms was sacred but wanting fewer deaths wasn’t; it’s just trading off two equally-trade-off-able goods. I tried to construct this list out of cases where one side of the tradeoff is clearly a sacred rule.
2. Another explanation I considered was that normal people are okay with governments making these greater-good tradeoffs, but not ordinary individuals. But after more thought, I don’t think that works. Many normal political people are okay with ordinary individuals unilaterally choosing to shame people on the wrong side of an issue. And if a government were to institute eugenics in a calculated way, they would consider that the bad kind of “greater good” reasoning.