Altruism And Vitalism As Fellow Travelers
Some commenters on the recent post accused me of misunderstanding the Nietzschean objection to altruism.
We hate altruism, they said, not because we’re “bad and cruel”, but because we instead support vitalism. Vitalism is a moral system that maximizes life, glory, and strength, instead of maximizing happiness. Altruism is bad because it throws resources into helping sick (maybe even dysgenic) people, thus sapping our life, glory, and strength.
In a blog post (linked in the original post, discussed at length in the comments), Walt Bismarck compares the ultimate fate of altruism to WALL-E: a world where morbidly obese humans are kept in a hedonistic haze by robot servitors (although the more typical example I hear is tiling the universe with rats on heroin, which maximizes a certain definition of pleasure). In contrast, vitalism imagines a universe alive with dynamism, heroism, and great accomplishments.
My response: in most normal cases, altruism and vitalism suggest the same solutions. The two diverge from each other in Extremistan, but in Extremistan each one also diverges from itself, shattering into innumerable incoherent and horrible outcomes. So we should mostly concentrate on the normal cases where they converge. I’m suspicious of anyone who gets too interested in the extreme divergent cases, because I think many of these people are actively looking for trouble (eg excuses for cruelty) and should stop.
Going through each sentence of this summary in order:
In Most Normal Cases, Altruism And Vitalism Suggest The Same Solutions
Define altruism as “try to increase happiness and decrease suffering across a society” and vitalism as “try to increase strength and decrease weakness across a society”, where “strength” is defined as ability to achieve goals (and, in a tie-breaker, ability to win wars).
Most things that do one also do the other:
- Curing disease. A healthy society is both happier and stronger than a sick society.
- Increasing wealth. A rich society is both happier and stronger than a poor society.
- Advancing technology. An advanced society has more ways to make its people happy and to win conflicts than a primitive society.
- Saving lives. This is altruistic by definition. But living people are generally better at achieving their goals than dead people, and a society with more living people is stronger than one where lots of people have died (eg they can field bigger armies). There are some exceptions - it’s not altruistic if the people are suicidal, and it’s not vitalist if the people are useless parasites - but in most cases the goals converge.
- Winning wars. This is vitalist by definition. But if you think your country is in the right, its victory will make the world better and increase utility.
In the general case, people prefer to be powerful/strong/winning, so by improving their strength you make them happier, and by satisfying their preferences you make them more powerful.
The Two Diverge From Each Other In Extremistan, But In Extremistan They Also Diverge From Themselves
If we take a hyperspecific definition of altruism (“creating as much happiness as possible”), ignore all second-order effects, and extend it to infinity, it could lead to morbidly obese people on heroin drips.
(though if we already have everyone connected to an IV drip, realistically we can just add some Ozempic. There, see, I’ve already made this 20% less dystopian.)
What’s the equivalent for vitalism? Suppose we took a hyperspecific definition of vitalism (“building as many tanks as possible”). Soon we’re bulldozing the cathedrals to build more tank factories, breaking up happy families to send kids to the iron mines, and ripping scientists from their blackboards to work as tank gunners.
(if your objection is that vitalism is more about overcoming challenges than about military strength per se, you can replace this with those same WALL-E robots, only now you’re on a testosterone drip and they’re whipping you twenty hours a day to force you to try to lift a weight 0.01 kg heavier than your previous personal best.)
(if your objection is that vitalism is more about aesthetics/beauty than strength, you can replace this with a robot churning out one billion extremely beautiful marble statues per second somewhere in the Andromeda Galaxy, with humankind long since extinct.)
Aren’t these fake strawman versions of vitalism which nobody actually believes? Yes, kind of. But anything at this level of specificity will also be fake. What vitalists mean is something like “I want this vision of a flourishing society that I’m imagining right now, and it’s got lots of strength and heroism and overcoming challenges and stuff”. If you try to specify it as a function that can be easily maximized, then extend that function to infinity, it sounds like a joke.
But the same is true of altruism. When people talk about altruism, they imagine some flourishing society free from sickness and poverty and suffering. “Minimize suffering” is their description of one vector that gets you there, but minimizing suffering directly and monomaniacally to infinity will take you someplace weird. That’s not a problem with altruism, it’s a problem with infinity.
Both altruism and vitalism, if you let their proponents describe them in a hand-wavey way, sound pretty good. Both altruism and vitalism, if you demand a strict objective definition and then extend it to infinity, become crazy and undesirable. If you let the vitalists hand-wave and stay finite, but demand strict objectivity and infinitizability from the altruists, then vitalism will look better than altruism. So what? So don’t do that.
But also, both of these scenarios ignore second-order effects. For example - aren’t happy families good, even to monomaniacal tank-builders, because they raise well-adjusted children who can build the next generation of tanks? Aren’t scientists good, because even the most theoretical among them may one day hit on an advance which could prove useful for tank construction? Aren’t cathedrals good insofar as they fill us with awe of God, who commands us to build more tanks?
All of this sounds obvious when we talk about tank-building. But the same is true of altruism. Don’t we need well-adjusted children to staff the next generation of hospitals and charities? Don’t we need scientists to discover the next generation of antimalaria drugs and safe AIs? Aren’t cathedrals good insofar as they fill us with awe of God, who commands us to love thy neighbor as thyself?
Sure, you can imagine the world in which we’ve maxed out the tech tree and invented robots that can act without inspiration, and all that’s left is to connect humans to the heroin drips. But in the same world, all that’s left is to send the robots to staff the tank (or marble statue) factories. In any realistic world, where there’s still new technology to discover and new generations to raise and inspire, both of these goals create a recognizable society with most good things.
This is kind of a patch: in a perfect world, we want a moral philosophy that survives maxing out the tech tree and having infinite robots, so that we can be prepared for the post-progress infinite wealth far future (ie 2045). No existing philosophy is up to this task, but vitalism doesn’t solve this any better than altruism.
(this picture is slightly complicated by the fact that some altruists will actually endorse the heroin drip world. I’m not sure where I fall here - see here for more. I would prefer eternal ecstatic bliss to some world where we all have to fight a bunch of meaningless wars against each other just so we can check off “be strong and have wars” on a meaningless Vitalism To-Do Checklist. But I’m holding out for something better than either.)
I’m Suspicious Of People Who Talk Too Much About The Divergence In Normal Cases
I see two common arguments for why altruism and vitalism are divergent even in normal cases.
First, the cuckoo clock argument. The famous version, from Orson Welles’ The Third Man, goes:
In Italy for 30 years under the Borgias they had warfare, terror, murder, and bloodshed, but they produced Michelangelo, Leonardo da Vinci, and the Renaissance. In Switzerland they had brotherly love - they had 500 years of democracy and peace, and what did that produce? The cuckoo clock.
Isn’t there some sense where conflict (which is bloody and full of suffering) produces progress and strength? And doesn’t that mean that altruists should oppose conflict, but vitalists should promote it?
I’m skeptical of this argument. America’s been at peace since World War II (foreign adventures like Vietnam haven’t substantially changed our national experience) and produced the computing revolution, the Internet, AI, the moon landing, the Human Genome Project, antiretrovirals, the microwave, the laser, the smartphone, and the reusable rocket. During that time, Iraq has had approximately eight major wars and didn’t even get a cuckoo clock out of it.
Is this an unfair comparison, since America has 8x Iraq’s population? No more so than Welles’ own comparison (Italy has 8x the population of Switzerland). But beyond the specifics, I think the comparison is useful for shocking us into Near Mode. War isn’t actually that great for science, art, or the economy. I’m not expecting Russia or Ukraine to leapfrog the rest of the world any time soon. I’m expecting them to fall further and further behind until the war ends, at which point maybe they’ll get a chance to catch up.
This isn’t to say there’s no advantage of conflict. Capitalism is a kind of conflict and was responsible for many (though not all) of the inventions mentioned above (but do remember that Bell Labs was famously productive precisely because it was a monopoly). The Cold War also inspired both the US and Russia to do some good work (as well as inspiring both to waste trillions of dollars on useless one-upsmanship and arms races). There’s some evidence that the most heavily-bombed areas of Britain and Japan are richer today (because they were able to build back from first principles instead of being limited by existing infrastructure). But this is a pretty far cry from saying that war is generally good.
I think both altruists and vitalists have a shared interest in figuring out the structures (capitalism? monopoly? friendly rivalry?) that maximize progress without devolving into anyone actually getting nuked.
The second divergence argument I hear is “suffering builds character” or “suffering is responsible for the spark of greatness”.
An obvious counterexample to this is all the extremely successful people from privileged upbringings. Bill Gates, Steve Jobs, and Mark Zuckerberg all had great childhoods. So did Caesar and Napoleon. So did Einstein and von Neumann. Meanwhile, there are millions of poor people and war victims who have lived lives of constant horrible trauma without much benefit. If success and creativity were proportional to suffering, the West would have to ban refugees from the Gaza Strip, lest they take all the spots in the best colleges and form an elite billionaire overclass.
Here I’ll also refer back to my old post on Jo Cameron, a Scottish woman with a rare genetic mutation that makes it impossible for her to suffer. She cannot feel pain, anxiety, fear, or any other negative emotion. As far as anyone can tell, she is completely normal. She is a successful wife, mother, and teacher, generally considered well-liked and excellent at her job. She may not have achieved greatness, but I once talked to someone impressive who you’ve probably heard of (I don’t have permission to share their name) who seems to have a lesser version of the same condition.
This isn’t to say that there isn’t some level of spoiling that can mess someone up. I just think it’s more than Mark Zuckerberg (who was raised by two loving parents in an upper class suburb and went to prep school) got. If an altruist’s goal is to give everyone the equivalent of a childhood raised by loving parents in a happy suburb with great schools, I don’t think a vitalist can complain.
I think altruists and vitalists have a shared interest in figuring out what kind of experiences are best at making people more resilient and ambitious, but I don’t think the answer will look like “we need to dial up the pain and suffering in some scattershot global sense”.
(Further, More Specific Examples)
Other people make more specific claims about the divergence between altruism and vitalism. For example, effective altruists often spend money curing/preventing malaria in Third World countries. Isn’t this “dysgenic”? Doesn’t it waste money on weak people who can’t take care of themselves?
The average person who dies from malaria is a 3-4 year old child. Children don’t die of illnesses because they’re “dysgenic” or “weak”. They die because their immune systems haven’t developed yet.
Isn’t this still just adding extra bodies to Third World countries that already can’t take care of themselves, thus making the world worse off rather than better off? Not really. The average Kenyan makes $2,000 per year. If you spend $4,000 to save the life of one Kenyan, and they work for thirty years, they produce $60,000 of output, a net contribution of $56,000 to world GDP. This is probably more than you could contribute by trying to save First Worlders (who make more money, but whose lives are much more expensive to save).
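As a back-of-envelope sketch, this is the whole calculation. The $4,000 cost-per-life and $2,000 income figures are the rough illustrative numbers from the paragraph above, not precise charity-evaluator estimates:

```python
# Back-of-envelope estimate using the post's illustrative numbers.
# These are rough assumptions, not real cost-effectiveness figures.

cost_to_save_life = 4_000   # dollars spent to avert one malaria death (assumed)
annual_income = 2_000       # average Kenyan annual income, dollars (assumed)
working_years = 30          # assumed remaining working lifespan

lifetime_output = annual_income * working_years     # gross GDP contribution
net_contribution = lifetime_output - cost_to_save_life

print(lifetime_output)    # 60000
print(net_contribution)   # 56000
```

The relevant comparison is output gained per dollar spent: a First Worlder produces more per year, but the cost of saving one additional First World life is typically far higher, so the ratio can easily come out worse.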
Doesn’t Kenya fail to produce anything useful? First of all, if that were true, they wouldn’t have a GDP of $100 billion, or export $7 billion of goods. Second, the potential of Kenya is probably underutilized because it’s underdeveloped, and part of the process of making it less underdeveloped is making its people healthier. Malaria infections probably cost a couple of IQ points, decrease impulse control, and prevent economic growth. Treating malaria probably isn’t the most effective way to speed economic growth, but it’s not the most effective way to reduce suffering either; EAs talk about it a lot because it’s one of the few interventions which has been well-studied and can absorb almost limitless new funding.
Finally, I think a lot of people think of this in terms of “Africans’ lives are worth less than nothing and I want to get rid of them”. Even if this is where you’re coming from, you’re not getting rid of one billion people, sorry. Your best option is to make Africa less of a mess, so that it can take care of itself and its people don’t try to emigrate elsewhere. I think trying to cure malaria is one way of making Africa less of a mess.
You can still object that this isn’t maximally effective vitalism. But it’s not maximally effective altruism either (just maximally effective among interventions that are proven and can be done at scale). Are there better vitalist interventions that are proven and at scale? I don’t know, because vitalists want to reserve the right to hand-wave their own philosophy while criticizing the practically-operationalized philosophy of others.
If there were such interventions, I think vitalists would find that they wouldn’t be able to bring themselves to devote more than ~10% of their time/energy/money to them, in the same way most effective altruists don’t devote more than 10% of their time/energy/money to altruism. If the analogy held, I think this would teach them humility. Most people aren’t very effective vitalists! In that case, people who are doing a somewhat world-strengthening thing (curing diseases) are still decent allies, especially compared to the vast majority of people who aren’t doing any world-strengthening at all.
At this point, I think a vitalism/altruism divergence would look kind of like the progress studies/EA divergence does now - two groups working on similar projects with different emphases, who form natural coalition partners on most topics.
…Because I Think These People Are Actively Looking For Trouble
If I wanted to strengthen humanity as much as possible, I’d probably work on economic development, curing diseases, or technological progress. I might have slightly different priorities from the effective altruists working on these same causes, but I’d consider them 99%-allies.
Vitalist bloggers mostly don’t seem to think this way. They spend most of their energy criticizing altruists, and never really get around to practicing vitalism at all. When they do make specific suggestions, it’s always things like “Maybe we should have more war, because war strengthens society!” even though the case for war strengthening society is much weaker than the case for (eg) economic development strengthening society.
I think it’s all signaling. People who want to validate an identity as kind and compassionate become altruists. People who want to validate an identity as tough and masculine and hard-headed become vitalists. This is why people bother hitching their star to any philosophy instead of just making money or playing video games or whatever.
When altruists do this wrong, they end up supporting eg clemency for serial murderers, a category for whom it’s especially hard to feel compassion (and therefore, if they do show compassion, it proves they’re so amazingly kind and compassionate). EAs try to restrain these kinds of signaling games by objective calculations of the right thing to do, although some would say these produce new signaling games (eg shrimp welfare).
The mirror image on the vitalist side is when they end up supporting war and suffering, concepts which are especially hard to endorse (and which therefore prove they’re amazingly tough and masculine and hard-headed for daring to endorse them anyway). War and suffering are so impractical that their support can only come from really tough hard-headed masculinity, not from normal human common sense and decency.
But the causes that work best for signaling aren’t necessarily the causes that work best for actually getting the thing that you want, whether that’s a happy society or a strong one.
…And Should Stop
I said above that [signaling and identity defense] is “why people bother hitching their star to any philosophy instead of just making money or playing video games or whatever.” Isn’t this a cynical viewpoint? People were talking about this in the comments a lot: is it impossible to genuinely be altruistic/vitalistic/whatever? If so, isn’t it more honest to just be selfish instead of signaling all the time?
My answer: haha, as if you could manage genuine selfishness. I have a bunch of complicated tax form things I need to do that would get me a few hundred extra dollars per month; I’ve delayed them for over a year now. You’d think that if I were genuinely selfish I’d take the free money. Humans genuinely do what some sort of predictive algorithm figures will send the most dopamine to a certain part of their mesolimbic system; everything else is some kind of complicated willpower-exertion game where, if you contort yourself in exactly the right direction, the reward of feeling responsible and virtuous sends enough dopamine to your mesolimbic system to be worth it. Altruism is a willpower-exertion-contortion game like this, but so are selfishness and everything else.
Katja Grace has a great article called In Praise Of Pretending To Really Try. Maybe at some level all of our values are pretense - you’re just trying to convince other people (or yourself!) that you’re a good person. Maybe this is true of the selfish values (responsibility, diligence, etc) as much as the altruistic ones (at least this is how it works for me - I don’t feel deep internal motivation to do my taxes, I do them some reasonable amount and then think “Okay, I’ve done enough taxes for today that I don’t feel like a loser, I’ll finish them up tomorrow.”)

Pretense is bad insofar as it replaces work optimized for effectiveness (really finishing your taxes, really helping others) with work optimized for signaling (staring at your taxes for an hour then crediting yourself for an hour of work, slacktivism). The solution isn’t to stop pretending and “do it for real”, because that’s not an action available to humans. The solution is to up your pretending game. Don’t feel good about how responsible you are for staring at your taxes without doing them - only feel good if you’ve done good work. Don’t feel good about having made vague gestures in favor of altruism - only feel good in proportion to the people you’ve actually helped. You can do this partly through your own conscience, and partly through joining a community that accords status on this basis.
Once you’re self-motivated to display a virtue in order to fulfill an identity you’ve voluntarily built around it, and also self-motivated to critique your performance of the virtue in order to make it more effective rather than just signaling, then the difference between pretending at the virtue and actually having the virtue shrinks to zero. You can honestly say you have the virtue, even if it’s built atop a tower bottoming out in mesolimbic dopamine or whatever.
So my challenge to the vitalists is to pretend to really try. This challenge is self-enforcing; the more people (including the audience they’re signaling to) are thinking about it, the more natural it becomes. I think once they do that, most of the local difference between vitalism and altruism will disappear. Then we can leave the terminal differences for after the Singularity, just like every other impossible ethical paradox.
[EDIT: Halfway through this post I discovered Richard Chappell’s The Nietzschean Challenge To Effective Altruism, which makes some similar points]