(Original post here)

1: Petey writes:

When I think of happiness 0.01, I don’t think of someone on the edge of suicide. I shudder at the thought of living the sorts of lives the vast majority of people have lived historically, yet almost all of them have wanted and tried to prolong their lives. Given how evolution shaped us, it makes sense that we are wired to care about our survival and hope for things to be better, even under great duress. So a suicidal person would have a happiness level well under 0, probably for an extended period of time.

If you think of a person with 0.01 happiness as someone whose life is pretty decent by our standards, the repugnant conclusion doesn’t seem so repugnant. If you take a page from the negative utilitarians’ book (without subscribing fully to them), you can weight the negatives of pain more heavily than the positives of pleasure, and say that reaching neutral requires many times more pleasure than pain, because pain is more bad than pleasure is good.

Another way to put it is that a life of 0.01 happiness is a life you must actually decide you’d want to live, in addition to your own life, if you had the choice. If your intuition tells you that you wouldn’t want to live it, then its value is not truly >0, and you must shift the scale. Once your intuition tells you that this is a life you’d marginally prefer to get to experience yourself, the repugnant conclusion no longer seems repugnant.

This is a good point, but I have two responses.

First, for me the conclusion’s repugnance doesn’t hinge on the lives of the people involved being especially bad. It hinges on people having to be sadder and poorer than the alternative, their standard of living forever capped, just in order to tile the world with as many warm bodies as possible. I genuinely don’t care how big the population is. I don’t think you can do harm to potential people by not causing them to come into existence. Hurting actual people in order to please potential people seems plenty repugnant to me regardless of the exact level of the injury.

Second, MacAskill actually cites some research about where we should put the zero point. Weirdly, it’s not in the section about the repugnant conclusion; it’s in a separate section about whether we should ascribe positive value to the future.

In one study, researchers ask people to rate their lives on a 1-10 scale; in another, they ask people where on that scale they would put the neutral point at which being alive no longer has positive value. These aren’t the same people, so we can’t take this too seriously, but if we combine the two studies, then about 5-10% of people’s lives are below neutral.

Another study contacted people at random times during their day and asked them whether they would like to skip over their current activity (eg sleepwalk through work, then “wake up” once they got home). Then they compared these in various ways to see whether people would want to skip their entire lives, and about 12% of people did. I don’t entirely understand this study and I’m only repeating it for the nominative determinism value - one of the authors was named Dr. Killingsworth.

There are also a few studies that just ask this question directly; apparently 16% of Americans say their lives contain more suffering than happiness, 44% say they are about even, and 40% say more happiness than suffering; 9% wish they had never been born. A replication in India found similar numbers.

Based on all of this, I think if we trust this methodology, about 10% of people live net negative lives today, which means the neutral point the Repugnant Conclusion would force us down to sits at about the tenth percentile of the current population. This doesn’t quite make sense, because you would think the tenth percentile of America and the tenth percentile of India are very different; there could be positional effects going on here, or it could be that India has some other advantages counterbalancing its poverty (better at family/community/religion?) and so tenth-percentile Indians and Americans are about equally happy.

Happiness isn’t exactly the same as income, but if we assume they sort of correlate, it’s worth pointing out that someone at the tenth percentile of the US income distribution makes about $15,000. So maybe we could estimate that the average person in the Repugnant Conclusion would be like an American who makes $15,000.

Another way of thinking about this: about 8% of Americans are depressed, so the tenth-percentile American is just barely above the threshold for a depression diagnosis; we might expect the average Repugnant Conclusion resident to be in a similar state.


2: Jack Johnson writes:

I always used to make arguments against the repugnant conclusion by saying step C (equalising happiness) was smuggling in communism, or the abolition of Art and Science, etc.

I still think it shows some weird unconscious modern axioms that the step “now equalise everything between people” is seen as uncontroversial and most proofs spend little time on it.

I think this way of thinking about things is understandable but subtly wrong, and that the “now equalize happiness” step in the Repugnant Conclusion is more defensible than communism or other forms of real-life equalizing.

In the Repugnant Conclusion, we’re not creating a world, then redistributing resources equally. We’re asking which of two worlds to create. It’s only coincidence that we were thinking of the unequal one first.

Imagine we thought about them in the opposite order. Start with World P, with 10 billion people, all happiness level 95. Would you like to switch to World Q, which has 5 billion people of happiness level 80 plus 5 billion of happiness level 100? If so, why? You’re just choosing half the people at random, making their lives a little better, and then making the lives of the other half a lot worse, while on average leaving everyone worse off.

MacAskill calls the necessary assumption “non-anti-egalitarianism”, ie you don’t think equality is so bad in and of itself that you would be willing to make the world worse off on average just to avoid equality. While you can always come up with justifications for this (maybe the lack of equality creates something to strive for and gives life meaning, or whatever) I don’t think most people would naturally support this form of anti-egalitarianism if they didn’t know they needed it to “win” the thought experiment.

Communism wants to take stuff away from people who have it for some specific reason (maybe because they earned it), and (according to its opponents) makes people on average worse off. In the thought experiment, nothing is being taken away (because the “losers” never actually had it in the first place), there was never any reason for half the population to have more than the other half, and it makes people on average better off. So we can’t use our anti-communism intuitions to reject the equalizing step of the Repugnant Conclusion.


3: Regarding MacAskill’s thought experiment intended to show that creating happy people is net good, Blacktrance writes:

Conditional on the child’s existence, it’s better for them to be healthy than neutral, but you can’t condition on that if you’re trying to decide whether to create them.

If our options are “sick child”, “neutral child”, and “do nothing”, it’s reasonable to say that creating the neutral child and doing nothing are morally equal for the purposes of this comparison; but if we also have the option “healthy child”, then in that comparison we might treat doing nothing as equal to creating the healthy child. That might sound inconsistent, but the actual rule here is that doing nothing is equal to the best positive-or-neutral child creation option (whatever that might be), and better than any negative one.

For an example of other choices that work kind of like this - imagine you have two options: play Civilization and lose, or go to a moderately interesting museum. It’s hard to say that one of these options is better than the other, so you might as well treat them as equal. But now suppose that you also have the option of playing Civ and winning. That’s presumably more fun than losing, but it’s still not clearly better than the museum, so now “play Civ and win” and “museum” are equal, while “play Civ and lose” is eliminated as an inferior choice.

This is a fascinating analogy, but I’m not sure it’s true. If playing Civ and losing were genuinely exactly equal in utility to going to the museum, then it might be true that playing Civ and winning dominates it. I agree with Blacktrance that this doesn’t feel true, but I think this is just because I’m bad at estimating utilities and they’re so close together that they don’t register as different to me.


4: MartinW writes:

Do people who accept the Repugnant Conclusion also believe in a concrete moral obligation for individuals to strive to have as many children as possible?

Some religions do, but I’d be surprised to find a modern atheist philosopher among them. But if you accept the premise that preventing the existence of a future person is as bad as killing an existing person . . .

I should stress that even the people who accept the repugnant conclusion don’t believe that “preventing the existence of a future person is as bad as killing an existing person”; in many years of talking to weird utilitarians, I have never heard someone assert this.

More generally, I think that talk of “moral obligation” is misleading here. If you accept the repugnant conclusion, creating new people is good. Other things that are good include donating money to charity, being vegetarian, spending time with elderly people, donating your kidney, and living a zero-carbon lifestyle. Basically nobody does all these things, and most people have an attitude of “it is admirable to do this stuff but you don’t have to”. Anyone who did all this stuff would be very strange and probably get a Larissa MacFarquhar profile written about them. If having children is good, it would be another thing in this category.

In fact, it’s worth pointing out how incredibly unlikely it is that your decision to have children has an expected utility of exactly zero. Either you believe creating happy people is good in and of itself, or you believe in the underpopulation crisis, or you believe in the overpopulation crisis, or maybe your kid will become a doctor and save lives, or maybe your kid will become a criminal and murder people. When you add up the expected effects of all of that, it would be quite surprising if the total came out to exactly zero. But that means having a child is either mildly-positive-utility or mildly-negative-utility. Unless you want to ban people from having kids / require them to do so, you had better get on board with the program of “some things can have nonzero utility but also be optional”.

Also, a few commenters point out that even if you did have an obligation to have children, you would probably have an even stronger obligation to spend the money it would take to raise them on saving other people’s children (eg donating it to orphanages, etc).


5: Rana Dexsin writes:

Before reading the rest of this, I want to register this bit:

> Is it morally good to add five billion more people with slightly less constant excruciating suffering (happiness -90) to hell? No, this is obviously bad

My intuition straightforwardly disagreed with this on first read! It is a good thing to add five billion more people with slightly less constant excruciating suffering to hell, conditional on hell being the universe you start with. It is not a good thing to add them to non-hell, for instance by adding them to the world we currently live in.

You are the first person I’ve ever met or heard of who genuinely has average utilitarian philosophical intuitions. I feel like you should be in a museum somewhere. Also, I hope no one ever puts you in charge of Hell.


6: Magic9Mushroom writes:

> If I can’t find any system of axioms that doesn’t do something terrible when extended to infinity,

The philosophers have gotten ahead of you on that one. Surprised you haven’t already read it, actually.

https://www.iffs.se/media/2264/an-impossibility-theorem-for-welfarist-axiologies-in-ep-2000.pdf

It’s a proof that any consistent system of utilitarianism must either accept the Repugnant Conclusion (“a larger population with very low but positive welfare is better than a small population with very high welfare, for sufficient values of ‘larger’”), the Sadistic Conclusion (“it is better, for high-average-welfare populations, to add a small number of people with negative welfare than a larger number with low-but-positive welfare, for sufficient values of ‘larger’”), the Anti-Egalitarian Conclusion (“for any population of some number of people and equal utility among all of those people, there is a population with lower average utility distributed unevenly that is better”), or the Oppression Olympics (“all improvement of people’s lives is of zero moral value unless it is improvement of the worst life in existence”).

This proof probably has something to do with why those 29 philosophers said the Repugnant Conclusion shouldn’t be grounds to disqualify a moral accounting - it is known that no coherent system of utilitarian ethics avoids all unintuitive results, and the RC is one of the more palatable candidates (this is where the “it’s not actually as bad as it looks, because by definition low positive welfare is still a life actually worth living, and also in reality people of 0.001 welfare eat more than 1/1000 as much as people of 1 welfare so the result of applying RC-logic in the real world isn’t infinitesimal individual welfare” arguments come in).

(Also, the most obvious eye-pecking of “making kids that are below average is wrong” is “if everyone follows this, the human race goes extinct, as for any non-empty population of real people there will be someone below average who shouldn’t have been born”. You also get the Sadistic Conclusion, because you assigned a non-infinitesimal negative value to creating people with positive welfare.)

Thanks, I had forgotten about that.

I think I am going to go with “morality prohibits bringing below-zero-happiness people into existence, and says nothing at all about bringing new above-zero-happiness people into existence; we’ll make decisions about those based on how we’re feeling that day and how likely it is to lead to some terrible result down the line.”


7: hammerspacetime writes:

Have we considered that there is a middle ground between “future people matter as much as current people” and “future people don’t matter at all”? If you want numbers you can use a function that discounts the value the further in the future it is, just like we do for money or simulations, to account for uncertainty.

I imagine people would argue over what the right discount function should be, but this seems better than the alternative. It also lets us factor in the extent to which we are in a better position to find solutions for our near term problems than for far-future problems.

There’s a pragmatic discount rate, where we discount future actions based on our uncertainty about whether we can do them at all. I am near-certain that if I give a beggar $100 today, he will get the $100. But if I leave $100 in a bank with a will saying that it should be given to a poor person in the year 5000 AD, someone could steal it, the bank could go out of business, the bank could lose my will, humankind could go extinct, etc. If there’s only a 1% chance that money saved in this way will really reach its target, then we have an implicit 99% discount rate per 3000 years.
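
To make the arithmetic concrete, here is a rough sketch of how a survival probability like that converts into an annualized discount rate (the 1% and 3,000-year figures are just the illustrative numbers from above):

```python
# Illustrative numbers only: suppose there's a 1% chance the saved money
# actually reaches a poor person about 3,000 years from now.
p_reaches_target = 0.01
years = 3000

# Solve (1 - annual_discount) ** years = p_reaches_target for the annual rate.
annual_discount = 1 - p_reaches_target ** (1 / years)

print(f"Implied pragmatic discount rate: {annual_discount:.4%} per year")
# -> roughly 0.15% per year, which compounds to the 99% discount over 3,000 years
```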

There’s been some debate about whether we should additionally have an explicit discount rate, where we count future people as genuinely less important than us. Most people come out against, because why should we? It doesn’t intuitively seem true that the suffering of future people matters less than the suffering of people today.

Eliezer Yudkowsky and Robin Hanson had an interesting debate about this in 2008; you can read Eliezer here and Robin here. I think Robin later admitted that his view meant people in the past were much more valuable than people today, so much so that we should let an entire continent’s worth of present people die in order to prevent a caveman from stubbing his toe, and that he sort of kind of endorses this conclusion; see here.


8: Hari Seldon writes:

The issue I always have with ultralarge-potential-future utilitarian arguments is that the Carter Catastrophe argument can be made the same way from the same premises, and that that argument says that the probability of this ultralarge future is proportionately ultrasmall.

Imagine two black boxes (and this will sound very familiar to anyone who has read Manifold: Time). Put one red marble in both Box A and Box B. Then, put nine black marbles in Box A and nine hundred ninety-nine black marbles in Box B. Then, shuffle the boxes around so that you don’t know which is which, pick a box, and start drawing out marbles at random. And then suppose that the third marble you get is the red marble, after two black ones.

If you were asked, with that information and nothing else, whether the box in front of you was Box A or Box B, you’d probably say ‘Box A’. Sure, it’s possible to pull the red marble out from 999 black ones after just three tries. It could happen. But it’s a lot less likely than pulling it out from a box with just 9 black marbles.

The biggest projected future mentioned in this book is the one where humanity colonizes the entire Virgo Cluster, and has a total population of 100 nonillion over the course of its entire history. By comparison, roughly 100 billion human beings have ever lived. If the Virgo Cluster future is in fact our actual future, then only 1 thousand billion billionth of all the humans across history have been born yet. But, the odds of me being in the first thousand billion billionth of humanity are somewhere on the order of a thousand billion billion to one against. The larger the proposed future, the earlier in its history we’d have to be, and the less likely we would declare that a priori.

If every human who ever lived or ever will live said “I am not in the first 0.01% of humans to be born”, 99.99% of them would be right. If we’re going by Bayesian reasoning, that’s an awfully strong prior to overcome.

I also thought about that when reading this!
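
For concreteness, here is a minimal sketch of the Bayesian update in the two-box example above (the numbers are the commenter’s; the code is just my illustration):

```python
from fractions import Fraction

# Box A: 1 red + 9 black = 10 marbles; Box B: 1 red + 999 black = 1000 marbles.
# We picked a box at random and the red marble showed up on the third draw.
# The red marble is equally likely to occupy any draw position, so
# P(red is the 3rd draw | box) = 1 / (number of marbles in that box).
prior_A = prior_B = Fraction(1, 2)
like_A = Fraction(1, 10)
like_B = Fraction(1, 1000)

posterior_A = prior_A * like_A / (prior_A * like_A + prior_B * like_B)
print(posterior_A, float(posterior_A))  # 100/101, ~0.99: almost certainly Box A

# The same logic scaled up: ~100 billion humans so far out of 100 nonillion (1e32)
# in the Virgo Cluster future would put us in the first 1e-21 of all humans.
print(100e9 / 100e30)  # 1e-21
```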

My main concern is that it’s hard to come up with a model where the future doesn’t have more beings in it than the present. The universe is still relatively young. Suppose humankind wipes itself out tomorrow; surely most intelligent life in the universe will be aliens who live after this point?

But I think something like the Grabby Aliens model could explain this: intelligent species arise relatively young in the universe’s history, get replaced by non-conscious AIs, and the AIs spread across the universe until there are no more uncolonized stars to spawn biological life.

This is really awkward because it suggests AIs can’t be conscious - not just that one particular AI design isn’t conscious, but that no alien race will design a conscious AI. An alternative possibility is that AIs naturally remain single hiveminds, so that most individuals are biological lifeforms even if AI eventually dominates the galaxy. But how could an AI remain a single hivemind when spread across distances so vast that the lightspeed limit hinders communication?

I’m not sure how to resolve this except that maybe some idiot destroys the universe in the next few hundred million years.


9: David Chapman and many other people took me as attacking philosophy:

.@slatestarcodex vs. philosophy [philosophy is bad. don’t do it. gently ridicule anyone who takes it seriously] astralcodexten.substack.com/p/book-review-…

(@Meaningness on Twitter, Aug 23, 2022; https://twitter.com/Meaningness/status/1562059228977590277)

I disagree with this. I joked about it defeating the point of philosophy, but I think that realistically I was doing philosophy just like everyone else. In a sense all attacks on philosophy are doing philosophy, but I feel like I was doing philosophy even more than the bare minimum that you have to in order to have an opinion at all.

I’m not sure how moral realist vs. anti-realist I am. The best I can do is say I’m some kind of intuitionist. I have some moral intuitions. Maybe some of them are contradictory. Maybe I will abandon some of them when I think about them more clearly. When we do moral philosophy, we’re examining our intuitions to see which ones survive vs. dissolve under logical argument.

The repugnant conclusion tries to collide two intuitions: first, that the series of steps that gets you there is valid, and second, that the conclusion is bad. If you feel the first intuition very strongly and the second one weakly, then you “have discovered” that the repugnant conclusion is actually okay and you really should be creating lots of mildly happy people. I have the opposite intuitions: I’m less sure about the series of steps than I am that I’m definitely unhappy with the conclusion, and I will reject whatever I need to reject to avoid ending up there.

In fact, I’m not sure what to reject. Most of the simple solutions (eg switch to average utilitarianism) end up somewhere even worse. On the other hand, I know that it’s not impossible to come up with something that satisfies my intuitions, because “just stay at World A” (the 5 billion very happy people) satisfies them just fine.

So I think of this as a question of dividing up a surplus. World A is very nice. It seems possible that we can do better than World A. How much better? I’m not sure, because some things which superficially appear better turn out to be worse. Someone who is smarter than I am might be able to come up with a proof that the best we can do according to my intuitions is X amount better - in which case I will acknowledge they are a great philosopher.

Nobody knows exactly what their moral system is - even the very serious utilitarians who accept the Repugnant Conclusion can’t explain their moral system so precisely that a computer could calculate it. We all have speculative guesses about which parts of our intuition we can describe by clear rules, and which ones have to stay vague and “I know it when I see it”. I prefer to leave this part of population ethics vague until someone can find rules that don’t violate my intuitions so blatantly. This isn’t “anti-philosophy”, it’s doing philosophy the same as we do it everywhere else.



10: Siberian Fox writes:

excellent meme from astralcodexten.substack.com/p/book-review-… but I still disagree. I’m open to being wrong because it means I get my eyes pecked by seagulls, but I do believe a galactic civilization with trillions of barely worth living meh lives > a bubble utopia of 5000 people around wasteland

(@SilverVVulpes on Twitter, Aug 24, 2022)

I’m also sympathetic to the galactic civilization, but only because it’s glorious. This is different from “it has a lot of people experiencing mild contentment”.

Isaac Asimov wrote some books about the Spacers, far-future humans who live the lives of old-timey aristocrats with thousands of robot servants each. Suppose we imagine a civilization of super-Spacers with only one human per thousand star systems - even though all of these star systems are inhabited by robots who have built beautiful monuments and are doing good scientific and creative work (which the humans know about and appreciate). Overall there are only five thousand humans in the galaxy, but galactic civilization is super-impressive and getting better every day. Sometimes some people die and others are born, but it’s always around five thousand.

Or you can have the city of Jonesboro, Arkansas (population: 80,000) exactly as it currently exists, preserved in some kind of force field. For some reason the economy doesn’t collapse even though it has no trade partners; maybe if you send trucks full of goods into the force field, it sends back trucks full of other goods. Sometimes some people die and others are born, but it never changes much or gets better.

I find that the same part of me that prefers the galactic supercivilization in Siberian Fox’s example also prefers the galactic supercivilization in my example, even though it’s hard to justify with total utilitarianism (there are fewer than 10% as many people; even though their lives are probably much better, I don’t think the intuition depends on them being more than 10x better).


11: Alexander Berger writes:

Interesting/surprising to me that the Repugnant Conclusion is where @slatestarcodex gets off the crazy train: astralcodexten.substack.com (Book Review: What We Owe The Future…)

(@albrgr on Twitter, Aug 23, 2022; https://twitter.com/albrgr/status/1562206120302690306)

You can probably predict my response here - I don’t think I’m doing anything that could be described as “getting off the crazy train”. Like if someone is thinking “Scott believes in so many weird things, like AI risk and deregulating the FDA and so on, it’s weird that this is where he’s choosing to stop believing weird things”, I think they’re drawing the weird-thing category in the wrong place.

I believe in AI risk because I think it is going to happen. If I’m a biased person, I can choose to bias myself not to believe in it, but if I try to be unbiased, the best I can do is just follow the evidence wherever it leads, even if it goes somewhere crazy.

But in the end I am kind of a moral nonrealist who is playing at moral realism because it seems to help my intuitions be more coherent. If I ever discovered that my moral system requires me to torture as many people as possible, I would back off, realize something was wrong, and decide not to play the moral realism game in that particular way. This is what’s happening with the repugnant conclusion.

Maybe Berger was including my belief in eg animal welfare as a crazy train stop. I do think this is different. If my moral code is “suffering is wrong”, and I learn that animals can suffer, that’s a real fact about the universe that I can’t deny without potentially violating my moral code. If someone says “I think we should treat potential people exactly the same as real people”, and I notice my moral intuitions don’t care about this, then you can’t make me.

On questions of truth, or questions of how to genuinely help promote happiness and avoid suffering, I will follow the crazy train to the ends of the earth. But if it’s some weird spur line to “how about we make everyone worse off for no reason?” I don’t think my epistemic or moral commitments require me to follow it there.


12: Long Disc writes:

The hockey stick chart with world economic growth does not prove that we live in an exceptional time. Indeed, if you take a chart of a simple exponential function y=exp(A*x) between 0 and T, then for any T you can find a value of A such that the chart looks just like that. And yet there is nothing special about that or any other value of T.

Several people had this concern, but I think the chart isn’t exponential, it’s hyperbolic. An exponential chart would have the same growth rate at all times, but I think the growth rate in ancient times was more like 0.1% per year, compared to more like 2% per year today.
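
To illustrate the difference, here is a toy sketch (the functional forms and dates are mine, purely illustrative, not from the book): an exponential grows at the same rate at every date, while a hyperbola’s growth rate keeps climbing as it approaches its blow-up date, which better matches 0.1% per year in ancient times versus roughly 2% today.

```python
import math

def exponential(t):
    return math.exp(0.02 * t)    # constant ~2%/year growth at every date

def hyperbolic(t):
    return 1.0 / (2100 - t)      # growth rate keeps rising, blowing up at t = 2100

def growth_rate(f, t, dt=1.0):
    """Approximate annual growth rate of f at time t."""
    return f(t + dt) / f(t) - 1

for year in (0, 1000, 2000, 2050, 2090):
    print(year,
          f"exponential: {growth_rate(exponential, year):.2%}",
          f"hyperbolic: {growth_rate(hyperbolic, year):.2%}")
# The exponential prints ~2% at every date; the hyperbolic climbs from
# ~0.05% per year at t=0 to ~11% per year by t=2090.
```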


13: David Manheim writes:

I think there’s a really simple argument for pushing longtermism . . . the default behavior of humanity is so very short-term that pushing in the direction of considering long-term issues is critical.

For example, AI risk. As I’ve argued before, many AI-risk skeptics have the view that we’re decades away from AGI, so we don’t need to worry, whereas many AI-safety researchers have the view that we might have as little as a few decades until AGI. Is 30 years “long-term”? Well, in the current view of countries, companies, and most people, it’s unimaginably far away for planning. If MacAskill suggesting that we should care about the long-term future gets people to discuss AI-risk, and I think we’d all agree it has, then we’re all better off for it.

Ditto seeing how little action climate change receives, for all the attention it gets. And the same for pandemic prevention. It’s even worse for nuclear war prevention, or food supply security, which don’t even get attention. And to be clear, all of these seem like they are obviously under-resourced with a discount rate of 2%, rather than MacAskill’s suggested 0%. I’d argue this is true for the neglected issues even if we were discounting at 5%, where the 30-year future is only worth about a quarter as much as the present - though the case for economic reactions to climate change like imposing a tax of $500/ton CO2, which I think is probably justified using a more reasonable discount rate, is harmed.
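
As a quick check on the arithmetic in that comment, here is a back-of-the-envelope sketch using standard compound discounting (my own illustration, not the commenter’s):

```python
def present_value(amount, annual_rate, years):
    """Discount a benefit received `years` from now back to the present."""
    return amount / (1 + annual_rate) ** years

# Value today of $1 of benefit arriving 30 years from now, at various rates:
for rate in (0.00, 0.02, 0.05):
    print(f"{rate:.0%}: ${present_value(1, rate, 30):.2f}")
# 0%: $1.00, 2%: $0.55, 5%: $0.23 - at 5%, the 30-year future is indeed worth
# about a quarter as much as the present.
```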


14: BK writes:

Stealing my own comment from a related reddit thread on MacAskill: “The thing I took away from [his profile in the New Yorker] is that contrary to “near-termist” views, longtermism has no effective feedback mechanism for when it’s gone off the rails.

As covered in the review of The Anti-Politics Machine, even neartermist interventions can go off the rails. Even simple, effective interventions like bednets are resulting in environmental pollution or being used as fishing nets! But at least we can pick up on these mistakes after a couple of years, and course correct or reprioritise.

With longtermist views, there is no feedback mechanism on unforeseen externalities, mistaken assumptions, etc. All you get at best is deontological assessments like “hmmm, they seem to be spending money on nice offices instead of doing the work”, as covered in the article, or maybe “holy crap they’re speeding up where we want them to slow down!” The need for epistemic humility in light of exceedingly poor feedback mechanisms calls for a deprioritisation of longtermist concerns compared to what is currently the general feel in what is communicated from the community.”

I agree this is a consideration, but I don’t think we should elevate good feedback mechanisms into the be-all-and-end-all of decision-making criteria.

Consider smashing your toes with a hammer. It has a great feedback mechanism; if you’re not in terrible pain, you probably missed, and you should re-check your aim. In contrast, trying to cure cancer has very poor feedback; although you might have subgoals like “kill tumor cells in a test tube”, you can never be sure that those subgoals are really on the path to curing cancer (lots of things that kill tumor cells in a test tube are useless in real life).

But this doesn’t mean that people currently trying to cure cancer should switch to trying to smash their toes with a hammer. If something’s important, then the lack of a good feedback mechanism should worry you but not necessarily turn you off entirely.


15: Mentat Saboteur writes:

> MacAskill introduces long-termism with the Broken Bottle hypothetical: you are hiking in the forest and you drop a bottle. It breaks into sharp glass shards. You expect a barefoot child to run down the trail and injure herself. Should you pick up the shards? What if the trail is rarely used, and it would be a whole year before the expected injury? What if it is very rarely used, and it would be a millennium?

This is a really bad hypothetical! I’ve done a lot of barefoot running. The sharp edges of glass erode very quickly, and glass becomes pretty much harmless to barefoot runners unless it has been recently broken (less than a week in most outdoor conditions). Even if it’s still sharp, it’s not a very serious threat (I’ve cut my foot fairly early in a run and had no trouble running many more miles with no lasting harm done). When you run barefoot you watch where you step and would simply not step on the glass. And trail running is extremely advanced for barefooters - rocks and branches are far more dangerous to a barefoot runner than glass, so any child who can comfortably run on a trail has experience and very tough feet, and would not be threatened by mere glass shards. This is a scenario imagined by someone who has clearly never run even a mile unshod.

Thanks, now I don’t have to be a long-termist! Heck, if someone can convince me that water doesn’t really damage fancy suits, I won’t have to be an altruist at all!