Absurdity Bias, Neom Edition
Alexandros M expresses concern about my post on Neom.
My post mostly just makes fun of Neom. My main argument against it is absurdity: a skyscraper the height of WTC1 and the length of Ireland? Come on, that’s absurd!
But isn’t the absurdity heuristic a cognitive bias? Didn’t lots of true things sound absurd before they turned out to be true (eg evolution, quantum mechanics)? Don’t I specifically believe in things many people have found self-evidently absurd (eg the multiverse, AI risk)? Shouldn’t I be more careful about “this sounds silly to me, so I’m going to make fun of it”?
Alexandros asks: “Can I convince you to read the Sequences? There are some real underappreciated classics.” (excerpt edited to remove examples that someone would misinterpret and start a flame war over)
Here’s a possible argument for why not: everything has to bottom out in absurdity arguments at some level or other.
Suppose I carefully calculated that, with modern construction techniques, building Neom would cost 10x more than its allotted budget. This argument contains an implied premise: “and the Saudis can’t construct things 10x cheaper than anyone else”. How do we know the Saudis can’t construct things 10x cheaper than anyone else? The argument itself doesn’t prove this; it’s just left as too absurd to need justification.
Suppose I did want to address this objection: I carefully researched existing construction projects in Saudi Arabia, checked how cheap they were, calculated how much they could cut costs using every trick available to them, and found the savings came to less than 10x. My argument still contains the implied premise “there’s no Saudi conspiracy to develop amazing construction technology and hide it from the rest of the world”. But this is another absurdity heuristic - I have no argument beyond the sense that such a conspiracy would be absurd. I might eventually be able to come up with an argument supporting this, but that argument, too, would have implied premises resting on absurdity arguments.
So how far down this chain should I go? One plausible answer is “just stop at the first level where your interlocutors accept your absurdity argument”. Anyone here think Neom’s a good idea? No? Even Alexandros agrees it probably won’t work. So maybe this is the right level of absurdity. If I were pitching my post to people who mostly thought Neom was a good idea, then I might try showing that it would cost 10x more than its expected budget, and see whether they agreed with me that the Saudis being able to construct things 10x cheaper than anyone else was absurd. If they did agree, then I’d have hit the right level of argument. And if they agree with me right away, before I make any careful calculations, then it was fine for me to just point at Neom and gesture “That’s absurd!”
I think this is basically the right answer for communication questions, like how to structure a blog post. When I criticize communicators for relying on the absurdity heuristic too much, it’s because they’re claiming to adjudicate a question with people on both sides, but then retreating to absurdity instead. When I was young, a friend recommended a book on ESP to me, full of pseudoscientific studies purporting to prove ESP was real. I looked for skeptical rebuttals, and they were all “Ha ha! ESP? That’s absurd, you morons!” These people were just clogging up Google search results that could have been giving me real arguments. But if nobody has ever heard of Neom, and I expect my readers to immediately agree that Neom is absurd, then it’s fine (in a post describing Neom rather than debating it) to stop at the first level.
(I do worry that this might create an echo chamber: people start out thinking Neom is a bad idea for the obvious reasons, then read my post and count “ACX also thinks it’s a bad idea” as additional evidence. I think my obligation here is not to exaggerate the amount of thought that went into my assessment, which I hope I didn’t.)
But the absurdity bias isn’t just about communication. What about when I’m thinking things through in my head, alone? I’m still going to be asking questions like “is Neom possible?” and having to decide what level of argument to stop at.
To put it another way: which of your assumptions do you accept vs. question? Question none of your assumptions, and you’re a closed-minded bigot. Question all of your assumptions, and you get stuck in an infinite regress. The only way to escape (outside of a formal system with official axioms) is to just trust your own intuitive judgment at some point. So maybe you should just start out doing that.
Except that some people seem to actually be doing something wrong. The guy who hears about evolution and says “I know that monkeys can’t turn into humans, this is so absurd that I don’t even have to think about the question any further” is doing something wrong. How do you avoid being that guy?
Some people try to dodge the question and say that all rationality is basically a social process. Maybe on my own, I will naturally stop at whatever level seems self-evident to me. Then other people might challenge me, and I can reassess. But I hate this answer. It seems to be preemptively giving up and hoping other people are less lazy than you are. It’s like answering a child’s question about how to do a math problem with “ask a grown-up”. A coward’s way out!
Eliezer Yudkowsky gives his answer here:
I can think of three major circumstances where the [useful] absurdity heuristic gives rise to a [bad] absurdity bias:
The first case is when we have information about underlying laws which should override surface reasoning. If you know why most objects fall, and you can calculate how fast they fall, then your calculation that a helium balloon should rise at such-and-such a rate, ought to strictly override the absurdity of an object falling upward. If you can do deep calculations, you have no need for qualitative surface reasoning. But we may find it hard to attend to mere calculations in the face of surface absurdity, until we see the balloon rise.
(In 1913, Lee de Forest was accused of fraud for selling stock in an impossible endeavor, the Radio Telephone Company: “De Forest has said in many newspapers and over his signature that it would be possible to transmit human voice across the Atlantic before many years. Based on these absurd and deliberately misleading statements, the misguided public…has been persuaded to purchase stock in his company…”)
The second case is a generalization of the first - attending to surface absurdity in the face of abstract information that ought to override it. If people cannot accept that studies show that marginal spending on medicine has zero net effect, because it seems absurd - violating the surface rule that “medicine cures” - then I would call this “absurdity bias”. There are many reasons that people may fail to attend to abstract information or integrate it incorrectly. I think it worth distinguishing cases where the failure arises from absurdity detectors going off.
The third case is when the absurdity heuristic simply doesn’t work - the process is not stable in its surface properties over the range of extrapolation - and yet people use it anyway. The future is usually “absurd” - it is unstable in its surface rules over fifty-year intervals.
This doesn’t mean that anything can happen. Of all the events in the 20th century that would have been “absurd” by the standards of the 19th century, not a single one - to the best of our knowledge - violated the law of conservation of energy, which was known in 1850. Reality is not up for grabs; it works by rules even more precise than the ones we believe in instinctively.
The point is not that you can say anything you like about the future and no one can contradict you; but, rather, that the particular practice of crying “Absurd!” has historically been an extremely poor heuristic for predicting the future. Over the last few centuries, the absurdity heuristic has done worse than maximum entropy - ruled out the actual outcomes as being far too absurd to be considered. You would have been better off saying “I don’t know”.
This is all true as far as it goes, but it’s still just rules for the rare situations when your intuitive judgments of absurdity are contradicted by clear facts that someone else is handing you on a silver platter. But how do you, pondering a question on your own, know when to stop because a line of argument strikes you as absurd, versus when to stick around, gather more facts, and see whether your first impression was accurate?
I don’t have a great answer here, but here are some parts of a mediocre answer:
- Calibration training. Make predictions so you know how often you’re right vs. wrong about things. If events you assign a 1% chance end up happening a third of the time, you know you’re stopping too soon when you make absurdity arguments. (A toy sketch of tracking this follows the list.)
- Do the social epistemology thing, regardless of whether or not it’s a coward’s way out. Honestly, someone who is able to re-examine their absurdity heuristics after someone they trust asks them to and hands them the facts on a silver platter is still doing better than 99.9% of people in the world.
- Maybe, every so often, do a deep dive into fact-checking something, even if you’re absolutely sure it’s true. Maybe if everybody does this, then someone will (by coincidence) catch the false absurdities, and then the social epistemology thing can work.
- Examine why a belief has even come to your attention in the first place. If you inexplicably decide to investigate the possibility that a random number between one and a million will come up as 282,058, then you can dismiss it with little thought, because you had no reason to believe it in the first place. The only reason “Neom is possible” deserves scrutiny is because the Saudi government claims that it is; in order to dismiss it as absurd, I need to explain why the Saudi government would waste $500 billion on an obviously absurd idea. This is easy: their king is a megalomaniac, plus people are afraid to voice dissent. (A toy Bayesian version of this reasoning follows below.) I admit this process is pretty much the same thing as Bulverism and bias arguments, which I hate and which always fail. Too bad, there is no royal road. Sometimes there isn’t even a muddy goat path.
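For the calibration-training point, here is a minimal sketch of what keeping your own record could look like - every prediction and number in it is invented purely to illustrate the bookkeeping, not taken from any real forecasting log:

```python
from collections import defaultdict

# Each entry: (probability I stated, whether it actually happened).
# These are made-up predictions purely to illustrate the bookkeeping.
predictions = [
    (0.01, False), (0.01, True), (0.01, False),
    (0.50, True), (0.50, False),
    (0.90, True), (0.90, True), (0.90, False),
]

# Group outcomes by the probability that was stated for them.
buckets = defaultdict(list)
for stated, happened in predictions:
    buckets[stated].append(happened)

# Compare stated probability to observed frequency in each bucket.
for stated in sorted(buckets):
    outcomes = buckets[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"said {stated:.0%}: happened {observed:.0%} of the time "
          f"({len(outcomes)} predictions)")
```

If the 1% bucket keeps coming in at 33%, your absurdity detector is firing long before it should.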
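And for the last point, a toy Bayes calculation - with all the numbers invented for illustration - of how much evidence “somebody claims X” provides for X, depending on how likely they would be to make the claim if it were false:

```python
def posterior(prior, p_claim_if_true, p_claim_if_false):
    """P(X is true | someone claims X), by Bayes' rule."""
    joint_true = prior * p_claim_if_true
    joint_false = (1 - prior) * p_claim_if_false
    return joint_true / (joint_true + joint_false)

# The random number 282,058: nobody has claimed it, so there is
# nothing to update on, and the one-in-a-million prior stands.
print(posterior(1 / 1_000_000, 1.0, 1.0))   # ~0.000001

# "Neom is possible", as claimed by a government spending $500
# billion: the claim is only strong evidence if they wouldn't make
# it when it's false. If megalomania plus fear of dissent makes the
# claim almost as likely either way, the update is tiny (made-up
# numbers):
print(posterior(0.05, 1.0, 0.8))            # ~0.06, barely above 0.05
```

The “their king is a megalomaniac” explanation is what licenses setting the claim-if-false probability high - which is exactly the Bulverism-shaped move the last bullet admits to making.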