Kelly Bets On Civilization
Scott Aaronson makes the case for being less than maximally hostile to AI development:
Here’s an example I think about constantly: activists and intellectuals of the 70s and 80s felt absolutely sure that they were doing the right thing to battle nuclear power. At least, I’ve never read about any of them having a smidgen of doubt. Why would they? They were standing against nuclear weapons proliferation, and terrifying meltdowns like Three Mile Island and Chernobyl, and radioactive waste poisoning the water and soil and causing three-eyed fish. They were saving the world. Of course the greedy nuclear executives, the C. Montgomery Burnses, claimed that their good atom-smashing was different from the bad atom-smashing, but they would say that, wouldn’t they?
We now know that, by tying up nuclear power in endless bureaucracy and driving its cost ever higher, on the principle that if nuclear is economically competitive then it ipso facto hasn’t been made safe enough, what the antinuclear activists were really doing was to force an ever-greater reliance on fossil fuels. They thereby created the conditions for the climate catastrophe of today. They weren’t saving the human future; they were destroying it. Their certainty, in opposing the march of a particular scary-looking technology, was as misplaced as it’s possible to be. Our descendants will suffer the consequences.
Read carefully, he and I don’t disagree. He’s not scoffing at doomsday predictions; he’s arguing more against people who say that AIs should be banned because they might spread misinformation or gaslight people or whatever.
Still, I think about this argument a lot. I agree he’s right about nuclear power. When it comes out in a few months, I’ll be reviewing a book that makes this same point about institutional review boards: that our fear of a tiny handful of deaths from unethical science has caused hundreds of thousands of deaths from delaying ethical and life-saving medical progress. The YIMBY movement makes a similar point about housing: we hoped to prevent harm by subjecting all new construction to a host of different reviews - environmental, cultural, equity-related - and instead we caused vast harm by creating an epidemic of homelessness and forcing the middle classes to spend increasingly unaffordable sums on rent. This pattern typifies the modern age; any attempt to restore our rightful utopian flying-car future will have to start with rejecting it as vigorously as possible.
So how can I object when Aaronson turns the same lens on AI?
First, you are allowed to use Inside View. If Osama bin Laden is starting a supervirus lab, and objects that you shouldn’t shut him down because “in the past, shutting down progress out of exaggerated fear of potential harm has killed far more people than the progress itself ever could”, you are permitted to respond “yes, but you are Osama bin Laden, and this is a supervirus lab.” You don’t have to give every company trying to build the Torment Nexus a free pass just because they can figure out a way to place their work in a reference class which is usually good. All other technologies fail in predictable and limited ways: if a buggy AI merely exploded, that would be no worse than a buggy airplane or nuclear plant. The concern is that a buggy AI will pretend to work well, bide its time, and plot how to cause maximum damage while remaining undetected. Also it’s smarter than you. Also this might work so well that nobody realizes the AIs are all buggy until there are millions of them.
But maybe opponents of every technology have some particular story why theirs is a special case. So let me try one more argument, which I think is closer to my true objection.
There’s a concept in finance called Kelly betting. It briefly gained some fame last year as a thing that FTX failed at, before people realized FTX had failed at many more fundamental things. It works like this (warning - I am bad at math and may have gotten some of this wrong): suppose you start with $1000. You’re at a casino with one game: you can, once per day, bet however much you want on a coin flip, double-or-nothing. You’re slightly psychic, so you have a 75% chance of guessing the coin flip right. That means that, on average, every dollar you bet comes back as $1.50. Clearly this is a great opportunity. But how much do you bet per day?
Tempting but wrong answer: bet all of it each time. After all, on average you gain money each flip - each $1 invested in the coin flip game becomes $1.50. If you bet everything, then after five coin flips you’ll have (on average) about $7,594. But if you just bet $1 each time, then (on average) you’ll only have $1,002.50. So obviously bet as much as possible, right?
But if you bet everything on each of five coin flips, there’s a 76% chance that you’ve lost all your money. Increase to 50 coin flips, and there’s a 99.99994% chance that you’ve lost all your money. So although technically this has the highest “average utility”, all of it is coming from one super-amazing sliver of probability-space where you own more money than exists in the entire world. In every other timeline, you’re broke.
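If you want to check those numbers, here’s a quick sketch in Python (my own back-of-the-envelope, so the warning about my math still applies) under the stated assumptions - a 75% chance of winning each flip, double-or-nothing payout:

```python
# Back-of-the-envelope check of the coin-flip numbers above.
# Assumptions: 75% chance of winning each flip, double-or-nothing payout.

p = 0.75          # probability of calling the flip correctly
start = 1000.0    # starting bankroll

# Bet everything every flip: expected wealth multiplies by 1.5 per flip.
print(f"Bet it all, 5 flips, average wealth: ${start * (2 * p) ** 5:,.2f}")   # ~$7,593.75

# Bet a fixed $1 every flip: each flip adds $0.50 in expectation.
print(f"Bet $1, 5 flips, average wealth: ${start + 5 * (2 * p - 1):,.2f}")    # $1,002.50

# Bet everything every flip: you go broke unless you win every single time.
for n in (5, 50):
    print(f"Bet it all, {n} flips, chance of going broke: {1 - p ** n:.6%}")
```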
So how much should you bet? $1 is too little. These bets do, on average, return an extra 50% of whatever you stake; it would take forever to get anywhere betting $1 at a time. You want something that’s high enough to increase your wealth quickly, but not so high that it’s devastating and you can’t come back from it on the rare occasions when you lose.
In this case, if I understand the Kelly math right, you should bet half of your bankroll each time. But the lesson I take from this isn’t just the exact math. It’s: even if you know a really good bet, don’t bet everything at once.
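Here’s another quick sketch (again, my own arithmetic, take it with the same grain of salt) of why half is the sweet spot: the Kelly bet maximizes how fast a typical bankroll grows, not the raw average:

```python
# Why "bet half"? For an even-money bet, the Kelly fraction is p - q,
# which with a 75% win rate is 0.75 - 0.25 = 0.5.
import math

p = 0.75  # probability of winning each double-or-nothing flip

def typical_growth(f):
    """Expected log-growth per flip when staking fraction f of your bankroll,
    expressed as a typical percentage gain per flip."""
    g = p * math.log(1 + f) + (1 - p) * math.log(1 - f)
    return math.exp(g) - 1

print(f"Kelly fraction: {2 * p - 1:.2f}")
for f in (0.01, 0.25, 0.50, 0.75, 0.99):
    print(f"stake {f:.0%} per flip -> typical growth {typical_growth(f):+.1%} per flip")
```

Staking 50% grows a typical bankroll by about 14% per flip; staking 99% shrinks it by nearly half per flip, even though that strategy has the highest plain average - which is the whole point.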
Science and technology are great bets. Their benefits are much greater than their harms. Whenever you get a chance to bet something significantly less than everything in the world on science or technology, you should take it. Your occasional losses will be dwarfed by your frequent and colossal gains. If we’d gone full-speed-ahead on nuclear power, we might have had one or two more Chernobyls - but we’d have saved the tens of thousands of people who die each year from fossil-fuel-pollution-related diseases, ended global warming, and had unlimited cheap energy.
But science and technology aren’t perfect bets. Gain-of-function research on coronaviruses was a big loss. Leaded gasoline, chlorofluorocarbon-based refrigerants, thalidomide for morning sickness - all of these were high-tech ideas that ended up going badly, not to mention all the individual planes that crashed or rockets that exploded.
Society (mostly) recovered from all of these. A world where people invent gasoline and refrigerants and medication (and sometimes fail and cause harm) is vastly better than one where we never try to have any of these things. I’m not saying technology isn’t a great bet. It’s a great bet!
But you never stake everything you’ve got, even on a great bet. Pursuing a technology that could destroy the world is betting 100%.
It’s not that you should never do this. Every technology has some risk of destroying the world; the first time someone tried vaccination, there was a 0.000000001% chance it could have resulted in some weird super-pathogen that killed everybody. I agree with Scott Aaronson: a world where nobody ever tries to create AI at all, until we die of something else a century or two later, is pretty depressing.
But we have to treat world-destroying risks differently from other risks. A world where we try ten things like nuclear power, each of which has a 50-50 chance of going well vs. badly, is probably a world where a handful of people have died in freak accidents but everyone else lives in safety and abundance.
A world where we try ten things like AI, with the same odds - where any single bad outcome kills everyone - gives us a 1/1024 chance of more abundance than we can possibly conceive of, and a 1023/1024 chance that we’re all dead.