[Disclaimer: I’m not an AI policy person; the people who are have thought about these scenarios in more depth, and if they disagree with this I’ll link to their rebuttals]

Some people argue against delaying AI because it might make China (or someone else) “win” the AI “race”.

But suppose AI is “only” a normal transformative technology, no more important than electricity, automobiles, or computers.

Who “won” the electricity “race”? Maybe Thomas Edison, but that didn’t cause Edison’s descendants to rule the world as emperors, or make Menlo Park a second Rome. It didn’t even especially advantage America. Edison personally got rich, the overall balance of power didn’t change, and today all developed countries have electricity.

Who “won” the automobile race? Karl Benz? Henry Ford? There were many steps between the first halting prototype and widespread adoption. Benz and Ford both personally got rich, their companies remain influential today, and Mannheim and Detroit remain important auto manufacturing hubs. But other companies like Toyota and Tesla are equally important, the overall balance of power didn’t change, and today all developed countries have automobiles.

Who “won” the computer “race”? Charles Babbage? Alan Turing? John von Neumann? Steve Jobs? Bill Gates? Again, it was a long path of incremental improvements. Jobs and Gates got rich, and their hometowns are big tech hubs, but other people have gotten even richer, and the world chip manufacturing center is in Taiwan now for some reason. The overall balance of power didn’t change (except maybe during a brief window when the Bombes broke Enigma) and today all developed countries have computers.

The most consequential “races” have been for specific military technologies during wars; most famously, the US won the “race” for nuclear weapons. America’s enemies got nukes soon afterwards, but the brief moment of dominance was enough to win World War II. Maybe in some sense the British won a “race” for radar, although it wasn’t a “race” in the sense that the Axis knew about it and was competing to get it first. Maybe in some sense countries “race” to get better fighter jets, tanks, satellites, etc. than their rivals. But ordinary mortals don’t concern themselves with such things. No part of US automobile policy is based on “winning the car race” against China, in the sense that consumer car R&D will feed into tanks and our military risks being left behind.

I think some people hear transhumanists talk about an “AI race” and mindlessly repeat it, without asking what assumptions it commits them to. Transhumanists talk about winning an AI “race” for two reasons:

First, because if you believe unaligned AI could destroy humanity at some point, it’s important to align AI before it gets to that point. Companies that care about alignment might race to reach that point before companies that don’t care about alignment. Right now this is all academic, because nobody knows how to align AIs. But if someone figured that out, we would want those people to win a race.[1]

Second, because some transhumanists think AI could cause a technological singularity that speedruns the next several millennia’s worth of advances in a few years. This probably only happens if superintelligent AI can figure out ways to improve its own intelligence in a critical feedback loop. I’m pretty skeptical of these scenarios in the current AI paradigm, where compute is often the limiting resource, but other people disagree. In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships.
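To make the disagreement concrete, here’s a deliberately crude toy model (the variables and rates are my own illustrative stand-ins, not something fast-takeoff proponents have committed to). If an AI with capability $I(t)$ can reinvest all of that capability into improving itself at some rate $k$, you get

$$\frac{dI}{dt} = kI \quad\Rightarrow\quad I(t) = I_0 e^{kt},$$

which explodes on a timescale of about $1/k$: a hard takeoff, where a six-month lead compounds into an unbridgeable one. If progress is instead bottlenecked by a fixed compute budget $C$, the dynamics look more like

$$\frac{dI}{dt} = kC,$$

which is merely linear: capability keeps growing, but a six-month lead stays roughly a six-month lead, and rivals can catch up the way they caught up on electricity and cars. The fast-versus-slow takeoff disagreement is largely a disagreement about which of these regimes the real feedback loop is in.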

We remember the race for nuclear weapons because they’re a binary technology - either you have them, or you don’t. When the US invented stealth bombers, its enemies had slightly worse planes that were slightly less stealthy. But when the US invented nukes, its enemies were stuck with normal bombs; there is no slightly-worse-nuke that can only destroy half a city. Everywhere outside the most extreme transhumanist scenarios, AI is more like the stealth bomber. You may have GPT-3, GPT-4, some future GPT-5, but a two-year gap means you have slightly worse AIs, not that you have no AI at all. The only case where there’s a single critical point - where you either have the transformative AI or nothing - is in the hard-takeoff scenario where at a certain threshold AI recursively self-improves to infinity. If someone reaches this threshold before you do, then you’ve lost a race![2]

Everyone I know who believes in fast takeoffs is a doomer. There’s no way you go to sleep with a normal only-slightly-above-human-level AI, wake up with it having godlike powers, and find it still doing what you want. You have no chance to debug the AI at level N and get it ready for level N+1. You skip straight from level N to level N+1,000,000. The AI is radically rewriting its code many times in a single night. You are pretty doomed.

If you don’t believe in crazy science fiction scenarios like these, fine. But then why are you so sure that it’s crucial to “win” the AI “race”? If you’re sure these kinds of things won’t happen, then you should treat AI like electricity, automobiles, or stealth bombers. It might tip the balance of a badly timed war, but otherwise you can just steal the tech and catch up.

I’m harping on this point because a lot of people want to have it both ways. They say we shouldn’t care about alignment, because AI will just be another technology. But also, we can’t worry about alignment, because that would be an unacceptable delay when we need to “win” the AI “race”. If AI is just another technology, we don’t need to worry about this! And in the scenarios where you do need to win races, you really want to worry about alignment.

“Wouldn’t Xi Jinping put people in camps?” Why? He put the Uighurs in camps because he was afraid they would revolt against Chinese rule. Nobody can revolt against someone who controls a technological singularity, so why put them in camps?

“Wouldn’t Joe Biden overregulate small business?” There won’t be small business! If you want to build a customized personal utopian megastructure, you won’t hire a small business, you’ll just say “AI, build me a customized personal utopian megastructure” and it will materialize in front of you. Probably you should avoid doing this in a star system someone else owns, but there will be enough star systems to go around. If people insist on having an economy for old times’ sake, you can just build a Matrioshka brain the size of Jupiter, ask it which policies are good for the economy, then do those ones.

“Wouldn’t Mark Zuckerberg perpetuate structural racism?” You will be able to change your race, age, gender, species, and state of matter at will. Nobody will even remember what race you were. If for some reason the glowing clouds of plasma that used to be black people have smaller customized personal utopian megastructures than the glowing clouds of plasma that used to be white people, you can ask the brain the size of Jupiter how to solve it, and it will tell you (I bet it involves using slightly different euphemisms for things; that’s always been the answer so far).

People come up with these crazy stories about “winning races” that don’t matter without a technological singularity - then act like any of their current issues will still matter after a technological singularity. Sorry, no, it will be weirder than that.

Whoever ends up in control of the post-singularity world will find that there’s too much surplus for dividing-the-surplus problems to feel compelling anymore. As long as they’re not actively a sadist who wants to hurt people, they can just let people enjoy the technological utopia they’ve created, and implement a few basic rules like “if someone tries to punch someone else, the laws of physics will change so that their hand phases through the other person’s body”.

And yeah, that “they’re not actively a sadist” clause is doing a lot of work. I want whoever rules the post-singularity future to have enough decency to avoid ruining it, and to take the Jupiter-sized brain’s advice when it has some. I think any of Xi, Biden, or Zuckerberg meets this low bar. There are some ideologues and terrible people who don’t, but they seem far away from the cutting edge of AI.

This isn’t to say the future won’t have controversial political issues. Should you be allowed to wirehead yourself so thoroughly that you never want to stop? In what situations should people be allowed to have children? (Surely not never, but also surely not so freely that a shockwave of trillions of children spreads at near-light-speed across the galaxy.) Who gets the closest star systems? (There will be enough star systems to go around, but I assume the ones closer to Earth will be higher status.) What kind of sims can you voluntarily consent to participate in? I’m okay with these questions being decided by the usual decision-making methods of the National People’s Congress, the US constitution, or Meta’s corporate charter. At the very least, I don’t think switching from one of these to another is a big enough deal that it should trade off against the chance we survive at all.

  1. Or, rather, we’d want everyone to cooperate in implementing their solution. But if we can’t get this, then second-best would be for the good guys to win a race.

  2. Even in the unlikely scenario where AI causes a singularity and remains aligned, I have trouble worrying too much about races. The whole point of a singularity is that it’s hard to imagine what happens on the other side of it. I care a lot how much relative power Xi Jinping, Mark Zuckerberg, and Joe Biden have today, but I don’t know how much I care about them after a singularity.