Grading My 2018 Predictions For 2023
In 2018, to celebrate the fifth anniversary of my old blog, I made some predictions about what the next five years would be like.
This was a different experience than my other predictions. Predicting five years out doesn’t feel five times harder than predicting one year out. It feels fifty times harder. Not a lot of genuinely new trends can surface in one year; you’re limited to a few basic questions on how the current plotlines will end. But five years feels like you’re really predicting “the future”. Things felt so fuzzy that I (partly) abandoned my usual clear-resolution probabilistic predictions for total guesses.
Last week was the tenth anniversary of my old blog (I accept your congratulations), so it’s time to look back on my terrible doomed 2018 predictions and see how I did at predicting the last half decade, starting with:
Artificial Intelligence
2018 was before the birth of GPT-2, the first decent language model, so even including this category was pretty bold. I wrote:
AI will be marked by various spectacular achievements, plus nobody being willing to say the spectacular achievements signify anything broader. AI will beat humans at progressively more complicated games, and we will hear how games are totally different from real life and this is just a cool parlor trick. If AI translation becomes flawless, we will hear how language is just a formal system that can be brute-forced without understanding. If AI can generate images and even stories to a prompt, everyone will agree this is totally different from real art or storytelling. Nothing that happens in the interval until 2023 will encourage anyone to change this way of thinking. There will not be a Truckpocalypse before 2023. Technological unemployment will continue to be a topic of academic debate that might show up if you crunch the numbers just right, but there will be no obvious sign that it is happening on a large scale. Everyone will tell me I am wrong about this, but I will be right, and they will just be interpreting other things (change in labor force composition, change in disability policies, effects of outsourcing, etc) as obvious visible signs of technological unemployment, the same as people do now. AI safety concerns will occupy about the same percent of the public imagination as today.
1. Average person can hail a self-driving car in at least one US city: 80%
2. …in at least five of ten largest US cities: 30%
3. At least 5% of US truck drivers have been replaced by self-driving trucks: 10%
4. Average person can buy a self-driving car for less than $100,000: 30%
5. AI beats a top human player at Starcraft: 70%
6. MIRI still exists in 2023: 80%
7. AI risk as a field subjectively feels more/same/less widely accepted than today: 50%/40%/10%
I think I nailed this.
I don’t know how I even came up with “AI can generate images and even stories to a prompt” as a possibility; it wasn’t on the radar back then!
Two small quibbles: nobody is talking about technological unemployment, because unemployment rates are historically low. And AI safety concerns might occupy a very slightly larger percent of the public imagination.
I grade 1, 5, and 6 as coming true; 2, 3, and 4 as not coming true; and 7 as “more”. I got all of these directionally correct.
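For anyone who wants a number instead of a letter grade, here’s a minimal sketch of turning resolutions like these into a Brier score, using the six binary predictions above (prediction 7 is multi-outcome, so I leave it out). Lower is better; always answering 50% scores 0.25.

```python
# Brier score = mean squared difference between stated probability and outcome.
# Entries are the six binary AI predictions above: (probability, came true?).
predictions = [
    (0.80, True),   # 1. hail a self-driving car in at least one US city
    (0.30, False),  # 2. ...in at least five of the ten largest US cities
    (0.10, False),  # 3. 5% of truck drivers replaced
    (0.30, False),  # 4. self-driving car for under $100,000
    (0.70, True),   # 5. AI beats a top human at Starcraft
    (0.80, True),   # 6. MIRI still exists in 2023
]

brier = sum((p - float(happened)) ** 2 for p, happened in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")  # 0.060, comfortably better than chance's 0.25
```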
Overall grade: A
World Affairs
In 2018, the UK was debating how to Brexit, Syria was winding down its civil war, and ISIS was still considered a threat. I wrote:
The European Union will not collapse. It will get some credibility from everyone hating its enemies – Brexit, the nationalist right, etc – and some more credibility by being halfway-competent at its economic mission. Nobody will secede from anywhere. The crisis of nationalism will briefly die down as the shock of Syrian refugees wears off, then reignite (possibly after 2023) with the focus on African migrants. At some point European Muslims may decide they don’t like African migrants much either, at which point there may be some very weird alliances.
1. UK leaves EU (or still on track to do so): 95%
2. No “far-right” party in power (executive or legislative) in any of France, Germany, UK, Italy, Netherlands, Sweden, at any time: 50%
3. No other country currently in EU votes to leave: 50%
4. No overt major power war in the Middle East (Israel spending a couple weeks destroying stuff in Lebanon doesn’t count): 60%
5. Mohammed bin Salman still in power in Saudi Arabia in 2023: 60%
6. Sub-Saharan Africa averages GDP growth greater than 2.5% over 2018 – 2023: 60%
7. Vladimir Putin is still in charge of Russia: 70%
8. If there’s a war in the Middle East where US intervention is plausible, US decides to intervene (at least as much as it did in Syria): 70%
Countries that may have an especially good half-decade: Israel, India, Nigeria, most of East Africa, Iran. Countries that may have an especially bad half-decade: Russia, Saudi Arabia, South Africa, UK. The Middle East will get worse before it gets better, especially Lebanon and the Arabian Peninsula (Syria might get better, though).
I think these were boring cowardly nothing-ever-happens predictions that mostly came true. Various feared EU crises did not materialize. There was no African migrant crisis, but I predicted that might come after 2023 anyway. Unsurprisingly I missed the biggest geopolitical story of this period, the Ukraine war.
I grade 1, 3, 4, 5, and 7 as true, and 2 and 6 as false. I don’t think my country predictions were especially good or bad, except that Russia and the UK have indeed been having a hard time. The Middle East as a whole did not get worse. Lebanon did have an economic collapse but has stayed relatively politically stable; the Arabian Peninsula is doing pretty well with a cease-fire still hanging on in Yemen.
Overall grade: B
US Culture
Religion will continue to retreat from US public life. As it becomes less important, mainstream society will treat it as less of an outgroup and more of a fargroup. Everyone will assume Christians have some sort of vague spiritual wisdom, much like Buddhists do. Everyone will agree evangelicals or anyone with a real religious opinion is just straight-out misinterpreting the Bible, the same way any Muslim who does something bad is misinterpreting the Koran. Christian mysticism will become more popular among intellectuals. Lots of people will talk about how real Christianity opposes capitalism. There may not literally be a black lesbian Pope, but everyone will agree that there should be, and people will become mildly surprised when you remind them that the Pope is white, male, and sexually inactive.
The culture wars will continue to be marked by both sides scoring an unrelenting series of own-goals, with the victory going to whoever can make their supporters shut up first. The best case scenario for the Right is that Jordan Peterson’s ability to not instantly get ostracized and destroyed signals a new era of basically decent people being able to speak out against social justice; this launches a cascade of people doing so, and the vague group consisting of Jordan Peterson, Sam Harris, Steven Pinker, Jonathan Haidt, etc coalesces into a perfectly respectable force no more controversial than the gun lobby or the pro-life movement or something. With social justice no longer able to enforce its own sacredness values against blasphemy, it loses a lot of credibility and ends up no more powerful or religion-like than eg Christianity. The best case scenario for the Left is that the alt-right makes some more noise, the media is able to relentlessly keep everyone’s focus on the alt-right, the words ALT-RIGHT get seared into the public consciousness every single day on every single news website, and everyone is so afraid of being associated with the alt-right that they shut up about any disagreements with the consensus they might have. I predict both of these will happen, but the Right’s win-scenario will come together faster and they will score a minor victory.
1. Church attendance rates lower in 2023 than 2018: 90%
2. At least one US politician, Congressman or above, explicitly identifies as alt-right (in more than just one off-the-cuff comment) and refuses to back down or qualify: 10%
3. …is overtly racist (says eg “America should be for white people” or “White people are superior” and means it, as a major plank of their platform), refuses to back down or qualify: 10%
4. Gay marriage support rate is higher on 1/1/2023 than 1/1/2018: 95%
5. Percent transgender is higher on 1/1/2023 than 1/1/2018: 95%
6. Social justice movement appears less powerful/important in 2023 than currently: 60%
I think all of this is basically true, though I’m probably judging this through the same biased idiosyncratic social lens that I used in 2018 to see these as rising trends, so I’m not too impressed with myself.
I judge 1, 4, 5, and 6 as having happened, and 2 and 3 as not having happened, making me directionally correct on all predictions. You might think these were too easy, but I made them because in 2018 a lot of people were panicking about a (probably poorly handled) poll saying that support for gay rights was collapsing In The Age Of Trump, and I was pushing back against that. Time has proven me right.
Overall grade: B+
US Politics
2018 was the middle of the Trump administration. It was also the Socialist Moment when people thought something Bernie something something Chapo Trap House meant the far-left was on the rise. I wrote:
The crisis of the Republican Party will turn out to have been overblown. Trump’s policies have been so standard-Republican that there will be no problem integrating him into the standard Republican pantheon, plus or minus some concerns about his personality which will disappear once he personally leaves the stage. Some competent demagogue (maybe Ted Cruz or Mike Pence) will use some phrase equivalent to “compassionate Trumpism”, everyone will agree it is a good idea, and in practice it will be exactly the same as what Republicans have been doing forever. The party might move slightly to the right on immigration, but this will be made easy by a fall in corporate demand for underpriced Mexican farm labor, and might be trivial if there’s a border wall and they can declare mission accomplished. If the post-Trump standard-bearer has the slightest amount of personal continence, he should end up with a more-or-less united party who view Trump as a flawed but ultimately positive figure, like how they view GW Bush. Also, I predict we see a lot more of Ted Cruz than people are expecting.
On the other hand, everyone will have underestimated the extent of crisis in the Democratic Party. The worst-case scenario is Kamala Harris rising to the main contender against Bernie Sanders in the 2020 primary. Bernie attacks her and her followers as against true progressive values, bringing up her work defending overcrowded California prisons as a useful source of unpaid labor. Harris supporters attack Bernie as a sexist white man trying to keep a woman of color down (wait until the prison thing gets described as “slavery”). Everything that happened in 2016 between Clinton and Sanders looks like mild teasing between friends in comparison. If non-Sanderites rally around Booker or Warren instead, the result will be slightly less apocalyptic but still much worse than anyone expects. The only plausible way I can see for the Dems to avoid this is if Sanders dies or becomes too sick to run before 2020. This could tear apart the Democratic Party in the long-term, but in the short term it doesn’t even mean they won’t win the election – it will just mean a bunch of people who loathe each other temporarily hold their nose and vote against Trump.
It will become more and more apparent that there are three separate groups: progressives, conservatives, and neoliberals. How exactly they sort themselves into two parties is going to be interesting. The easiest continuation-of-current-trends option is neoliberals+progressives vs. conservatives, with neoliberals+progressives winning easily. But progressives are starting to wonder if neoliberals’ support is worth the watering-down of their program, and neoliberals are starting to wonder if progressives’ support is worth constantly feeding more power to people they increasingly consider crazy. The Republicans used some weird demonic magic to hold together conservatives and neoliberals for a long time; I suspect the Democrats will be less good at this. A weak and fractious Democratic coalition plus a rock-hard conservative Republican non-coalition might be stable under Median Voter Theorem considerations. For like ten years. Until there are enough minorities that the Democrats are just overwhelmingly powerful (no, minorities are not going to start identifying as white and voting Republican en masse). I have no idea what will happen then. Maybe the Democrats will go extra socialist, the neoliberals and market minorities will switch back to the Republicans, and we can finally have normal reasonable class warfare again instead of whatever weird ethno-cultural thing is happening now?
1. Trump wins 2020: 20%
2. Republicans win Presidency in 2020: 40%
3. Sanders wins 2020: 10%
4. Democrats win Presidency in 2020: 60%
5. At least one US state has approved single-payer health-care by 2023: 70%
6. At least one US state has de facto decriminalized hallucinogens: 20%
7. At least one US state has seceded (de jure or de facto): 1%
8. At least 10 members of 2022 Congress from neither Dems or GOP: 1%
9. US in at least new one major war (death toll of 1000+ US soldiers): 40%
10. Roe v. Wade substantially overturned: 1%
11. At least one major (Obamacare-level) federal health care reform bill passed: 20%
12. At least one major (Brady Act level) federal gun control bill passed: 20%
13. Marijuana legal on the federal level (states can still ban): 40%
14. Neoliberals will be mostly Democrat/evenly split/Republican in 2023: 60%/20%/20%
15. Political polarization will be worse/the same/better in 2023: 50%/30%/20%
Basically none of this happened.
The Republican Party hasn’t moved on from Trump in any direction. They have stayed exactly at Trump. Ron DeSantis seems personally successful and good at inciting culture war panics, but I don’t think there is a “DeSantis-ism” that offers a particular vision of 21st century conservatism. Ted Cruz remains irrelevant.
The Democrats have not had a crisis. They went with Joe Biden, a likeable compromise candidate who I didn’t even mention as a possibility, and it worked. Kamala Harris didn’t even get close to becoming president, although Biden made the extremely predictable mistake of making her VP.
The neoliberal/progressive split continues to exist, but I don’t think it’s tenser than in 2018, and might even be less tense now that socialists have stopped having their Moment.
I count predictions 4, 6, and 10 as having happened, and 1, 2, 3, 5, 7, 8, 9, 11, 12, and 13 as not having happened. I’m resolving 14 as Democrat, 15 as the same. My biggest failure here was 10, where I gave Roe v. Wade only a 1% chance (!) of being overturned. Looking back, in early 2018 the court was 5-4 Democrat [edit: 5-4 Republican, but one of them was Kennedy, who wasn’t going to overturn Roe], and one of the Republicans was John Roberts, who’s moderate and hates change. I was thinking the court would need two new Republicans, which was a lot to ask of a half-over presidential term, and which required Republicans to keep the Senate during the midterms. And even if the two new justices arrived, overturning Roe would be a startling and unusual break with precedent; even if the justices wanted to restrict abortion, I expected them to do something which kept a fig leaf of not having overturned Roe. And even if I was totally wrong, I expected it to take more than five years for all of this to happen. But in fact they got two more Republican justices, they were willing to break with precedent, and they did it fast.
Looking back I probably had enough information that I should have put this at more like 5% - 10%. I’m not sure I had enough information to go higher than that, but it sure is embarrassing.
Overall grade: F
Economics
First World economies will increasingly be marked by an Officialness Divide. Rich people, the government, and corporations will use formal, well-regulated, traditional institutions. Poor people (and to an increasing degree middle-class people) will use informal gig economies supported by Silicon Valley companies whose main skill is staying a step ahead of regulators. Think business travelers staying at the Hilton and riding taxis, vs. low-prospect twenty-somethings staying at Air BnBs and taking Ubers. As Obamacare collapses, health insurance will start turning into one of the formal, well-regulated, traditional institutions limited to college grads with good job prospects. What the unofficial version of health care will be remains to be seen. If past eras have been Stone Age, Bronze Age, Iron Age, Information Age, etc, the future may be the Ability-To-Circumvent-Regulations Age.
Cryptocurrency will neither collapse nor take over everything. It will become integrated into the existing system and regulated to the point of uselessness. No matter how private and untraceable the next generation of cryptocurrencies are, people will buy and exchange them through big corporate websites that do everything they can to stay on the government’s good side. Multinationals will occasionally debate using crypto to transfer their profits from one place to another, then decide that would make people angry and decide not to. There may be rare crypto-related accounting tricks approximately of the same magnitude as the “headquarter your company in the Cayman Islands” trick. A few cryptocurrencies might achieve the same sort of role PayPal has today, only slightly cooler. Things like Ethereum prediction markets might actually work, again mostly by being too niche for the government to care very much. A few die-hards will use pure crypto to buy drugs over the black market, but not significantly more than do so today, and the government will mostly leave them alone as too boring to crush.
1. Percent of people in US without health insurance (outside those covered by free government programs) is higher in 2023 than 2018: 80%
2. Health care costs (as % of economy) continue to increase at least as much as before: 70%
3. 1 Bitcoin costs above $1K: 80%
4. …above $10K: 50%
5. …above $100K: 5%
6. Bitcoin is still the highest market cap cryptocurrency: 40%
7. Someone figures out Satoshi’s true identity to my satisfaction: 30%
8. Browser-crypto-mining becomes a big deal and replaces ads on 10%+ of websites: 5%
I don’t think the Officialness Divide or the Ability-To-Circumvent-Regulations Age arrived in any meaningful way. I think I was riding high off the age of Uber and Bitcoin, and expected people to continue to have that level of creative/entrepreneurial spirit, and instead, they didn’t.
On the other hand, my crypto prediction seems . . . surprisingly spot-on? Commenters told me I was being silly, that either crypto would take over everything or collapse under the weight of its own uselessness. Instead it did just what I predicted. If only I could be this prescient when actually investing.
I judge 2, 3, 4, and 6 as having happened (though 2 is confounded by COVID). 1, 5, 7, and 8 didn’t happen.
Overall grade: B-
Science/Technology
Polygenic scores go public – not necessarily by 2023, but not long after. It becomes possible to look at your 23andMe results and get a weak estimate of your height, IQ, criminality, et cetera. Somebody checks their spouse’s score and finds that their desirable/undesirable traits are/aren’t genetic and will/won’t be passed down to their children; this is treated as a Social Crisis but nobody really knows what to do about it. People in China or Korea start actually doing this on a large scale. If there is intelligence enhancement, it looks like third-party services that screen your gametes for genetic diseases and just so happen to give you the full genome which can be fed to a polygenic scoring app before you decide which one to implant. The first people to do this aren’t necessarily the super-rich, so much as people who are able to put the pieces together and figure out that this is an option. If you think genetics discourse is bad now, wait until polygenic score predictors become consumerized. There will be everything from “the predictor said I would be tall but actually I am medium height, this proves genes aren’t real” to “Should we track children by genetic IQ predictions for some reason even though we have their actual IQ scores right here?” Also, the products will probably be normed on white (Asian?) test subjects and not work very well on people of other races; expect everyone to say unbelievably idiotic things about this for a while.
There will be two or three competing companies offering low-level space tourism by 2023. Prices will be in the $100,000 range for a few minutes in suborbit. The infrastructure for Mars and Moon landings will be starting to look promising, but nobody will have performed any manned landings between now and then. The most exciting edge of the possibility range is that five or six companies are competing to bring rich tourists to Bigelow space stations in orbit.
1. Widely accepted paper claims a polygenic score predicting over 25% of human intelligence: 70%
2. …50% or more: 20%
3. At least one person is known to have had a “designer baby” genetically edited for something other than preventing specific high-risk disease: 10%
4. At least a thousand people have had such babies, and it’s well known where people can go to do it: 5%
5. At least one cloned human baby, survives beyond one day after birth: 10%
6. Average person can check their polygenic IQ score for reasonable fee (doesn’t have to be very good) in 2023: 80%
7. At least one directly glutamatergic antidepressant approved by FDA: 20%
8. At least one directly neurotrophic antidepressant approved by FDA: 20%
9. At least one genuinely novel antipsychotic approved by FDA: 30%
10. MDMA approved for therapeutic use by FDA: 50%
11. Psilocybin approved for general therapeutic use in at least one country: 30%
12. Gary Taubes’ insulin resistance theory of nutrition has significantly more scholarly acceptance than today: 10%
13. Paleo diet is generally considered and recommended by doctors as best weight-loss diet for average person: 30%
14. SpaceX has launched BFR to orbit: 50%
15. SpaceX has launched a man around the moon: 50%
16. SLS sends an Orion around the moon: 30%
17. Someone has landed a man on the moon: 1%
18. SpaceX has landed (not crashed) an object on Mars: 5%
19. At least one frequently-inhabited private space station in orbit: 30%
We definitely have the technology to do the polygenic score thing. I think impute.me might provide the service I predicted, but if so, it’s made exactly zero waves - not even at the same “somewhat known among tech-literate people” level as 23andMe. From a technical point of view this was a good prediction; from a social point of view I was completely off in thinking anyone would care.
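For what it’s worth, the computation itself is simple enough to consumerize. Here’s a minimal sketch of how a polygenic score gets calculated; the SNP IDs and effect weights are made up for illustration, not taken from any real predictor.

```python
# A polygenic score is a weighted sum over genotyped variants: for each SNP,
# (published effect size) * (number of effect alleles you carry, 0-2).
# All SNP IDs and weights below are hypothetical, for illustration only.
effect_weights = {"rs1111111": 0.031, "rs2222222": -0.012, "rs3333333": 0.008}
genotype = {"rs1111111": 2, "rs2222222": 1, "rs3333333": 0}  # eg parsed from a 23andMe file

raw_score = sum(w * genotype.get(snp, 0) for snp, w in effect_weights.items())
print(f"Raw polygenic score: {raw_score:.3f}")  # 0.050

# Real predictors sum over hundreds of thousands of SNPs, then standardize the
# raw score against a reference population; a predictor normed on one ancestry
# group transfers poorly to others, which is the norming problem noted in the
# 2018 prediction above.
```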
The polygenic embryo selection product exists and is available through LifeView. I can’t remember whether I knew about them in 2018 or whether this was a good prediction.
As far as I can tell, none of the space tourism stuff worked out and the whole field is stuck in the same annoying limbo as for the past decade and a half.
I count 6 and 7 as having happened (the supposedly-glutamatergic antidepressant is Auvelity, though I don’t know if that’s the real MOA), and 1, 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 as not having happened. This lopsided ratio doesn’t necessarily mean I’m a bad predictor (I gave most of them low percent chances), but it does mean most of the exciting things that I hoped would happen didn’t.
Overall grade: C-
X-Risks
Global existential risks will hopefully not be a big part of the 2018-2023 period. If they are, it will be because somebody did something incredibly stupid or awful with infectious diseases. Even a small scare with this will provoke a massive response, which will be implemented in a panic and with all the finesse of post-9/11 America determining airport security. Along with the obvious ramifications, there will be weird consequences for censorship and the media, with some outlets discussing other kinds of biorisks and the government wanting them to stop giving people ideas. The world in which this becomes an issue before 2023 is not a very good world for very many reasons.
1. Bioengineering project kills at least five people: 20%
2. …at least five thousand people: 5%
3. Paris Agreement still in effect, most countries generally making good-faith effort to comply: 80%
4. US still nominally committed to Paris Agreement: 60%
People on the subreddit were impressed with this, since it mentioned mishandling infectious disease, heavy-handed government response, and resulting media censorship.
But I don’t want to take too much credit here - I was thinking of something much more obviously artificial than COVID (even if it does turn out to have been a lab leak), and of heavy-handed government response in the sense of cracking down on bio research. That was almost the only area in which the government’s response wasn’t heavy-handed!
Really all that this proves is that, like every rationalist, I’ve been in a constant state of mild panic about pandemic-related risks since forever. I don’t think I got any particular details of COVID right.
I grade 3 and 4 as having happened, and 1 and 2 as not having done so.
Overall grade: B
Overall Thoughts
It was hard to make specific predictions about things five years in advance.
I made vague predictions, but it was hard to tell what to think of them. Some took the form of “things won’t change”, and that was true. Is this always a good bet? Is it picking up pennies in front of a steamroller? Sometimes I feel like I boldly said things wouldn’t change when everyone else thought they would go crazy; am I remembering right? How much credit do I get for this?
The prediction I am most proud of is the (admittedly conditional, not strongly asserted) possibility that AIs would be able to generate stories and images to a prompt. The prediction I’m least proud of is that Roe v. Wade definitely wouldn’t be overturned.
I can’t tell if I was better at predicting technical rather than social issues. If so, I’m not sure whether it was because that’s my strength, because that’s inherently easier, or because I said vague things about technical issues but foolishly said specific things about social issues.
Overall these were neither particularly great nor particularly bad. I might have stronger opinions if more people tried this exercise and did better/worse than me.
Predictions For 2028?
There can’t possibly be a way this ends other than me getting things horrendously wrong and looking like an idiot, to be mocked by people who have never tried making formal predictions themselves. I’m going to get in so much trouble and it will be terrible.
Still, for the sake of completeness, and of recording for all time what I believed in 2023, here are some vague thoughts, heuristics, and fields that I’m using to think about the next five years. All otherwise undated predictions are about 1/1/2028.
AGE OF MIRACLES AND WONDERS: We seem to be in the beginning of a slow takeoff. We should expect things to get very strange for however many years we have left before the singularity. So far the takeoff really is glacially slow (everyone talking about the blindingly fast pace of AI advances is anchored to different alternatives than I am) which just means more time to gawk at stuff. It’s going to be wild. That having been said, I don’t expect a singularity before 2028.
SOLOW’S LAW: “Computers are changing everything except the productivity statistics”. Even though AIs will be dazzling and wild, they won’t immediately revolutionize the economy (cf. self-driving cars). This doesn’t mean they can’t become a $100 billion field (there are new $100 billion fields all the time!) or revolutionize a few industries, but I would be mildly surprised if they showed up as a visible break from trend on the big macroeconomic indicators (GDP, unemployment, productivity, etc). I think all of this will show up eventually, but not by 2028.
- Some big macroeconomic indicator (eg GDP, unemployment, inflation) shows a visible bump or dip as a direct effect of AI (“direct effect” excludes eg an AI-designed pandemic killing people): 15%
LIMITS OF SCALING: In theory, GPT-4 will bump up against some fundamental limits of scaling (eg it will use all text ever written as efficiently as possible in its training corpus). I’ve heard various claims about easy ways to get around this, which will probably work; I expect scaling to continue to produce gains, but this is less obvious than it’s been for the past five years. Training GPT-4 will cost $100M, which is a lot. Apple spends $20 billion per year on R&D, so it’s not like tech companies can’t spend more money if they want to, but after the next two OOMs it will start being bet-the-company money even for large actors (the cost arithmetic is sketched after the predictions below). I still think it will probably happen, but all of these things might be hiccups that slow things down a little, maybe?
- The leading big tech company (eg Google/Apple/Meta) is (clearly ahead of/approximately caught up to/clearly still behind) the leading AI-only company (DeepMind/OpenAI/Anthropic) in the quality of their AI products: (25%/50%/25%)
- Gary Marcus can still figure out at least three semi-normal (ie not SolidGoldMagikarp style) situations where the most advanced language AIs make ridiculous errors that a human teenager wouldn’t make, more than half the time they’re asked the questions: 30%
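As promised above, the cost arithmetic as a back-of-envelope sketch, assuming the $100M training-cost figure from the text:

```python
# "Two more OOMs": each order of magnitude multiplies training cost by 10.
base_cost = 100e6  # assumed GPT-4-class training cost, per the text
for ooms in range(3):
    print(f"+{ooms} OOMs: ${base_cost * 10 ** ooms:,.0f}")
# +0 OOMs: $100,000,000
# +1 OOMs: $1,000,000,000
# +2 OOMs: $10,000,000,000  (half of Apple's entire $20B/year R&D budget)
```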
ACTION TRANSFORMERS: Maybe the next big thing. This is where you can give a language model an Internet connection, tell it something like “respond to all my emails” or “order some cheap Chinese food that looks good off UberEats, my credit card number is XXXXX”, and it will do it. I think this technology will be ready in the next five years, although it might suffer from the self-driving car problem where you need more nines of reliability than it can provide. You want to be really sure it won’t respond to an email from your boss by telling her to f@#k off, or buy a Chinese restaurant instead of food from a Chinese restaurant. I think it will start as an assistant that will run all of its decisions by you, then gradually expand out from there (a sketch of that pattern follows the predictions below).
- AI can play arbitrary computer games at human level. I will count this as successful if an off-the-shelf AI, given a random computer game and some kind of API that lets it play against itself however many times it wants, can reach the performance of a mediocre human. The human programmers can fiddle with it to make it compatible with that particular game’s API, but this is expected to take a few days of work and not involve redesigning the AI from scratch: 25%
- As above, but the AI can’t play against itself as many times as it wants. Using knowledge it’s gained from other computer games or modalities, it has to play the new computer game about as well as a first-time human player, and improve over time at about the same rate as a first-time human player (I don’t care if it’s one order of magnitude slower, just not millions of times slower): 10%
- Some product like “AI plus an internal scratchpad” or “AI with stable memory” fulfills the promise of that model, and is useful enough that it gets released for some application: 50%
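As promised above, here’s a minimal sketch of the run-all-its-decisions-by-you pattern; `propose_action` and `execute` are hypothetical stubs standing in for a real model call and real email/browser plumbing, not any actual product’s API.

```python
# Sketch of an action-transformer assistant with a human approval gate:
# the model proposes the next step; nothing runs without explicit sign-off.
# propose_action() and execute() are hypothetical stubs, not a real API.

def propose_action(goal: str, history: list[str]) -> str:
    """Ask the language model for the next step toward the goal (stubbed)."""
    return f"step {len(history) + 1} toward: {goal}"

def execute(action: str) -> str:
    """Perform the approved action via email/UberEats/etc. plumbing (stubbed)."""
    return f"did: {action}"

def assistant_loop(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        action = propose_action(goal, history)
        # The reliability gate: a human vetoes anything like telling the boss to f@#k off.
        if input(f"Approve {action!r}? [y/N] ").strip().lower() == "y":
            history.append(execute(action))
    return history

# assistant_loop("respond to all my emails")
```

Expanding autonomy from there just means relaxing the approval gate for categories of action that have earned enough nines of reliability.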
CONQUEST OF DIGITAL MEDIA: Can we make an AI that will create a full-length major motion picture to your specifications? IE you give it $2, say “make a Star Wars / Star Trek crossover movie, 120 minutes” and (aside from copyright concerns) it can do that? What about “code me an Assassin’s-Creed-quality first-person shooter game, with muskets, set in the Revolutionary War?” I don’t think we’ll get quite that far in five years, but I think maybe “short cartoony YouTube clip” or “buggy app-style game” could be possible.
- AI can make a movie to your specifications: 40% short cartoon clip that kind of resembles what you want, 2% equal in quality to existing big-budget movies.
- AI can make deepfake porn to your specifications (eg “so-and-so dressed in a cheerleading costume having sex on a four-poster bed with such-and-such”): 70% technically possible, 30% chance actually available to average person.
- AI does philosophy: 65% chance it writes a paper good enough to get accepted to a philosophy journal (doesn’t have to actually be accepted if everyone agrees this is true)
- AI can write poetry which I’m unable to distinguish from that of my favorite poets (Byron / Pope / Tennyson): 70%
SCIENTIFIC RESEARCH: This would be the big one. I think AI will take a long time to conquer fields like biology which involve loops with the physical world (ie you have to put stuff in a test tube and see what happens); even if there are robot test-tube-fillers, anything that has to happen on a scale of seconds or minutes is fatal to AI training. But it wouldn’t surprise me if there are subfields of scientific research that tool AIs can do at superhuman levels; some aspects of drug discovery are already in this category. It’s just a matter of finding the exact right field and product. I think of AI research this way too; it won’t be trivial to make AIs design other AIs, because they still have to train them (a step that takes longer than a few seconds) and see how they work. But maybe some aspects of the process can be sped up.
- There is (or seems about to be) a notable increase in new drug applications to the FDA because of AI doing a really great job designing drugs: 20%
- Something else in scientific research at least that exciting: 30%
SOCIAL IMPLICATIONS: Everyone who hasn’t been looking at Bing screenshots the past week is light-years behind on thinking about this. AIs are really convincing! And likeable! Lots of people who didn’t have “get tricked into having emotions about AIs” on their list of possible outcomes are going to get tricked into having emotions about AIs. I don’t know if this will actually have any implications. Some people who want friends or romantic partners will get AI versions of these things, but even the usual type of online friend / long-distance relationship isn’t as good as IRL friends / short-distance relationships for most people, and AIs will be a step below even that. I think it will change society some but not overwhelmingly. I’m worried that smug self-righteous gatekeeper types will get even louder and more zealous in their underestimation of AI intelligence (“it’s just autocomplete!”) to feel superior to the people who say their AI girlfriend is definitely sentient. These people usually get what they want and this might have negative effects on society’s ability to think about these issues.
- At least 350,000 people in the US are regularly (at least monthly) talking to an AI advertised as a therapist or coach. I will judge this as true if some company involved reports numbers, or if I hear about it as a cultural phenomenon an amount that seems proportionate with this number: 5%
- At least 350,000 people in the US are regularly (at least weekly) talking to an AI which they consider a kind of romantic companion. I will judge this as true if some company involved reports numbers, or if I hear about it as a cultural phenomenon an amount that seems proportionate with this number: 33%
POLITICAL IMPLICATIONS: I think there will be more of a movement to ban or restrict AI. I think people worried about x-risks (like myself) will have to make weird decisions about how and whether to ally with communists and other people I would usually dislike (assuming they would even let us into the coalition, which seems questionable). I think there will be some pointless bills that say they’re regulating AI which actually do nothing.
- AI not-say-bad-words-ists treat AI not-kill-everyone-ists as (clear allies/clear enemies/it’s complicated): 25% / 35% / 40%
- AI is a (bigger/equal/smaller) political issue than abortion: 20% / 20% / 60%
AGI: This is a meaningless term. Some AIs may or may not satisfy some people’s criteria for AGI by 2028; if so, it will get an article in some tech publication but otherwise pass unnoticed. This doesn’t mean AGI won’t be a big deal, just that there won’t be a single moment when we obviously have it and everything changes.
ALIGNMENT: I don’t think AI safety has fully absorbed the lesson from Simulators: the first powerful AIs might be simulators with goal functions very different from the typical Bostromian agent. They might act in humanlike ways. They might do alignment research for us, if we ask nicely. I don’t know what alignment research aimed at these AIs would look like and people are going to have to invent a whole new paradigm for it. But also, these AIs will have human-like failure modes. If you give them access to a gun, they will shoot people, not as part of a 20-dimensional chess strategy that inevitably ends in world conquest, but because they’re buggy, or even angry. I think we will get plenty of fire alarms, unless simulators turn out to be a flash in the pan and easily become something else (either because humans have developed a more effective capabilities paradigm, or because some simulator AI autogenerates an agent by accident). I think this is probably our best hope right now, although I usually say that about whatever I haven’t yet heard Eliezer specifically explain why it will never work.
POLITICS/CULTURE: I think 2020 will have been a low point; things won’t get that bad and violent again in the next five years. Wokeness has peaked - but Mt. Everest has peaked, and that doesn’t mean it’s weak or irrelevant or going anywhere. Fewer people will get cancelled, but only because everyone has settled into an equilibrium where they know what the cancellable opinions are and don’t say them (or because everyone with a cancellable opinion has already been removed, or was never hired in the first place). These kinds of legacy social movements that have lost the mandate of heaven do decay and decline eventually, but it could take decades.
BIOLOGY: My model is something like: start with 1% risk of artificial pandemic catastrophe per decade in 1985, double every ten years. We’re up to about 8-16% per decade for the 2020s, so about halve that for the 2023-2028 period. By “catastrophe” I mean “worse than COVID”. I’ve been overall disappointed with advances in genetics and I don’t expect anything more interesting than one or two last-ditch treatments for rare diseases, if that. IVG probably advances but not enough to make front-page news.
- Artificial biocatastrophe (worse than COVID): 5%
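Making the doubling model explicit (a toy calculation; every number in it is a guess, per the paragraph above):

```python
# Toy version of the model: 1% per decade in 1985, doubling every ten years.
risk_1985 = 0.01
for year in range(1985, 2026, 10):
    per_decade = risk_1985 * 2 ** ((year - 1985) / 10)
    print(f"{year}: {per_decade:.0%} per decade")  # 1985: 1% ... 2025: 16%

# Converting roughly 16% per decade to the five-year 2023-2028 window:
per_half_decade = 1 - (1 - 0.16) ** 0.5
print(f"2023-2028: ~{per_half_decade:.0%}")  # ~8%; halving the 8-16% band gives
                                             # 4-8%, bracketing the 5% above
```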
INTERNATIONAL: IDK, I don’t expect a Taiwan invasion. Generally bearish on China for the usual reasons: I just think they’ve built up too much debt (literal and metaphorical), have a demographic time bomb, it’s always hard to come down from the high of fast growth, and even though their mixed centralized-ish model worked well before, I think Xi is a significant change towards traditional dictatorship which doesn’t work as well. I don’t expect this to produce any obvious explosion or disaster for them before 2028 though. I expect Ukraine and Russia to figure out some unsatisfying stalemate before 2028, followed by massive growth in Ukraine (usually happens post-war, they’ll probably get favorable terms from lots of other countries including an EU admission deal, they’re overdue for a Poland-style post-communist boom).
- Ukraine war cease-fire: 80%
ECONOMICS: IDK, stocks went down a lot because of inflation, inflation seems solvable, it’ll get solved, interest rates will go down, stocks will go up again? In terms of crypto, I’ll repeat what I said in my last crypto post: people have found some good applications for stablecoins, especially in foreign countries and for niche transfers by large actors. I expect that to continue, maybe expand, and in that sense I’m bullish, but all of this will get regulated to the point of total boringness. Ethereum will do fine because stablecoins are built on its chain, Bitcoin will do fine because Bitcoin maximalists are like cockroaches and even a nuclear war couldn’t kill them, altcoins will mostly not do fine. There will still be some exciting applications for solving coordination problems and protecting privacy, but they will be limited to the same niche groups of cypherpunks who cared about these things before cryptocurrency, and mostly not change the world. An exceptionally good result within this window would look like the same kind of niche that Signal has for communication.
GENERAL: I think my decision to devote more space to AI than to all non-AI-related things combined will look prescient, even if my explicit predictions are wrong.