I have an article summarizing attempts to forecast AI progress, including a five-year check-in on the predictions in Grace et al. (2017). It’s not here; it’s at asteriskmag.com, a rationalist / effective altruist magazine: Through A Glass Darkly. This is their AI issue (it’s not always so AI-focused). Other stories include:

  • Crash Testing GPT-4: Before releasing GPT-4, OpenAI sent a preliminary version to the Alignment Research Center to test it for unsafe capabilities; the detail that made the news was how the AI managed to hire a gig worker to solve CAPTCHAs for it by pretending to be a blind person. Asterisk interviews Beth Barnes, leader of the team that ran those tests.

  • What We Get Wrong About AI And China: Professor Jeffrey Ding discusses the Chinese AI situation. If I’m understanding right, his position is that China is 1-2 years behind the US, but that this number understates the size of the gap: even if the US stopped innovating today, China wouldn’t necessarily pull ahead within 3 years. Today’s Marginal Revolution links included a claim that a new Chinese model beats GPT-4; I’m very skeptical and waiting to hear more.

  • The Transistor Cliff: Sarah Constantin on the future of microchips. Most predictions about the future of AI center on the idea that lower compute costs → bigger training runs → smarter models. But how sure are we that we can keep decreasing compute costs indefinitely? Will we reach physical limits or memory bottlenecks? What if we do?

  • A Debate About AI And Explosive Growth: Tamay Besiroglu vs. Matt Clancy. Will AI be just another invention that is probably good for the economy but leaves GDP trajectories overall unchanged? Or will it create a technoeconomic singularity leading to “impossibly” fast economic growth? A good follow-up to my recent Davidson On Takeoff Speeds. I don’t think they emphasized enough the claim that the natural trajectory of growth had long been trending towards a singularity in the 2020s, that we only started deviating from that natural trajectory around 1960, and that we’re just debating whether AI will restore the natural curve rather than whether it will do some bizarre unprecedented thing that we should have a high prior against.

Plus superforecaster Jonathan Mann on whether AI will take tech jobs, Kelsey Piper on the different camps within AI safety, Michael Gordin on how long until Armageddon (surprisingly not AI related!), Robert Long on what the history of debating animal intelligence tells us about AI intelligence, Avital Balwit on the technical aspects of regulating AI compute, Carl Robichaud on how we (sort of) succeeded at nuclear non-proliferation, and Jamie Wahls’ short story about chatbot romance.

Congratulations again to Clara, Jake, and the rest of the Asterisk team! As always, you can subscribe here.