Open Thread 298
This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. ACX has an unofficial subreddit, Discord, and bulletin board, and in-person meetups around the world. 95% of content is free, but for the remaining 5% you can subscribe here. Also:
1: Sorry, I scheduled the Berkeley fall ACX meetup for next Saturday, but my schedule has changed and I won’t be able to go. Meetups Czar Skyler and lots of other great people will still be there, so have fun without me!
2: One of the recent impact market forecasting projects, OPTIC, asks me to broadcast the following appeal:
OPTIC is announcing intercollegiate forecasting tournaments in SF, DC, and Boston. Think 1-day hackathon/olympiad/debate tournament, but for forecasting the future — teams predict on topics ranging from geopolitics to celebrity twitter patterns to financial asset prices, and the best forecasters get thousands of dollars in cash prizes and exclusive internships at Metaculus.
Day of, teams give probabilistic predictions for a couple hours on about 30 given questions (with breaks for lunch & speakers). Teams are 3-5 competitors, and we’ll place you on a team if you don’t already have one in mind. A few months after the tournament, all questions resolve and winners are announced/awarded: until then, you can track how your team is doing in real time.
Tournaments will be run in the Bay Area (November 4), DC (November 18) and Boston (December 2). Register here (3-7 min)!
You can look over our previously used questions, check out our FAQ for more details, and always feel free to reach out!
3: Updates on the AI pause debate:
- Holly Elmore and PauseAI are holding pro-pause protests October 21 in eight cities around the world, including San Francisco.
- Quintin Pope has some good posts on X, including a debate with Liron Shapira and this explanation of where he parts ways with older AI safety paradigms.
- Evan Hubinger argues that Responsible Scaling Policies Are Pauses Done Right
- The Centre for the Governance of AI has a paper on Coordinated Pausing: An Evaluation-Based Coordination Scheme for Frontier AI Developers
4: Steve Hsu asks me to link his appeal for why you should support the Study Of Mathematically Precocious Youth; go here to donate.
5: The New York Times recently published an article about the Manifest prediction market conference. I think it’s overall very good, and appreciate the care that the reporter put in to understanding the ideas (plus the frankly majestic picture of the Manifold co-founders). I do want to correct one paragraph, though:
The Rationalist revival has put wind into the sails of start-ups like Manifold Markets, which was initially funded by a grant program run by Astral Codex Ten, a Rationalist blog that has promoted prediction markets. (It also received $1 million from the FTX Future Fund, the philanthropic arm of the bankrupt crypto exchange whose founder, Sam Bankman-Fried, is a fan of prediction markets.)
I think a natural reading of this sentence is that Astral Codex Ten received $1 million from the FTX Future Fund. Some people who read the article said they understood it this way and thought I took FTX money. I didn’t. The article meant to say that Manifold did.
I appreciate NYT moving from its previous policy of blatant and deliberate falsehoods about me, to a newer, kinder policy of accidental and ambiguous falsehoods about me. That’s the first step towards not publishing any falsehoods about me at all! Still, I want to set the record straight.
A related NYT podcast also discussed Manifest and prediction markets; see here for a partial transcript.