This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. ACX has an unofficial subreddit, Discord, and bulletin board, and in-person meetups around the world. 95% of content is free, but for the remaining 5% you can subscribe here. Also:

1: Thanks to everyone who entered the Prediction Contest; entry is now closed. You can continue to make predictions on Manifold or Metaculus, but they won’t officially count. Also, another prediction market, Futuur, has markets up for the contest questions. I’m pretty excited about this, because although Futuur, like Manifold, lets you use play money, it also offers real-money betting (warning: requires crypto and a non-US IP). If you want to make real-money bets on contest questions, now you can (and I’ll be seeing how they compare to the play-money markets).

2: In case you missed it: Berkeley meetup this Tuesday, special guest Daniel Ingram.

3: Comment of the week: Meropenem fills in more details about the Cadegiani case mentioned in my ivermectin article.

4: AI alignment org MIRI is trying to build a dataset for training AI systems. They need lots of examples of a very specific type of RPG-style story with careful explanations, and will pay $100 for good first attempts and maybe hire you to produce more. Please see https://intelligence.org/visible/ for more.

5: ACX Grants update: You may remember Lars Doucet from his guest posts on Georgism. Last year, he and Will Jarvis received an ACX Grant to work on land value assessment technology that might make land value taxes more tractable and appealing. They’re happy to announce that this has turned into a startup, ValueBase, which raised $1.6 million in seed funding. Congratulations to Lars, Will, and the ValueBase team on what I think is the second ACX Grants project to become a $1 million+ company.

6: Speaking of Lars - I tried to credit Philosophy Bear as someone who had beaten me to writing about the chatbot propaganda apocalypse, but I didn’t realize Lars had also discussed it on his blog - see AI: Markets For Lemons, And The Great Logging Off. I like his post because, unlike Bear’s piece or my response to it, it doesn’t approach the problem through a political lens, and so mostly just expects “spam, but worse” - which I think is broadly right, but which I didn’t emphasize enough earlier.

7: And you can bet on both Lars’ and my predictions about the chatbot propaganda apocalypse on Manifold. For example: