This is the weekly visible open thread. Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is odd-numbered, so be careful. Otherwise, post about anything you want. Also:

1: The [AI] Alignment Research Center is running the Eliciting Latent Knowledge contest. They’re awarding between $5,000 and $50,000 (and maybe also job offers) to anyone who can come up with clever ways to get an AI to tell the truth in a contrived, hard-to-understand fictional scenario involving a diamond theft. The contest is secretly an attempt to get people into the pipeline of learning about ARC’s ideas and seeing if they’re a good fit for alignment research, and as such, ARC says they’re extremely open to dumb questions, requests for clarification, requests to be walked through certain things, etc.

Mark Xu of ARC says he would consider someone a “good fit” for alignment research “if they started out with a relatively technical background, e.g. an undergrad degree in math/cs, but not really having engaged with alignment before” and were able to really understand the question in 10-20 hours and have a plausible answer in another 10.

You can read about the contest here, and you can read Holden Karnofsky’s pitch for doing it (and his attempt to summarize the question) here.