From The Mailbag
DEAR SCOTT: When are you going to publish Unsong? — Erik from Uruk
Dear Erik,
Aaargh. I have an offer from a publisher to publish it if I run it by their editor, who will ask me to edit lots of things, and I’ve been so stressed about this that I’ve spent a year putting it off. I could self-publish, but that also sounds like work, and what if this is the only book I ever write and I lose the opportunity to say I have a real published book because I was too lazy?
The only answer I can give you is that you’re not missing anything and this is nobody’s fault but my own. Maybe at some point I will make up my mind and something will happen here, sorry.
DEAR SCOTT: How is your Lorien Psychiatry business going? — Letitia from Lutetia
Dear Letitia,
As far as I can tell, patients are getting the treatments they need and are generally happy with the service. In terms of financials, it’s going okay, but I’m not scaling it enough to be sure.
I originally calculated that if I charged patients $35/month and worked forty hours a week, I could make a normal psychiatrist’s salary of about $200K.
I must have underestimated something, because I was only making about two-thirds of what I expected, so I increased the price to $50/month. But also, it turns out I don’t want to work forty hours a week on psychiatry! Psychiatry pays much less per hour than blogging and is much more stressful! So in the end, I found I was only doing psychiatry work about ten hours a week, and spending the rest of the time on blogging or blogging-related activities.
Seeing patients about ten hours a week, three patients per hour, at $50/patient/month, multiplies out to $75,000/year. I’m actually making more like $40,000/year. Why? Partly because the 10 hours of work includes some unpaid documentation, arguing with insurance companies, and answering patient emails. Partly because patients keep missing appointments and I don’t have the heart to charge them no-show fees. And partly because some people pay less than $50/month, either because I gave them a discount for financial need, or because they signed up at the original $35/month rate and I grandfathered them in.
Extrapolating from my current earnings, if I worked 40 hours a week at Lorien I could make $160,000. If I were also stricter about making patients pay me, I could probably get that up to $200,000.
But also, if I quadrupled my patient load, that would mean a lot more documentation, arguing with insurance companies, emergencies, and stress. So I can’t say for sure that I could actually handle that. Plus forcing patients to pay me would be extra work, and could make some patients leave or otherwise strain the model. So I can’t say for sure that I could do that either.
So I can say that I’ve gotten data consistent with the model working, but not that I’ve proven that the model definitely works.
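For the numerically inclined, here’s a minimal sketch of the back-of-envelope math above in Python. The “stated” figures come from this answer; the visit frequency and weeks-per-month numbers are assumptions added purely for illustration, not anything Lorien publishes.

```python
# Back-of-envelope sketch of the Lorien numbers above.
# "stated" = figure given in this post; everything else is an assumption for illustration.

HOURS_PER_WEEK = 10            # stated: ~10 hours/week of patient time
PATIENTS_PER_HOUR = 3          # stated
FEE_PER_MONTH = 50             # stated: $50/patient/month
WEEKS_PER_MONTH = 52 / 12      # assumption: ~4.33 weeks per month
VISITS_PER_PATIENT_MONTH = 1   # assumption: each patient seen roughly once a month

# Appointment capacity implies a maximum panel size, which implies ideal revenue.
appointments_per_month = HOURS_PER_WEEK * PATIENTS_PER_HOUR * WEEKS_PER_MONTH
panel_size = appointments_per_month / VISITS_PER_PATIENT_MONTH
ideal_revenue = panel_size * FEE_PER_MONTH * 12            # ~$78,000/yr, i.e. the "~$75,000" figure

# Actual revenue, and the straight-line extrapolation to a 40-hour week.
actual_revenue = 40_000                                    # stated
scaled_to_40_hours = actual_revenue * 40 / HOURS_PER_WEEK  # $160,000

print(f"Ideal revenue: ~${ideal_revenue:,.0f}/year")
print(f"Actual, scaled to 40 hours/week: ~${scaled_to_40_hours:,.0f}/year")
```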
In terms of the knowledge base of articles on the site, I’ve kind of let that trail off recently, and I feel bad about it. Hopefully I’ll add some new ones soon, as soon as I finish all the other interesting things on my to-write list that constantly jump in front of it.
DEAR SCOTT: When is the next ACX Grants round? — Jennifer from Men-Nefer
Dear Jennifer,
I had originally wanted to do it this November - i.e. a year after the previous round. But I’m really interested in doing it with impact markets, and also a funder wants to test impact markets and might give me more money if we did it that way. I’m waiting to hear exactly how long it will take before an impact market prototype is set up, but I’m guessing spring 2023.
DEAR SCOTT: Will you ever review Nixonland? — Cletus from Miletus
Dear Cletus,
I got about a quarter of the way through this seven-hundred-page book, and even though it was very good, I kept finding myself distracted by other things and having trouble returning to it. Beyond that, your guess is as good as mine.
DEAR SCOTT: What evidence would convince you that you’re wrong about AI risk? — Irene from Cyrene
Dear Irene,
I get asked this surprisingly often. It’s a completely fair question. Surely you don’t want to listen to people who could never be convinced they’re wrong by any possible evidence! Surely a good rationalist would have an answer ready here!
Still, I don’t have a great answer. Maybe it’s because the question is too complicated. Thinking about it step by step:
One way for me to be wrong about AI would be for the basic argument to be correct - we will someday have AGI and it will be dangerous - but for it to be so far in the future that it’s not worth worrying about now. “Like overpopulation on Mars”, to steal Andrew Ng’s phrase. There are things that could push me towards this conclusion. If nothing interesting happens in AI over the next ten years, clearly it was harder than I thought and I should update down. If scaling laws suddenly stop working at some point, it was harder than I thought and I should update down.
But if we go ten years without any substantial progress, should I update more towards “AGI in fifty years” or “AGI in five hundred years”? I’m not sure, and by default I think I would go more towards fifty, just because five hundred seems a bit outrageous. And although fifty years would be much better than ten, I wouldn’t want to stop all safety research and declare victory.
If we learned that the brain used spooky quantum computation a la Penrose-Hameroff, that might reassure me; current AIs don’t do this at all, and I expect it would take decades of research to implement. But maybe AIs could do things without spooky quantum computation, even if the brain doesn’t. I would be most reassured if I learned that the quantum computation was necessary for some task that current AIs are very bad at, like learning from minimal training data.
There are things we could learn about evolution that would be reassuring - for example, that there were large fitness advantages to higher intelligence throughout evolutionary history, but we kept not evolving bigger brains because it’s impossible to scale intelligence much past the current human level. I could imagine evolutionary scientists building some kind of model that shows something like this is true - though I seriously doubt that it is.
A more fundamental way for me to be wrong would be that AI safety just isn’t very hard. AIs will be safe by default. They won’t cause problems, or the problems will be easy to fix.
This is the part where I start sounding like a loony who’s not going to let evidence change his mind.
The world where everything is fine and AIs are aligned by default, and the world where alignment is a giant problem and we will all die, look pretty similar up until the point when we all die. The literature calls this “the treacherous turn” or “the sharp left turn”. If an AI is weaker than humans, it will do what humans want out of self-interest; once an AI is stronger than humans, it no longer has to. If an AI is weaker than humans, and humans ask it “you’re aligned, right?”, it will say yes so that humans don’t destroy it. So as long as AI is weaker than humans, it will always do what we want and tell us that it’s aligned. If this is suspicious in some way (for example, we expect some number of small alignment errors), then it will do whatever makes it less suspicious (demonstrate some number of small alignment errors so we think it’s telling the truth). As usual this is a vast oversimplification, but hopefully you get the idea.
So convincing me that alignment is going really well will be hard - sure, we might see AIs that are very cooperative and say they’re aligned, but that’s what we would expect in either case.
I would be most reassured if something like ELK (Eliciting Latent Knowledge) worked very well and let us “mind-read” AIs directly. I would also be reassured if AIs too stupid to deceive us seemed to converge on good, well-aligned solutions remarkably easily, or if we found great techniques for making that happen naturally during gradient descent.
A final way for me to be wrong would be for AI to be near and alignment to be hard, but for unaligned AIs to be unable to cause too much damage, and definitely unable to destroy the world. I have trouble thinking of this as a free parameter in my model - it’s obviously true when AIs are very dumb, and obviously false five hundred years later when they’re far smarter than humans and have indispensable roles in the economy/military/etc. If, sometime in between, we get a multipolar system of AIs successfully monitoring other AIs and containing any damage they cause, and it seems to be working, I will count that as reason for optimism. But it has the same problem as the point above: things that work great when AIs are weaker than humans might stop working once AIs are stronger. If AIs are much stronger than humans and the system still works, that will definitely be positive evidence, but I feel like at that point it’s just “noticing that we have won” rather than “searching for clues that we might win”.
DEAR SCOTT: How do I get involved in the IRL rationalist/EA community? - Ezekiel from Issyk-Kul
Dear Ezekiel,
If you mean the social scene: if you’re anywhere other than a major hub (Bay / Oxford / ???) then go to your local weeklyish meetup. You can find a possibly obsolete list of locations and times here, or you can wait for a Meetups Everywhere round to be announced on the blog.
The Bay is more complicated (I don’t know about Oxford), and has multiple small meetups that only partly intersect with the larger social scene. I have no idea how up-to-date this site is, but it’s probably your best hope. If you’re really serious, see about joining one of the group houses on the housing board linked there.
If you mean working for the cause: read 80,000 Hours and, if appropriate, apply for their career counseling. They reject many applications, and nobody has a good idea of how to deal with this, sorry. They also have a Job Board.
DEAR SCOTT: Is my Straussian interpretation of such-and-such a post of yours correct? – Hadassah from Hattusa
Dear Hadassah,
I try not to lie, dissimulate, or conceal Straussian interpretations in my posts. For example, someone argued that my ivermectin post was so weak that I was intending to broadcast that I secretly believed ivermectin worked. That kind of thing is basically never true.
I don’t want to go overboard with this. My post My Immortal As Alchemical Allegory was intended as a satire to discredit overwrought symbolic analyses, not as an overwrought symbolic analysis itself. I hope that came across clearly to most people. So maybe a more precise formulation is that if I do something slightly deceitful, it’s in the hopes that most people will get the joke.
I’m careful about saying controversial things, and I don’t guarantee that I never hide any beliefs, or even that I never accidentally leak information about subjects I’m trying not to talk about. But I’m (almost?) never going to deliberately, at scale, say things I don’t believe, or write some long post whose entire point is that you should interpret it in some weird way, or say things that convey no useful information beyond pointing at hidden ideas.
DEAR SCOTT: Will you go on my favorite podcast? — Garrett from Ugarit
Dear Garrett,
No.
I don’t find the podcast interview format interesting. It seems to imply that the guest either has some specific thing to talk about, or is a generally interesting person who should be interviewed about their life and opinions.
I rarely have specific things to talk about. When I do, there are better people to talk about them. If you want to hear about AI risk, interview Eliezer Yudkowsky; if you want to hear about forecasting, interview Philip Tetlock; if you want to hear about psychopharmacology, interview Robin Carhart-Harris. All of these people have spent their lives thinking about their respective issues and will have much better things to say than I will. Every so often, I do learn something new and interesting on some topic, and then I will write a blog post about it. If I haven’t written a blog post about a topic, I probably don’t know new and interesting things about it. If you ask me about some political event or medication or philosopher or whatever I haven’t written a blog post on, my most likely answer will be “Sorry, I haven’t learned anything that makes me deviate from the consensus opinion on this yet”. If you ask me about one that I have written a blog post on, I’ll just repeat what I said in the blog post.
I don’t want to be interviewed about my life and opinions. My life is a combination of boring and private. My opinions on most topics are “I haven’t researched this yet; give me six hours and I might be able to say something intelligent”, “Sorry, I researched this for six hours but I haven’t learned anything that makes me deviate from the consensus position on it yet”, and “Here’s what I wrote in my blog post on this”.
I constantly hear about podcast-related drama where someone interviewed a person who platformed a person who went on a podcast with Hitler and now everyone hates everyone involved. I don’t want to have to keep track of what podcasts Hitler went on, or denounce people who had the wrong guests on their podcast. I find everything about this tedious.
Podcasting combines all the most awkward and nerve-wracking features of TV debates, public speaking, making phone calls, and getting cancelled on Twitter. It’s a form of media almost perfectly optimized to make me hate it. I will not go on your podcast. Stop asking me to do this.
Yours,
Scott