Highlights From The Comments On Criticism Of Criticism Of Criticism
1: I said in the original post that I wrote this because I knew someone would write the opposite article (that organizations accept specific criticism in order to fend off paradigmatic criticism), and then later Zvi did write an article kind of like that. He writes:
It is the dream of anyone who writes a post called Criticism of [a] Criticism Contest to then have a sort-of reply called Criticism of Criticism of Criticism.
The only question now is, do I raise to 4?
I [wrote my article the way I did] for several reasons, including (1) a shorter post would have taken a lot longer, (2) when I posted a Tweet instead, a central response was ‘why don’t you say exactly what things are wrong here’, (3) any one of them might be an error, but if basically every sentence/paragraph is doing the reversal thing you should stop and notice it and generalize it, (4) you talk later about how concrete examples are better, so I went for concrete examples, (5) they warn against ‘punching down’ and this is a safe way to ‘punch up’ without having to do infinite research, (6) when something is the next natural narrative beat, that goes both ways, (7) things are next-beats for reasons and I do think it’s fair that most Xs in EA’s place that do this are ‘faking it’ in this sense, (8) somehow people haven’t realized I’m a toon and I did it in large part because it was funny and had paradoxical implications, (9) I also wrote it out because I wanted to better understand exactly what I had unconsciously/automatically noticed.
For 7, notice in particular that the psychiatrists are totally faking it here: they are clearly being almost entirely performative, and you could cross out every reference to psychiatry, write in another profession, and find the same talks at a different conference. If someone decided not to understand this and said things like ‘what specific things here aren’t criticizing [X]’, you’d need to do a close reading of some kind until people saw it, or come up with another better option.
Also note that you can (A) do the thing they’re doing at the conference, (B) do the thing where you get into some holy war and start a fight, or (C) actually question psychiatry in general (correctly or otherwise), but if you do that at the conference people will mostly look at you funny and find a way to ignore you.
2: The anonymous original reviewer of Anti-Politics Machine wrote:
I have a lot of thoughts on this that I do not have time to write up properly, but I do think you’re kind of missing the point of this kind of critique. “The Anti-Politics Machine” is standard reading in grad-level development economics (I’ve now had it assigned twice for courses) – not because we all believe “development economics is bad” or “to figure out how to respond to the critiques” but because fighting poverty / disease is really hard and understanding ways people have failed in the past is necessary to avoid the same mistakes in the future. So we’re aware of the skulls, but it still takes active effort to avoid them and regularly people don’t. My review gave a handful of ideas to change systems based on this critique, but in a much more fundamental way these critiques shape and change the many individual choices it takes to run an EA or development-style intervention.
RCTs are a hugely powerful tool for studying charitable interventions, for all the reasons you already know. But when you first get started, it’s really easy to mistake “the results of an RCT” for “the entire relevant truth”, which is the sort of mistake that can massively ruin lives (or waste hundreds of millions of dollars) if you have the power to make decisions but not the experience to know how to interpret the relevant evidence. I wrote the review not to talk people out of EA (I like EA and am involved in an RCT I think will really help add to our knowledge of how to do good!) but because I think being aware of this kind of shortcoming and when to look out for it is necessary to put the results of RCTs in context and use them in a way that’s more responsible than either “just go off vibes” or “use only numerical quantitative information and nothing else”.
To say this more clearly — I think the criticisms of EA you’re writing about already are the sorts of critique you want, but you’re a little too removed from development work to see how they’re operationalized.
And added:
Update: after reading Ivo’s comment below, I reread Scott’s argument and think my initial reading was somewhat defensive.
I think part of the problem here is that a lot of general criticisms are based on this sort of specific failing, but get expressed in general terms because you can’t expect a general audience to be familiar with the specifics of 1970s World Bank initiatives in Lesotho. So in some sense I think my review was motivated by a less fleshed out version of Scott’s take here — a desire for people to know an example of the sort of specific failure the general critiques (at least those coming from inside dev Econ) have in mind.
From the inside, a lot of these critiques come with a lot of context (eg examples of what we mean by problems with individualism in development or the need for taking cultural elements into account) that are well understood by the people making the claims but hard to communicate in a forum post (“read these eleven books” is not a good communication method). So I think there are two conversations going on — people with field-specific expertise talking to one another in ways that are clear to them, and outsiders (EAs without firsthand experience in the dev field) trying to make sense of them without the assumed background context. (A lot of these arguments seemed dumb to me until I started taking grad level development courses and built up more of the assumed background knowledge.) I’m not sure what the solution here is, because it seems like making these arguments in the way Scott is asking for (so that outsiders have all the context necessary to know what’s being asked for / critiqued) would extend these from forum posts to several-hundred-page technical books.
I think all of this is fair and agree with all of it.
3: A surprising amount of discussion focused on the perihelion of Mercury example in particular! For example, archpawn writes (my emphasis):
“It’s insufficiently elegant” was how Einstein figured out the true theory. “Its estimate for the precession of the orbit of Mercury is off by forty-three arc-seconds per century” is just how Einstein was able to convince other scientists. Of course, outside of math and physics, looking for elegance won’t get you very far.
Dirichlet-to-Neumann writes:
The Michelson-Morley* experiment from the 1880s also showed there were problems with classical Newtonian dynamics (and there was the problem of the incompatibility between Maxwell’s equations and Newton’s paradigm on the theoretical side).
This would have been enough for at least special relativity, and once you have special relativity, general relativity is the logical next step in your investigations. I don’t think the “no Mercury anomaly” timeline changes by much.
*The Michelson-Morley experiment set out to measure the speed of light using one of the coolest measuring tools ever, an interferometer. The problem was that the measured speed of light did not change when the interferometer was moving.
Chaostician writes:
The perihelion drift of Mercury was a neat problem that relativity solved, but it was not a major motivating factor for Einstein. It had an explanation within the existing paradigm: when there’s a surprising perihelion drift, there’s probably another planet out there. That’s how we predicted Neptune’s existence. Astronomers thought there was another planet closer to the sun than Mercury, which they named Vulcan.
There were other experimental problems too. The Michelson-Morley experiment was supposed to measure how fast the Earth was moving relative to the ether. It measured that the Earth was not moving relative to the ether at all. At the very least, the experiment should have been able to see Earth’s orbital motion, which points in a different direction at different times of the year.
This isn’t even the worst prediction of the old paradigm: the Blazing Sky Paradox / Olbers’ Paradox. The universe was thought to be infinitely large and infinitely old, with matter approximately uniformly distributed at the largest scales (the Copernican Principle). Any line of sight should eventually hit a star. Work out the math and the entire sky should be as bright as a sun all the time. This contradicts our observation that the sky is dark at night. This paradox was eventually resolved by accepting that the age of the universe is finite, as described by Lemaître’s and Hubble’s Big Bang theory.
If we read what Einstein wrote, none of these failed predictions actually motivated him to propose relativity. He instead cared more about questions like: what would it be like to chase a light wave? The electric and magnetic fields wouldn’t be changing, so they shouldn’t be creating each other, so the light wave wouldn’t exist. That’s ridiculous. So we’d better completely change our notions of space and time to make sure that this can’t happen. Einstein’s arguments actually are this audacious.
Einstein worked primarily through thought experiments. He would find experimental results afterwards to make his arguments more persuasive to other physicists. Even then, explaining a few obscure existing anomalies wasn’t enough to convince most physicists to change their notions of space & time. He had to make new predictions. Which he did: the path of light going by the sun is bent by its gravity. Eddington’s expedition to observe a solar eclipse confirmed this, and caused the paradigm shift to spread through the entire community.
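A footnote on the Olbers’ Paradox passage above, to spell out the math that “work out the math” gestures at (this is my gloss, the standard textbook shell argument, not anything from Chaostician’s comment): assume stars of average luminosity L are scattered with uniform number density n, and add up the light arriving from each spherical shell centered on the observer.

```latex
% Stars in a thin shell of radius r and thickness dr: n * 4*pi*r^2 * dr.
% Flux received from one star at distance r: L / (4*pi*r^2).
% Flux from the whole shell -- the r^2 factors cancel:
\[
dF = \left( n \cdot 4\pi r^{2}\,dr \right) \cdot \frac{L}{4\pi r^{2}} = nL\,dr
\]
% Every shell contributes the same amount, so an infinitely large,
% infinitely old universe delivers a divergent total flux:
\[
F = \int_{0}^{\infty} nL\,dr \to \infty
\]
```

A finite-age universe cuts the integral off at roughly the distance light has had time to travel, which is why the resolution the comment describes leaves the night sky dark.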
There was also some discussion of paradigms that seemed opposed at first getting synthesized - did you know that in the late 19th and early 20th centuries, people thought that Darwinian evolution was incompatible with Mendelian genetics? From Kenny Easwaran:
Consider the way that modern biology is said to be Darwinian even though in the late 19th and early 20th century, Mendelian genetics with its discrete units of heredity from a “gene pool” was thought to be this one idea that Darwinian theory could never accommodate, with its demand for constantly varying traits that bring species completely outside what they had been […]
To a Neo-Darwinian, Mendelism and Darwinism aren’t competitors, but they clearly were at the time - Darwin said traits had continuous variation around the traits of the parents, so that small differences can accumulate; Mendel said traits had binary variation, so that the only differences possible were those already in the gene pool. Once we understood that most traits were controlled by many genes, and that there are rare mutations in any of them, we were able to synthesize these.
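To make the synthesis Easwaran describes concrete (again my gloss, the standard polygenic-trait argument rather than anything in the comment): if a trait is the sum of many independent Mendelian loci, discrete inheritance at each locus still produces approximately continuous variation in the trait.

```latex
% Trait value as the sum of n biallelic Mendelian loci, each
% contributing X_i = 0 or 1 (allele present) with probability p:
\[
T = \sum_{i=1}^{n} X_i , \qquad X_i \sim \mathrm{Bernoulli}(p)
\]
% For large n, the binomial distribution of T is approximately
% normal, i.e. effectively continuous:
\[
T \approx \mathcal{N}\!\left( np,\; np(1-p) \right)
\]
```

So Mendel’s binary variation at each locus and Darwin’s continuous variation in observed traits stop contradicting each other once enough loci contribute, with rare mutations supplying the new alleles that let small differences accumulate the way Darwin required.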
4: Kronopath writes:
If your point is that specific, well-researched criticism is harder than general, paradigmatic criticism, I agree. I think that’s why you tend to see more of the latter, though much of it is low-quality.
If your point is that paradigmatic criticism (or this specific paradigmatic criticism) is without value, I strongly and specifically disagree.
I admittedly haven’t read any of the other entries, but I would be happy to see Zvi win (at least some of the prize pool of) this contest. I briefly considered entering this contest, but was put off for the same reasons he expresses in his post.
To distill what he’s trying to say: Imagine if the Catholic Church had an essay-writing contest asking people to point out the Church’s sins. But then, in the fine print, they strongly implied that they would be judging what counts as a sin based on the teachings of Jesus Christ, and that entries would be judged by a select group of Cardinals. That would drive away anyone trying to point out cases where their interpretations of Jesus’s teachings might be wrong, or where the teachings of Jesus don’t work on a fundamental level.
This is the same deal. The criticism contest asks for criticism, but then implies that it’s going to be judged within EA’s interpretation of utilitarianism, thus pushing away any potential criticism of the fundamentals.
Yeah, this is good, and now I’m wondering if I missed a more fundamental (less fundamental?) issue here.
Going back to the church example: suppose I’m a new pastor, I’m running my church for the first time, and I tell everyone I’m looking for criticism. Maybe I’m hoping to hear something like “you sing the hymns off-key” or “you speak too softly when you give your sermons, nobody can hear you” or “there’s never enough food for everyone at the church picnic”.
If someone instead tells me “Religion is fake and there is no God”, am I allowed to tell her I was looking for a different kind of criticism?
I agree that if God doesn’t exist, pastors should want to know this. But I also think it’s fair for pastors to say that currently the type of criticism they’re most interested in hearing is about how to run their church effectively, and they either have already decided the God question to their satisfaction, or will deal with it later in some other context.
I don’t think a pastor who asked for criticism/suggestions/advice about basic church-running stuff, but rolled his eyes when someone tried to convince him that there was no God, is doing anything wrong, as long as he specifies beforehand that this is what he wants, and has thought through the question of God’s existence at some point before in some responsible way.
So maybe my thoughts on the actual EA criticism contest are something like “I haven’t checked exactly what things they do or don’t want criticism of, but I’m prepared to be basically fine if they want criticism of some stuff but not others”.
5: Yuliaverse writes:
So far, I’ve assumed that many EA-aligned people welcome criticism because of a culture of (performative) openmindedness. I think the point this essay makes is better, though.
If criticism is vague enough, no one feels personally attacked – it’s easier to nod along. You can feel productive and enlightened while changing nothing. I’m not sure if that’s everything there is to it. What I am convinced of now is that being specific when criticising is valuable. Suddenly whatever you’re talking about becomes tractable.
I want to stress that I wasn’t making the point “EA is only faking wanting criticism, they actually just solicit paradigmatic criticism to escape specific criticism” - ie the exact inverse of the point I thought some people were making where “EA is only faking wanting criticism, they actually just solicit specific criticism to avoid paradigmatic criticism”.
I think that would be continuing to reason through narrative beats instead of through evidence: once someone establishes that an organization really likes criticism, the next beat demands that you argue this is just a fakeout and they really like [nonthreatening type of criticism] instead of [real criticism].
I think some organizations are actually just good, and genuinely solicit criticism because they want to know how to be better. This is what I would do, if I had a big organization. I know cynicism is popular now and we’re supposed to pretend that everyone is bad in every possible way, but my impression is that EA would really like to know if it’s doing something wrong.
(this isn’t to say they will necessarily agree with anyone else’s assessment, any more than the pastor is required to agree that there is no God).
6: Remmelt, who wrote the criticism post I called too paradigmatic in the original essay, writes:
Actually, [that post] was an attempt at clarifying common attentional/perception blindspots I had mapped out for groups in the community over the preceding two years. Part of that was illustrating how Glen Weyl might be thinking differently than thought leaders in the community.
But actually I was conversationally explaining a tool that people could use to map attentional/perceptual blindspots.
Try looking at the post (forum.effectivealtruism.org/posts/LJwGdex4nn76iA8xy/some-blindspots-in-rationality-and-effective-altruism) and piecing together:
- the psychological distances labelled I-IV throughout the post (where each distance is represented both over past and future, from the reference points of {now, here, this, my} respectively),
- along with approach vs. avoid inclination (eg. embody rich nuances from impoverished pigeonholes vs. decouple from the messy noise to elegant order)
- and corresponding regulatory focus over structure vs. process-based representations.
One thing I find a little frustrating about Scott’s selective depictions of the blindspots piece is that Scott seems to be interpreting the claims made as vague (definitely true in some cases) and as some kind of low-information signalling to others in the community to do the thing that is already commonly promoted as socially acceptable/good (mostly not true; I do think I was engaging in some signalling, both in feel-good-relate-with-diverse-humans stuff and in promote-my-own-intellectual-work stuff, but I felt quite some tension around posting this piece in the community; Scott’s response on individualism speaks for itself).
Whereas the perceptual and motivational distinctions I was trying to clarify are actually specific, somewhat internally consistent within the model I ended up developing, and took a lot of background research (core insights from dozens of papers) and many feedback rounds and revisions to get at.
Note also that I had not had a conversation with Glen when I wrote the post. In our first call, Glen said that the post roughly resonated for him (my paraphrase), but that he also thought it overlooked how much thought outside the community resembles EA/rationality concepts. Eg. he said that Hindu religious conceptions can also be very far in psychological distance and abstraction, meaning there is a diversity of human culture and thought that the blindspots post did not represent much.
7: Alex writes:
I feel like the example of paradigmatic criticism given in the article (how do we know reality is real, or that capitalism is good) is a bit of a straw man. I’ve always thought paradigmatic criticism of EA work was more about points like:
-Giving in the developing world, as EA work often recommends, is often used as a political tool that props up violent and/or corrupt governments, or has other negative impacts that are not easily visible to foreign donors
-This type of giving also reflects the foreign giver’s priorities, not the recipient’s
-This type of giving also strangles local attempts to do the same work and creates an unsustainable dependence on outsiders
-The EA movement is obsessed with imaginary or hypothetical problems, like the suffering of wild animals or AIs, or existential AI risk, and prioritizes them over real and existing problems
-The EA movement is based on the false premise that its outcomes can in fact be clearly measured and optimized, when it is trying to solve huge, complex, multi-factorial social issues
-The EA movement consists of newcomers to charity work who reject the experience of seasoned veterans in the space
-The EA movement creates suffering by making people feel that not acting in a fully EA-endorsed manner is morally bad.
This is the kind of criticism I would consider paradigmatic and potentially valid but also not, as far as I can tell, really embraced by EAs.
I agree that to some degree these are good paradigmatic criticisms.
One thing I get out of this is that all of these have been discussed a zillion times before, and if you still support EA it’s because you disagree with them for some reason or have some argument against them. Maybe one advantage of specific vs. broad criticism is that specific criticisms are more likely to be new, simply because there are so many details you can criticize? Or that they’re more likely to get debated rather than just create in-the-movement vs. out-of-the-movement fault lines, since you can’t schism your movement over every tiny detail?
8: Matthew Carlin writes:
I am not and have never been a Baha’i, but it’s worth sharing two elements of the Baha’i faith. Note that these are ideals, not necessarily practiced by the average adherent.
1) The Baha’i believe that Abrahamic religion is still evolving, and has not yet reached its perfect form. So Judaism, Christianity, Islam, and maybe even the Baha’i faith itself, are true but incomplete. In this sense the religion very much shares a good quality with EA: it is not settled, may never be settled, and is open to change and improvement.
2) The Baha’i prophet Baha’u’llah (the principal prophet, more or less like Muhammad) explicitly instructed his followers not to proselytize or force their faith on others. In practice this is often ignored, but it is also often followed; my two Baha’i friends did not tell me their religion, and when I found out and asked them about it, each one expressed reluctance to tell me unless I was genuinely interested of my own accord.
When I learned this latter property, it was perhaps the single most refreshing thing I ever learned about a religion. I can’t begin to tell you how refreshing it felt.
From that followed a frame of reference where I asked “would this thing be better without evangelism?”, and for me the answer was so often and so completely “yes” that I eventually stopped asking, with one exception I’ll treat at the end.
I have come to believe that people resist change in direct proportion to the magnitude of the push for change; hard push leads to hard heart. Moreover, while some people do yield and change and convert, as a side effect both the converter and the converted are left with a serious intensity, maybe permanently. Furthermore, most “we meant well but did evil” mistakes seem to come from this place of intensity.
So when I ask myself whether something would be better without evangelism, what I get back is basically always a form of this:
1) It would take 10-1000 times longer to complete the project.
2) Nobody would feel coerced into it.
3) Nobody would choose to hurt somebody to get the project done sooner.
4) Nobody would be left with strong feelings to continue “fighting” after the project reaches a natural conclusion.
Basically always the right call.
Aren’t you evangelizing non-evangelism here? I’m so angry about you cramming your non-evangelism down my throat that I’m going to evangelize one million times harder from now on, just to spite you!
This is actually seriously my point: there’s no clear line between expressing an opinion and evangelizing. Was I evangelizing about my view of criticism when I wrote the original blog post? But surely it’s my blog, you’re under no obligation to read it, and if I have good ideas I should be allowed to write about them. Heck, some of you guys pay me to write my interesting thoughts; that suggests they’re providing you with value.
Last month, when I posted why I thought that a recent crime wave was because of anti-police protests, was that evangelizing? If I were Baha’i, and I posted something about why I thought the Baha’i religion was true, using the same style of argument, would that be evangelizing? If I posted my interesting thoughts about charity in 2005, and those happened to be the exact same thoughts that later became effective altruism, would I have been evangelizing then? If I write a post now about why you should all join EA, would that be evangelizing now?
I don’t want to deny that there’s a real thing called evangelism - I hear EA has been doing some pretty organized recruitment work in colleges. But most of you aren’t college students. If you’ve heard about EA, it’s been on blogs like this one. So what are you complaining about?
I think there’s something where, whenever a philosophy makes unusual moral demands on people (eg vegetarianism), talking about it at all gets people accused of “evangelizing”, in a way unlike just talking about abortion or taxes or communism or which kinds of music are better than which other kinds of music. I think people feel threatened and offended, and instead of interpreting that as a statement about their own feelings (“I feel threatened by this idea”), they interpret it as a failure in their interlocutor (“This person is doing inappropriately pushy evangelizing”).
But one other point I want to make here: it seems relevant that Baha’i is false. Or, sorry, you can’t just call someone else’s religion false, but we’re not expecting it to be true in some relevant sense of the word “true”. Suppose Baha’i had a bit more content than we usually bargain for in a religion - suppose its most important tenet was “in a bunker somewhere beneath the western suburbs of Chicago, there is a mad scientist working on a superplague which he will release on September 1, he must be found and stopped before that date”. And suppose you have lots of strong evidence this is true - for example, you infiltrated the bunker, met the scientist, and escaped. Now you’re going around alerting the authorities - or perhaps it’s too late for that, and you’re trying to convince regular citizens to flee. Is this evangelism? Are you wrong to do it?
But what if you believe that God is real, really real, and people might go to Hell if they don’t believe in Him? Isn’t that news as important to spread as the news of a mad scientist with a superplague?
Or what if you are the first person to consider abolitionism? Is the right course to just keep quiet about it, and hope maybe someone asks you “Hey, what are your thoughts on this thing where we all have slaves?” Or should you get aggressive about it? Sure, that will make some people angry, but better everyone knows about you (and half of people hate you) than nobody knows about you at all (and so you lose by default).
Of course, this is all overly simplistic: the trick is evangelizing without making people hate you. I’ve worked on this skill for many years, and the best solution I’ve come up with is talking about a bunch of things, so nobody feels too lectured about any particular issue.
Speaking of which, can I convince you to use prediction markets?