I.

I am not defending technocracy.

Nobody ever defends technocracy. It’s like “elitism” or “statism”. There is no Statist Party. Nobody holds rallies demanding more statism. There is no Citizens for Statism Facebook page with thousands of likes and followers. Yet for some reason libertarians don’t win every single national election. Strange, isn’t it?

Maybe it’s one of those Russell conjugations - “I am firm, you are obstinate”. I support rule of law, you’re a statist. I want checks and balances on mob rule, you’re an elitist. I like evidence-based policy, you’re a technocrat.

I am not defending technocracy. But I do like evidence-based policy. So I read with interest Glen Weyl’s Why I Am Not A Technocrat. It starts with a short summary of Seeing Like A State. It ties this into modern “evidence-based policy” and “mechanism design”. It talks about how technocrats will always have their own insular culture and biases and paradigms, which prevent them from seeing the real world in its full complexity. Therefore, we should be careful about supposedly “objective” policies, and make sure they are always heavily informed by real people’s real knowledge. Then it draws on vague rumors of the “rationalist community” and a shadowy figure named “Eliezer Yudkowsky” to create a completely fictional reimagination of us as a group of benighted people who don’t understand any of these things, and just go around saying “hurr durr top-down systems are great, no way there could possibly be anything our models don’t capture.”

I like Seeing Like A State as much as anyone else (though see the caveats in Part VII of my review for some criticisms). But it worries me that everyone analyzes the exact same three examples of the failures of top-down planning: Soviet collective farms, Brasilia, and Robert Moses. I’d like to propose some other case studies:

1. Mandatory vaccinations: Technocrats used complicated mathematical models to determine that mass vaccination would create a “herd immunity” to disease. Certain that their models were “objectively” correct and so could not possibly be flawed, these elites decided to force vaccines on a hostile population. Despite popular protest (did you know that in 1800s England, anti-smallpox-vaccine rallies attracted tens of thousands of demonstrators?), these technocrats continued to want to “arrogantly remake the world in their image,” and pushed ahead with their plan, ignoring normal citizens’ warnings that their policies might have unintended consequences, like causing autism. (I sketch the core of those “complicated mathematical models” just after this list; it’s about three lines of arithmetic.)

2. School desegregation: Nine unelected experts with Harvard and Yale degrees, using a bunch of Latin terms like certiorari and de facto that ordinary people could not understand let alone criticize, decided to completely upend the traditional education system of thousands of small communities to make it better conform to some rules written in a two-hundred-year-old document. The communities themselves opposed it strongly enough to offer violent resistance, but the technocrats steamrolled over all objections and sent in the National Guard to enforce their orders.

3. The interstate highway system: 1950s army bureaucrats with a Prussia fetish decided America needed its own equivalent of the Reichsautobahn. The federal government came up with a Robert-Moses-like plan to spend $114 billion over several decades to build a rectangular grid of numbered giant roads all up and down the country, literally paving over whatever was there before, all according to pre-agreed federal standards. The public had so little say in the process that they started hundreds of freeway revolts trying to organize to prevent freeways from being built through their cities; the government crushed these when it could, and relocated the freeways to less politically influential areas when it couldn’t.

4. Climate change: In the second half of the 20th century, scientists determined that carbon dioxide emissions were raising global temperatures, with potentially catastrophic consequences. Climatologists created complicated formal models to determine how quickly global temperatures might rise, and economists designed clever first-principles mechanisms that could reduce emissions, like cap-and-trade systems and carbon taxes. But these people were members of the elite toying with equations that could not possibly include all the relevant factors, and who were vulnerable to their elite biases. So the United States decided to leave the decision up to democratic mechanisms, which allowed people to contribute “outside-the-system” insights like “Actually global warming is fake and it’s all a Chinese plot”.

5. Coronavirus lockdowns: The government appointed a set of supposedly infallible scientist-priests to determine when people were or weren’t allowed to engage in normal economic activity. The scientist-priests, who knew nothing about the complex set of factors that make one person decide to go to a rock festival and another to a bar, decided that vast swathes of economic activity they didn’t understand must stop. The ordinary people affected tried to engage in the usual mechanisms of democracy, like complaining, holding protests, and plotting to kidnap their governors - but the scientist-priests, certain that their analyses were “objective” and “fact-based”, thought ordinary people couldn’t possibly be smart enough to challenge them, and so refused to budge.
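As promised in the first case study: here is roughly the arithmetic at the core of herd immunity models. This is a minimal sketch with illustrative R0 values - real epidemiological models add age structure, contact networks, imperfect vaccines, and much else - but the basic threshold really is this simple:

```python
# Herd immunity threshold: the fraction of people who must be immune so
# that each infection causes, on average, less than one new infection.
# Illustrative only; real models are far richer than this.

def herd_immunity_threshold(r0: float) -> float:
    return 1 - 1 / r0

# R0 = basic reproduction number: how many people one case infects in a
# fully susceptible population. The values below are illustrative.
for r0 in [2, 5, 15]:
    print(f"R0 = {r0:>2}: need {herd_immunity_threshold(r0):.0%} of people immune")
# R0 =  2: need 50% of people immune
# R0 =  5: need 80% of people immune
# R0 = 15: need 93% of people immune
```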

Nobody uses the word “technocrat” except when they’re criticizing something. So “technocracy” accretes this entire language around it - unintended consequences, the perils of supposed “objectivity”, the biases inherent in elite paradigms. And then when you describe something using this language, it’s like “Oh, of course that’s going to fail - everything like that has always failed before!”

But if you accept that “technocracy” describes things other than Soviet farming, Brasilia, and Robert Moses, the trick stops working. You notice a lot of things you could describe using the same vocabulary were good decisions that went well. Then you have to ask yourself: is Seeing Like A State the definitive proof that technocratic schemes never work? Or is it a compendium of rare man-bites-dog style cases, interesting precisely because of how unusual they are?

I want to make it really clear that I’m not saying that technocracy is good and democracy is bad. I’m saying that this is actually a hard problem. It’s not a morality play, where you tell ghost stories about scary High Modernists, point vaguely in the direction of Brasilia, say some platitudes about how no system can ever be truly unbiased, and then your work is done. There are actually a bunch of complicated reasons why formal expertise might be more useful in some situations, and local knowledge might be more useful in others.

II.

Weyl starts by defining technocracy:

By technocracy, I mean the view that most of governance and policy should be left to some type of “experts”, distinguished by meritocratically-evaluated training in formal methods used to “optimize” social outcomes…perhaps the most prominent version, especially in democratic countries, is a belief in a technocracy based on a mixture of analytic philosophy, economic theory, computational power and high-volume statistical analysis, often using experimentation. This form of technocracy is a widely held view among much of the academic and high technology elites, among the most powerful groups in the world today. I focus on this tendency as I assume it will be the form of technocracy most familiar and attractive to my readers

Did you notice none of Weyl’s examples of technocracy fit this definition at all? Robert Moses had zero formal training in urban planning or anything related to city-building. The Soviet leadership wasn’t “meritocratically chosen”. And Oscar Niemeyer didn’t construct a High Modernist planned village and a control village, test which one performed better on various metrics, and scale the winner up into Brasilia.

(I’m not saying it would necessarily have gone well if he did - maybe the principles don’t scale, or maybe he would have chosen the wrong metrics, or maybe any of a thousand other things could have gone wrong. But it would have been better than what he actually tried, which was to take some weird aesthetic choices by Le Corbusier and follow them off a cliff. Weyl takes exactly the thing Niemeyer didn’t do, then uses the failure of his project to argue that doing it is bad!)

I think Weyl’s idea of “technocracy” is incoherent because he’s trying to combine several different axes into one word. Some of these might be:

1. Top-down intervention vs. bottom-up evolution. Did a system evolve on its own in an emergent way? Or is some planner coming up with the shape beforehand and ordering it to be enacted? To some degree bottom-up evolution makes the other axes irrelevant - “technocracy” and “democracy” are both forms of government, and if the government’s not intervening then there’s no question about who decides what form the intervention takes.

2. Mechanism vs. judgment. Is the system determined by some sort of algorithmic mechanism, like “We’ll enact whichever plan ends up with the highest price in our prediction market” / “Our school will admit whoever scores highest on the admission exam”? Or does it rely on human judgment, ie “We’ll enact whichever plan our assembly reaches consensus around” / “Our school will admit whoever holistically seems to match our values the best”?

3. Autocracy vs. democracy. This is obviously a spectrum, since even the most totalitarian dictator has a few advisors, and even the most democratic country still has presidents and prime ministers. But it seems fair to say Robert Moses was less democratic than some more touchy-feely city planner, so sure.

4. Expert opinion vs. popular opinion. Again, obviously a spectrum. There are subquestions here where nobody disagrees - shouldn’t there be some doctors and epidemiologists at the FDA and WHO? But I think it’s fair to say that there’s a distinction between a country where doctors and epidemiologists can unilaterally ban cigarettes vs. one where that decision is made at a more democratic level.

5. Victims ignored vs. victims consulted. This is slightly different from the expert vs. popular axis, in that a plan to regulate farmers might be popular among the masses but not incorporate advice or opinions from farmers themselves. Weyl makes a big deal about whether planners seek democratic feedback, but I am suspicious of how much this really varies. Suppose you’re the FDA trying to regulate cigarettes, and your doctors and bureaucrats have come up with a plan. What do you do now to “seek democratic feedback”? Put a suggestion box outside the FDA office? Invite randomly selected smokers to FDA headquarters to tell you what they think? Just assume that, since elected officials appointed the FDA director, you’re okay?

These axes prove their worth when we apply them to Weyl’s vs. Scott’s conceptions of “legibility”. The two use the word in exactly opposite ways; Weyl is interested in whether technocrats’ plans are legible to the populace; Scott is interested in whether the populace’s activities are legible to technocrats. Weyl claims that technocrats’ plans are hard for the masses to understand. Scott thinks exactly the opposite: when Oscar Niemeyer says “let’s build giant apartment buildings separated by a rectangular grid of roads”, he can easily list the advantages - short commutes, speedy traffic, impossible to get lost. It’s only the disadvantages - “it wouldn’t feel like home”, “I kind of like being in crowds sometimes” - that are hard to express. The advantages of collective farming are obvious: specialize in one crop, gains from working together instead of competing with each other, use the most advanced industrial farming techniques. The disadvantages are so arcane that you would have to be a farmer yourself with complicated agricultural craft to understand them. The technocrats can always point to X% GDP increases, Y extra jobs, and the like; the masses are stuck saying “It doesn’t feel right”.

I think what’s going on here is that Scott is focusing almost exclusively on the bottom-up vs. top-down axis; he thinks top-down plans are legible and bottom-up plans aren’t, and he calls the former “technocracy”. Weyl doesn’t care at all about this axis; he seems to be assuming top-down intervention will happen, and talking about what kind of top-down intervention (in terms of the other four axes) we’re going to get. As a result, the two of them talk past each other in a bizarre way.

In another case, Weyl critiques neoreaction while accidentally parroting its talking points; this foundational neoreactionary essay is spookily similar to Weyl’s own. This is because both Weyl and the neoreactionaries agree profoundly on taking the judgment side of the judgment vs. mechanism axis, but disagree on the autocracy vs. democracy one.

I find Axes 1, 3, 4, and 5 kind of boring once we take the time to decompose them. Everybody’s already argued the merits of government intervention vs. libertarianism, of populism vs. elitism, etc. Weyl seems to have a special interest in Axis 2 - mechanism vs. human judgment - and I think this is the most interesting potential point of disagreement.

III.

From Weyl:

In short, we are very far from discovering formalisms capable of capturing and quantifying most of the critical inputs to policy and systems design for a decent society. So much of what we still need lives in e.g. the low-income housing developments, the lived experiences of workers facing powerful corporations, the NGOs on the ground in Myanmar, and the community educational justice groups. To the extent that technocracy is a practice of insulating policy makers and system designers from the need to justify themselves in the language of, clearly explain their designs to and maintain open lines of communication from these highly informative channels, it leads to large-scale failures, corruption, crises and justified political backlash and outrage.

Why would anyone ever want mechanism? Why would we want to use formalisms? Human decision-making is so versatile and so good at taking account of outside-the-system problems that limiting ourselves to mechanical models would pointlessly cripple us, right?

I’m a fan of doing things formally. My answer to the above challenge is: mechanism is constraining on purpose. It’s constraining in the same sense that tying yourself to the mast so that the Sirens don’t lure you to a watery doom is constraining. Mathematical formalism is a trick for securing a system against bias and corruption. Let me give four examples:

1. Mechanical district creation: Mathematicians have developed various ways of automatically creating Congressional districts, usually something like “tile the state with compact polygons”. Maybe this is inferior to having wise people who truly understand the state and its complex needs draw districts that group naturally-related areas together and make sure everyone has an equal say. But somehow whenever we ask our wise-people-who-truly-understand-the-state to do this, they always come up with weird pipe-cleaner shapes that vote exactly 51% Republican. I admit there are many ways to solve this besides tiling the state with compact polygons. But if we can’t make any of the other ones work, tiling the state with compact polygons would beat how we do things now. (One standard compactness score is sketched just after this list.)

2. Aptitude-test-based admissions to colleges: This is how most countries other than the US do things. But 1920s deans realized this let in too many Jews, so they changed it to a holistic admissions process where wise representatives of our cultural values holistically scanned the good and bad aspects of every applicant. In the past, this system was a fig leaf for excluding Jews; today, it’s a fig leaf for excluding Asians, and for letting people whose parents donate lots of money get in through the back door. This isn’t just true for colleges - we know that giving everyone IQ tests and letting the top scorers into gifted programs ends up with better representation of gifted minorities than letting teachers use their judgment. One of the most infuriating parts of Weyl’s essay is where he talks about how technocracy is bad because it can incorporate subtly racist assumptions into its equations - as if asking random people to make subjective decisions is safer from that failure mode!

3. Housing: I live in the San Francisco Bay Area, where digging up Robert Moses’ corpse and appointing it Perpetual Planning Czar would be way better than what we have now. It turns out when you carefully seek democratic feedback on planning decisions, the feedback is usually “build absolutely nothing anywhere near anything”, nothing ever gets built, and your city ends up in a terrible housing crisis with lots of people being broke, homeless, and miserable. One of the most exciting plans to solve the NIMBY crisis is SB50, a bill that enumerates mechanical restrictions on how all zoning decisions have to go, and deliberately blocks affected neighbors from having any say in the process.

4. Democracy: For all the supposed opposition of technocracy and democracy, it’s worth noting that democratic elections are an example of mechanism in action. Instead of having to come to a consensus on difficult questions, we just compare two numbers - the number of people who vote for the decision, and the number who vote against it - and whichever number is bigger wins. Can you imagine if instead we had to actually figure out what was good? Or if instead of using numbers, we tried to judge the subjective quality of a coalition - this side has more PhDs, but that one has more army veterans? It would end in civil war or dictatorship within a week. It’s only the stark quantitative nature of elections that makes them hard to bias or hack.
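Back to example 1 for a moment, because “compact” sounds vague until you see how mechanically it can be defined. Here is a minimal sketch using the Polsby-Popper score, one standard compactness measure (4πA/P²: 1.0 for a perfect circle, lower for stringier shapes). Real redistricting algorithms do far more than this; the point is just that the criterion is a short, inspectable formula rather than anyone’s judgment:

```python
# Polsby-Popper compactness: 4 * pi * area / perimeter^2.
# A circle scores 1.0; pipe-cleaner gerrymanders score near 0.
import math

def polsby_popper(vertices: list[tuple[float, float]]) -> float:
    """Compactness of a simple polygon given as (x, y) vertices in order."""
    area, perimeter = 0.0, 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1                # shoelace formula
        perimeter += math.hypot(x2 - x1, y2 - y1)
    area = abs(area) / 2
    return 4 * math.pi * area / perimeter ** 2

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
pipe_cleaner = [(0, 0), (100, 0), (100, 1), (0, 1)]
print(f"square: {polsby_popper(square):.3f}")              # 0.785
print(f"pipe cleaner: {polsby_popper(pipe_cleaner):.3f}")  # 0.031
```

A districting mechanism can then be as simple as “only accept maps whose districts all score above some threshold” - and everyone can check the threshold.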

In each case, we don’t trust any individual human to make an unbiased decision. So we design some mechanism that’s as unbiased as possible, and give it to lots of people so they can check that it’s unbiased (in a way that you can never check whether someone’s intuition is unbiased). Then we make decisions via the mechanism.

In a perfect world, we wouldn’t have to do this. In the real world, well - have you heard the story of statistical prediction rules in medicine? I̶t̶’̶s̶ ̶n̶o̶t̶ ̶a̶ ̶s̶t̶o̶r̶y̶ ̶t̶h̶e̶ ̶J̶e̶d̶i̶ ̶w̶o̶u̶l̶d̶ ̶t̶e̶l̶l̶ ̶y̶o̶u̶.̶ Some people analyzed a bunch of data, came up with an algorithm for diagnosing psychosis, and told doctors they should use the algorithm instead of their own judgment. Everyone who hears this story expects the moral to be that no amount of “smartest guy in the room” data-crunching can match the holistic experience of trained doctors with years of domain experience - but in fact the algorithms won hands-down. The interesting part (search “Goldberg Rule” in that link) is what happens when you give doctors an algorithm and tell them to use it to supplement their judgment. This doctor-algorithm team usually still does worse than the algorithm alone. And if you explain this to the doctor - say “I know you think you’re going to outperform the algorithm, but actually doctor-algorithm teams usually lose to the algorithm alone, so you’re probably better off just sticking with the algorithm unless you’re really sure you have some special knowledge” - the algorithm still wins. In this case, supplementing the technocratic method with human judgment just plain worsens the technocratic method. I don’t know if this carries over to more society-relevant decisions, but it’s the sort of thing you have to consider.
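For concreteness, here is roughly what such a statistical prediction rule looks like. The scales below follow the commonly cited Goldberg index for distinguishing psychosis from neurosis on the MMPI, but treat the exact scales and the cutoff as illustrative rather than authoritative:

```python
# Roughly the shape of a statistical prediction rule. The scale names and
# the cutoff of 45 follow the commonly cited Goldberg index; treat the
# specifics as illustrative. The point: the whole "algorithm" that beat
# trained doctors is one line of arithmetic.

def goldberg_diagnosis(L: float, Pa: float, Sc: float, Hy: float, Pt: float) -> str:
    """Five MMPI scale scores in, a diagnosis out."""
    index = L + Pa + Sc - Hy - Pt
    return "psychotic" if index >= 45 else "neurotic"

print(goldberg_diagnosis(L=50, Pa=65, Sc=70, Hy=60, Pt=55))  # index 70 -> psychotic
```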

The people who devise mechanisms can sometimes be biased. But they can’t apply their biases very accurately. The decision to make Congressional districts compact hexagons instead of compact pentagons might very subtly favor Republicans in some way. But it would favor Republicans less than getting a bunch of Republicans together to draw every district in exactly the way that favors Republicans the most; there’s a limit to how biased a short hexagon-drawing algorithm can get. Likewise, if you wanted to bias a college admissions algorithm against Asians, you could perhaps weight math test scores a bit lower than English test scores. But that would bias it less than just having some guy who can look at Asian applicants and say “Nope, not him”. Also, mechanisms are transparent and can be inspected. The entire country could turn its scrutiny on the decision to weight math tests less in the algorithm - whereas if you’re just using “human judgment”, each particular example of the admissions officer rejecting an Asian will pass unknown to anyone but the candidate involved.
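To make the “limit to how biased a short algorithm can get” point concrete, here is a toy admissions formula I made up. If someone downweights math, the weight sits in plain sight where the entire country can argue about it; “nope, not him” leaves nothing to inspect:

```python
# A toy, made-up admissions score. The only place bias can hide is in the
# weights -- and the weights are right there for anyone to read and contest.

def admissions_score(math_sat: int, english_sat: int,
                     math_weight: float = 1.0, english_weight: float = 1.0) -> float:
    return math_weight * math_sat + english_weight * english_sat

print(admissions_score(780, 700))                   # 1480.0 with neutral weights
print(admissions_score(780, 700, math_weight=0.9))  # 1402.0; the "bias" is a visible 0.9
```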

This isn’t to say mechanism is always good and judgment is always bad! We can predict cases where one might be better than the other - and as with the doctor studies and the gifted program studies, we can test when one is better than the other. I’m just trying to say it’s an interesting question that you can’t resolve just by throwing a copy of Seeing Like A State at someone.

IV.

I want to briefly respond to Weyl’s critique of rationality and effective altruism:

The effective altruism movement, which largely grew directly out of the rationalist movement, seeks to maximize the efficacy with which charitable donations are directed using standard rationalist methods. It is a tight-knit community that strongly privileges rationalist approaches over all other forms of knowledge-making (such as from the humanities, continental philosophy, or humanistic social sciences) and tends to dismiss input not formulated in rationalist terms. The community also has a strong and explicitly stated view that its activities uniquely contribute to the achievement of “the good”: of their top five recommendations of most productive careers by a leading community organization, two suggest being a researcher or support staff within the movement, and two others recommend working on the AI alignment problem (see the next point). Until recently, much of the analysis and funding emerging from the community has pointed towards a focus on extremely unlikely but potentially catastrophic risks, such as alien, asteroid or biological catastrophes.

Weyl wrote this essay a few months before COVID, so his pooh-poohing of the idea that there might be a biological catastrophe is an unfortunate anachronism. But I think it’s important to note that we got this right (and he got it wrong) precisely because we “privilege rationalist approaches over all other forms of knowledge-making”. People like Toby Ord tried to calculate the risk of every kind of disaster and how bad it would be - and at the same time Weyl was making fun of us for caring about biological catastrophes, Ord was writing about how the numbers suggested zoonotic diseases from bats could cause catastrophic pandemics. This kind of work ultimately led to EA flagship group Open Philanthropy Project spending almost $50 million on its Biosecurity And Pandemic Preparedness Program between 2014 and 2019; if other people had taken a few minutes to read our arguments instead of chiding us for how naive it is to prioritize things based on rational methods, maybe the world would have been more prepared.

Trying to be charitable to Weyl, I think he’s thinking of the EA movement as trying to perfectly quantify exactly how many lives can be saved per dollar, then following that number off a cliff. I can see how a ten-minute scan of the movement could get that impression, but I think even a thirty-minute one would correct the misimpression. EA does the quantification as a guide to other forms of reasoning. There is no way to perfectly calculate the devastation of a potential pandemic that hasn’t happened yet. But once you make even a weak effort, you notice that all the numbers are really really big. And once you make an effort to quantify the importance of the cause du jour supported by the humanities and continental philosophy, sometimes you notice that under every set of reasonable assumptions it’s smaller than the other number. And of course you supplement this with lots of trying to understand the complicated unquantifiable issues involved - under no circumstance do you skip that step - but you’re just trying to do an analysis that has some contact with some reality-based number and isn’t entirely dependent on the popular mood about something. I think this successfully achieves the difficult balance between model-laden technicalities and messy real life, and whatever form of reasoning Weyl was using to let him dismiss biological risks without doing any modeling failed to achieve that balance.
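To show what I mean by “even a weak effort”, here is a deliberately crude expected-value sketch. Every number in it is made up for illustration - which is exactly the point, because even made-up-but-reasonable numbers come out really really big:

```python
# A deliberately crude expected-value sketch. All numbers are made up
# for illustration; nothing here is a real estimate.

annual_pandemic_probability = 0.005  # assume a 1-in-200 chance per year
deaths_if_it_happens = 10_000_000    # assume ten million deaths

expected_deaths_per_year = annual_pandemic_probability * deaths_if_it_happens
print(expected_deaths_per_year)      # 50,000 expected deaths per year

# Even if both inputs are off by an order of magnitude, the result still
# dwarfs the numbers attached to most causes du jour -- which is why you
# do the arithmetic before trusting the popular mood.
```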

(Maybe this is a bravery debate - I believe almost everyone is using fuzzy human judgment and guidance from the humanities to decide how to best improve the world, and injecting a little bit of science is a powerful innovation. If Weyl believes the opposite, then it would make sense for him to recommend the injection of a little more human judgment - although then he would have to explain how so many amazing-by-rational-standards charities were underfunded before EA came along, and how many useless-by-rational-standards ones were rolling in money.)

Moving on:

Yet, interestingly, the conclusions of the analysis emerging from the community increasingly undermine these foci and the approach of the community more broadly. In particular, recent research in the community suggests that the greatest and most probable risks to be avoided are anthropogenic (climate change, nuclear war, the rise of a totalitarian regime, other environmental catastrophes etc.). Leaders in the community have in turn suggested that the most effective ways to avoid these are likely finding solutions to problems of political organization and legitimacy of social systems to help reduce the likelihood of conflict or inability to cooperate in the provision of critical global public goods.

Weyl’s link doesn’t support his claim. It’s an 80,000 Hours page saying that it’s important to protect the long-term future, something EA has believed since the beginning. EA started with a sort of merger and cross-pollination between people trying to save lives by eg curing malaria, and transhumanist/singularitarian groups working on protecting the long-term future; it held together because both groups agree that the other’s position can make sense given different sets of complicated assumptions about ethics and tractability. The page Weyl links highlights and explains the philosophy around that, but does not claim anyone had a sudden enlightenment where they realized they’d previously been ignoring the threat of totalitarianism and that was wrong. Everyone has always believed that stopping the rise of a totalitarian regime is one part of the important task of protecting the far-future, but we haven’t focused on it because it’s less tractable and less neglected than other parts. As far as I know we continue to think that and mostly not focus on it aside from occasionally looking for tractable interventions we’ve missed.

All of this is too bad, because if Weyl was right, that would be a huge point in favor of rational methods. Imagine pointing to a community that did something wrong for years, then learned that thing was wrong, then admitted it and changed - and imagine thinking of it as a strike against that community’s methods!

Worse, the extremely elitist, segregated, and condescending approach to philanthropy encouraged by the community has created widespread public backlash that has closely tracked a broader populist reaction to technologically-driven globalist elites and that increasingly seems to be one of the largest risk factors for precisely the sorts of catastrophes effective altruists increasingly see as the largest threats to their long-term viewpoint on universal welfare. In short, it increasingly seems like, after almost a decade of existence, a primary conclusion of the movement’s analysis may be that the movement itself is a significant part of the problem it is identifying. Earlier, more open dialog with a broader range of approaches and social classes might have illuminated this more quickly and avoided the associated waste of talent and resources, two things the movement greatly prizes.

Actually, most of the complaints I’ve heard have been from people like Weyl (Princeton PhD, Harvard postdoc, Principal Researcher at Microsoft New England). The “widespread public backlash” link goes to a book by Anand Giridharadas (Harvard PhD student, former McKinsey consultant, New York Times columnist). The actual normal people I talk to are broadly supportive. Some of my blog commenters are populist Trump supporters, and although they sometimes tell me I’m crazy for donating my money the way I do, they accept I have the right to spend it how I want and don’t bother me much about it.

I worry that Weyl is kind of cargo-culting a response to populism, a sort of “The masses hate science and reason and improving things, right? So maybe if we never do any of that stuff then they’ll let us live”. This hasn’t been my experience of the masses. My experience has been they hate elites trying to lecture them on what to do, especially if justified in pseudoscientific mumbo-jumbo that they can see through easily.

So I think it’s fine for people to donate their own money according to their own understanding of what the best causes are. Bill Gates isn’t officially affiliated with the EA movement, but he’s a broadly-aligned role model who donates to the same causes EAs do using the same data-based methods. Last time anyone checked, Gates had an approval rating of 76%, the second-highest of any figure asked about and literally higher than God. I don’t think Gates should be considered a backlash-inducing failure.

And would Weyl’s suggestions really help prevent populist backlashes? He wishes we would abandon our overly-rational ways in favor of “humanities, Continental philosophy, and the humanistic social sciences” - isn’t that usually code for stuff like queer theory, postcolonial theory, and postmodernism? Are working-class Trump supporters really banging on their keyboards when they read about effective altruism, shouting “YOU NEED TO STOP TRYING TO BE OBJECTIVE AND FACT-BASED, AND BE MORE OPEN TO INSIGHTS FROM QUEER THEORY AND POSTMODERNISM”?

This doesn’t necessarily mean those things are bad - populaces can backlash against all sorts of good things; maybe these fields have potentially valuable insights. But I don’t think it’s fair to demand other people optimize their philosophy to avoid completely hypothetical populist backlashes, while you’re saying the most tone-deaf populist-backlash-provoking stuff imaginable. I am a member of the populace and I am so backlashed about Weyl’s suggestions that I wrote this whole essay just to argue how wrong they were.

The section on AI is much worse than this, and I think it would be kinder to everyone involved to just pass over it entirely.

I continue to appreciate Seeing Like A State’s critique of…the thing it critiques, which is somewhat but not exactly similar to our word “technocracy”. I think it’s important to internalize that critique. But I think internalizing it is different from making it your One Hedgehog Tool that applies to everything everywhere. There are separate critiques of top-down intervention, mechanistic decision-making, autocracy, expert opinions, and lack of bottom-up feedback. All of those critiques are important - and all of them are matched by equally important reasons why sometimes you would want to use those things to some degree.

I think it’s important not to collapse everything into just “technocracy bad, details to be provided later”. You can’t just present Brasilia and use that as an argument against randomized controlled trials! You can’t just argue that forced collectivization of farms caused famines, therefore people shouldn’t voluntarily assess where to donate their charity money to best meet their own goals! Maybe I’m being too technocratic here, but at some point you need to break things down, look at this (social) scientifically, and try to figure out which parts of things are consistently bad and which parts sometimes seem to help.

I think the rationalist and EA communities have been working on the project of trying to develop the metis of balancing all of this correctly, and I continue to be optimistic about our progress on that front.

EDIT: See Glen Weyl’s response here.