I. Risks May Include AIDS, Smallpox, And Death

Dr. Rob Knight studies how skin bacteria jump from person to person. In one 2009 study, meant to simulate human contact, he used the same cotton swab on first one subject's mouth (or skin), then another's, to see how many bacteria traveled over. On the consent forms, he said risks were near zero - it was the equivalent of kissing another person's hand.

His IRB - i.e. the Institutional Review Board, the committee charged with keeping experiments ethical - disagreed. They worried the study would give patients AIDS. Dr. Knight tried to explain that you can't get AIDS from skin contact. The IRB refused to listen. Finally Dr. Knight found some kind of diversity coordinator person who offered to explain that claiming you can get AIDS from skin contact is offensive. The IRB backed down, and Dr. Knight completed his study successfully.

Just kidding! The IRB demanded that he give his patients consent forms warning that they could get smallpox. Dr. Knight tried to explain that smallpox has been extinct in the wild since the 1970s, with the only remaining samples held in US and Russian biosecurity labs. Here there was no diversity coordinator to swoop in and save him, although after months of delay and argument he did eventually get his study approved.

Most IRB experiences aren’t this bad, right? Mine was worse. When I worked in a psych ward, we used to use a short questionnaire to screen for bipolar disorder. I suspected the questionnaire didn’t work, and wanted to record how often the questionnaire’s opinion matched that of expert doctors. This didn’t require doing anything different - it just required keeping records of what we were already doing. “Of people who the questionnaire said had bipolar, 25%/50%/whatever later got full bipolar diagnoses” - that kind of thing. But because we were recording data, it qualified as a study; because it qualified as a study, we needed to go through the IRB. After about fifty hours of training, paperwork, and back and forth arguments - including one where the IRB demanded patients sign consent forms in pen (not pencil) but the psychiatric ward would only allow patients to have pencils (not pens) - what had originally been intended as a quick note-taking exercise had expanded into an additional part-time job for a team of ~4 doctors. We made a tiny bit of progress over a few months before the IRB decided to re-evaluate all projects including ours and told us to change twenty-seven things, including re-litigating the pen vs. pencil issue (they also told us that our project was unusually good; most got >27 demands). Our team of four doctors considered the hundreds of hours it would take to document compliance and agreed to give up. As far as I know that hospital is still using the same bipolar questionnaire. They still don’t know if it works.

Most IRB experiences can’t be that bad, right? Maybe not, but a lot of people have horror stories. A survey of how researchers feel about IRBs did include one person who said “I hope all those at OHRP [the bureaucracy in charge of IRBs] and the ethicists die of diseases that we could have made significant progress on if we had [the research materials IRBs are banning us from using]”.

Dr. Simon Whitney, author of From Oversight To Overkill, doesn't wish death upon IRBs. He's a former Stanford IRB member himself, with impeccable research-ethicist credentials - MD + JD, bioethics fellowship, served on the Stanford IRB for two years. He thought he was doing good work at Stanford; he did do good work. Still, his worldview gradually started to crack:

In 1999, I moved to Houston and joined the faculty at Baylor College of Medicine, where my new colleagues were scientists. I began going to medical conferences, where people in the hallways told stories about IRBs they considered arrogant that were abusing scientists who were powerless. As I listened, I knew the defenses the IRBs themselves would offer: Scientists cannot judge their own research objectively, and there is no better second opinion than a thoughtful committee of their peers. But these rationales began to feel flimsy as I gradually discovered how often IRB review hobbles low-risk research. I saw how IRBs inflate the hazards of research in bizarre ways, and how they insist on consent processes that appear designed to help the institution dodge liability or litigation. The committees’ admirable goals, in short, have become disconnected from their actual operations. A system that began as a noble defense of the vulnerable is now an ignoble defense of the powerful.

So Oversight is a mix of attacking and defending IRBs. It attacks them insofar as it admits they do a bad job; the stricter IRB system in place since the ‘90s probably only prevents a single-digit number of deaths per decade, but causes tens of thousands more by preventing life-saving studies. It defends them insofar as it argues this isn’t the fault of the board members themselves. They’re caught up in a network of lawyers, regulators, cynical Congressmen, sensationalist reporters, and hospital administrators gone out of control. Oversight is Whitney’s attempt to demystify this network, explain how we got here, and plan our escape.

II. How We Got Here

Before the 1950s, there were no formal research ethics. Doctors were encouraged to study topics of interest to them. The public went along, placated by the breakneck pace of medical advances and a sense that we were all in it together. Whitney focuses on James Shannon's discovery of new anti-malarials during World War II (as US troops were fighting over malarial regions of Southeast Asia). Shannon tested his theories on ambiguously-consenting subjects, including the mentally ill. But:

At a time when clerks and farm boys were being drafted and shipped to the Pacific, infecting the mentally ill with malaria was generally seen as asking no greater sacrifice of them than of everyone else. Nobody complained, major strides were made in the treatment of malaria, and Shannon received the Presidential Order of Merit.

Physicians of the time followed a sort of gentleman’s agreement not to mistreat patients, but the details were left to individual researchers. Some individual researchers had idiosyncratic perspectives:

Looking back on that era, hematologist David Nathan remembered that he applied a rough version of the Golden Rule to experiments: He would never do anything to a research subject that he would not do to himself. Once an experiment passed this threshold, however, his idea of informed consent was to say, “You are the patient. I am Doctor Nathan. Lie down.”

I believe Dr. Nathan when he said he wouldn’t do anything to patients he wouldn’t do to himself - he once accidentally gave himself hepatitis in the course of an experiment. Still, this is not the sort of rule-abidingness that builds complete confidence.

A few doctors failed to follow even the flimsiest veneer of ethics. The most famous example is the Tuskegee Syphilis Study1, but this happened towards the end of the relevant era. The debate at the time was more shaped by Dr. Chester Southam (who injected patients with cancer cells to see what would happen) and the Willowbrook Hepatitis Experiment, where researchers gave mentally defective children hepatitis on purpose2. Two voices rose to the top of the froth of outrage and ended up having outsized effects: Henry Beecher and James Shannon.

Henry Beecher was a prominent Harvard anaesthesiologist and public intellectual, known for exploits like discovering the placebo effect.3 Being well plugged into the research community, he was among the first to learn about studies like Southam's and Willowbrook, find them objectionable, and bring them to the public eye. Through public appearances and papers in prestigious journals, he dragged the issue in front of a sometimes-reluctant medical community. But he thought regulation would be devastating, and had no proposal other than "researchers should try to be good people", which everyone except him realized wasn't actionable.

Shannon was less brilliant, but unlike Beecher he was a practical and experienced bureaucrat. His own history of dubiously-consensual malaria research left him without illusions, but as he grew older he started feeling guilty (and also, more relevantly, became head of the National Institutes of Health). Having no time for Beecher's delusions of self-regulation, he ordered all federally funded research to submit itself to external audits by Clinical Review Committees, the ancestors of today's IRBs.

In the late 1960s and early 1970s, Beecher's activism, Shannon's CRCs, and the mounting level of Tuskegee-style scandals came together in a demand for the American Academy of Arts and Sciences to create some official ethics report. Most ethicists demurred, reluctant to dirty their hands with something as worldly as medicine; after some searching, the Academy finally tapped Hans Jonas, a philosopher of Gnosticism. In retrospect, of course bioethics derives from a religion that believes the material world is evil and death is the only escape. I'm barely even joking here:

In his most compelling passage, Jonas attacked the belief that we must pursue cures for the diseases that ravage us, that we cannot afford to forego continued medical advances. To the contrary, he wrote, we must accept what we cannot avoid, and that includes disease, suffering, and death. What society genuinely cannot afford is “a single miscarriage of justice, a single inequity in the dispensation of its laws, the violation of the rights of even the tiniest minority, because these undermine the moral basis on which society’s existence rests.” He concluded that “progress is an optional goal.”

What miscarriages of justice was Jonas worried about? He was uncertain that people could ever truly consent to studies; there was too much they didn’t understand, and you could never prove the consent wasn’t forced. Even studies with no possible risk were dangerous because they “risked” treating the patient as an object rather than a subject. As for double-blind placebo-controlled trials, they were based on deceiving patients, and he was unsure if anyone could ethically consent to one.

AAAS’ report balanced Jonas’ extreme approach with more moderate voices, producing something in between. There could be medical research, but only with meticulous consent processes intended to ensure subjects understood every risk, even the most outlandish. Rather than a straight weighing of risks vs. benefits, overseers should start with a presumption that risk was unacceptable, and weigh benefits only weakly. This framework might have evolved further, but in the uproar following Tuskegee, Congress set it in stone, never to be changed by mere mortals.

Still, Whitney thinks of this period (1974 - 1998) as a sort of golden age for IRBs. The basic structure they retain today took shape - about a dozen members, mostly eminent doctors, but one mandatory non-researcher member (often a member of the clergy). They might not know everything, but they would know things like whether smallpox still existed. They could be annoying sometimes, and overprotective. But mostly they were thoughtful people who understood the field, able and willing to route around the seed of obstructionism Jonas had planted in the heart of their institution.

This changed in 1998. A Johns Hopkins doctor tested a new asthma treatment. A patient got sick and died. Fingers were pointed. Congress got involved. Grandstanding Congressmen competed to look Tough On Scientific Misconduct by yelling at Gary Ellis, head of the Office For Protection From Research Risks. They made it clear that he had to get tougher or get fired.

In order to look tough, he shut down every study at Johns Hopkins, a measure so severe it was called "the institutional death penalty". Then he did the same thing (or imposed various lesser penalties) at a dozen or so other leading research centers, often for trivial infractions. Duke got the axe because its IRB hadn't properly documented whether a quorum of members was present at each meeting. Virginia Commonwealth University got the axe because, although it had asked patients for consent, it hadn't asked the patients' family members, and one family member complained that asking the patient for a family history was a violation of his privacy.

Each doomed institution had hundreds or even thousands of studies, all ruined:

One observer wrote, “Participants cannot receive treatments, enroll, or be recruited; results from time-sensitive studies cannot be reported; and data cannot be analyzed. Suspension means that there is no money to pay graduate students, travel to conferences, or purchase equipment. It means researchers may lose months, if not years, of work.”

Millions of dollars were lost. Chancellors were fired. The surviving institutions were traumatized. They resolved never again to do anything even slightly wrong, never to commit any offense that even the most hostile bureaucrat could find reason to fault them for. They didn't trust IRB members - the eminent doctors and clergymen doing this as a part-time job - to follow all of the regulations, sub-regulations, implications of regulations, and pieces of case law that suddenly seemed relevant. So they hired a new staff of administrators to wield the real power. These administrators had never done research themselves, had no particular interest in research, and belonged to a career track that had been created ex nihilo to make sure nobody got sued.

The increases in staff to accomplish this were substantial. The staff of the Northwestern IRB, for instance, grew between the late 1990s and 2007 from two people to forty-five. These fortified IRBs were in no doubt that their mission now extended beyond protecting research subjects. As Northwestern’s Caroline Bledsoe notes, “the IRB’s over-riding goal is clear: to avoid the enormous risk to the institution of being found in noncompliance by OHRP.”

The eminent doctors and clergymen - the actual board part of the Institutional Review Board - were reduced to rubber stamps. The age of the administrator had begun. These were the sorts of people who might not know how AIDS is transmitted or that smallpox is gone. Their job began and ended with forcing word-for-word compliance with increasingly byzantine regulations.

This, says Whitney, is about where we are today. There were some minor changes. Gary Ellis ironically got fired, a victim of his own unpopularity. His Office For Protection From Research Risks got subsumed into a new bureaucracy, the Office For Human Research Protections. In 2018, OHRP admitted they had gone too far and made welcome reforms - for example, certain psychology studies where people just fill out questionnaires are now exempt from many requirements. These are genuinely helpful - but on a broader cultural level, the post-Ellis atmosphere of paranoia and obstruction is still the order of the day.

III. Tales From The Administrative Age

Here are some of the stories that Whitney uses to illustrate why he’s unsatisfied with the current situation:

A. Pronovost’s Checklist Study

Maybe you’ve read Checklist Manifesto by Atul Gawande, which shows that a simple checklist with items like “wash your hands before the procedure” can reduce medical error and save lives.

Peter Pronovost of Johns Hopkins helped invent these checklists, but wanted to take them further. He proved at his own ICU that asking nurses to remind doctors to use the checklists (“Doc, I notice you didn’t wash your hands yet, do you want to try that before the procedure?”) further improved compliance - just in his one ICU, it saved about eight lives and $2 million per year. Scaled up to the entire country, it could save tens of thousands of people.

To prove that it could work in any situation, he teamed up with the Michigan Hospital Association, which included under-resourced Detroit hospitals. They agreed to ask their nurses to enforce the checklists. The Johns Hopkins IRB approved the study, noting that because no personal patient data was involved, it could avoid certain difficult rules related to privacy. Michigan started the study. Preliminary results were great; it seemed that tens to hundreds of lives were being saved per month. The New Yorker wrote a glowing article about the results.

OHRP read the article, investigated, and learned that the Johns Hopkins IRB had exempted the study from the privacy restrictions. These restrictions were hard to interpret, but OHRP decided to take a maximalist approach. It stepped in, shut down the study, and said it could not restart until the researchers got consent from every patient, doctor, and nurse involved, plus separate approval from each Michigan hospital's IRB. This was impossible; even if all the doctors and nurses unanimously consented, the patients were mostly unconscious, and the under-resourced Detroit hospitals didn't have IRBs. OHRP's answer would make Hans Jonas proud - that's not our problem, guess you have to cancel the study.

Luckily for Pronovost, Atul Gawande had recently published The Checklist Manifesto and become a beloved public intellectual. He agreed to take the case public and shop it around to journalists and politicians. OHRP woke up and found angry reporters outside its door. Whitney records its forced justifications for why the study might be harmful - maybe complying with the checklists would take so much time that doctors couldn't do more important things? Maybe the nurses' reminders would make doctors so angry at the nurses that medical communication would break down? Dr. Gawande and the reporters weren't impressed, and finally some politician forced OHRP to relent. The experiment resumed, and found that the nurse-enforced checklist saved about 1,500 lives over the course of the study. The setup was exported around the country and has since saved tens of thousands of people. Nobody knows how many people OHRP's six-month delay killed, and nobody ever did figure out any way the study could have violated privacy.

B. ISIS-2

Don’t be alarmed if you hear your doctor was part of ISIS 2; it’s just the International Study on Infarct Survival, second phase. This was the 1980s, the name was fine back then, that’s not why IRBs got involved.

"Infarct" refers to a heart attack (myocardial infarction). At the time of the study, some doctors had started using a streptokinase + aspirin combination to treat heart attacks; others didn't. Whitney points out that the doctors who gave the combination didn't need to jump through any hoops to give it, and the doctors who refused it didn't need to jump through any hoops to refuse it. But the doctors who wanted to study which group was right sure had to jump through a lot of hoops.

They ended up creating a sixteen-country study. In the British arm of the study, the UK regulators told doctors to ask patients for consent, and let them use their common sense about exactly what that meant. In the US arm, the Harvard IRB mandated a four-page consent form listing all possible risks (including, for example, the risk that the patient would be harmed by the aspirin tasting bad). Most of the consent form was incomprehensible medicalese. Patients could not participate unless they signed that they had read and understood the consent form - while in the middle of having a heart attack. Most declined in favor of getting faster treatment (which, remember, might or might not include the study drugs, depending on which doctor they happened to get).

The US recruited patients 100x slower (relative to population) than the UK, delaying the trial by six months. When it finally ended, the trial showed that aspirin + streptokinase almost halved heart attack deaths. The six-month delay had caused about 6,000 deaths.

Later research suggested that long consent forms are counterproductive. Lasagna and Epstein experimented with giving patients one of three consent forms of different lengths for a hypothetical procedure, then quizzing them on the details. Patients given a short consent form that listed only the major risks scored twice as high on a comprehension test as those given the longer forms; they were also less likely to miss cases where their medical histories made the study procedure dangerous (e.g. a person with a penicillin allergy in a study giving penicillin). Lasagna and Epstein's longest consent form was still shorter than the forms in real studies like ISIS-2.

It seems to be a common position that existing consent forms fail patients; at least, Whitney is able to find many lawyers, ethicists, and other authorities who say this. OHRP occasionally admits it in its own literature. And patients seem to believe it - in a survey of 144 research subjects, most described the consent form as "intended to protect the scientist and the institution from liability" rather than to inform the patient. Still, the forms do protect the scientist and institution from liability, so they stay.

My own consent form story: in my bipolar study, the IRB demanded I include the name of the study on the form. I didn't want to - I didn't want to bias patients by telling them what we were testing for. Next they wanted me to list all the risks. There were no risks; we would be giving the patient a questionnaire that we would have given them anyway. The IRB didn't care; no list of risks, no study. I can't remember if I actually submitted, or only considered submitting, that the risk was that they would get a paper cut on all the consent forms we gave them. In any case, another doctor on my team found a regulation saying that we could skip this part of the consent form for our zero-possible-harm study. The IRB accepted it, let us start the study, then changed its mind and demanded the full consent form along with its 26 other suggestions.

C. PETAL

If your lungs can’t breathe well, doctors can put you on a ventilator, which forces air in and out. It’s hard to get ventilators working right. Sometimes they push in too much air and injure your lungs. Sometimes they push in too little air and you suffocate. There are big fights about what settings to run ventilators on for which patients. For a while, doctors fought over whether to set ventilators on high vs. low, with many experts in each camp. Researchers formed a consortium called PETAL to study this, ran a big trial, and found that low was better. Lots of doctors switched from high to low, and lots of patients who otherwise would have gotten lung injuries lived to breathe another day.

Flush with success, PETAL started a new study, this time on how to give fluids to ventilator patients. Once again, doctors were divided - some wanted to give more fluids, others less. By mid-2002, PETAL had recruited 400 of the necessary 1000 patients.

Then OHRP demanded they stop. Two doctors had argued that PETAL's previous ventilator study was unethical, because it had only tested high vs. low ventilator settings, not middle ones. OHRP wanted PETAL to stop all of its current work while the complaint was investigated. A panel of top scientists was convened; the panel unanimously said that the past research was great and the current research was also great, using terms like "landmark, world-class investigations", and recommended the study be allowed to restart.

OHRP refused. Its director, ethicist Jerry Menikoff, had decided that maybe it was unethical to do RCTs on ventilator settings at all. He asked whether they might be able to give every patient the right setting while still doing the study4. The study team tried to explain to him that they didn't know which was the right setting - that was why they had to do the study. He wouldn't budge.

Finally, after a year, another panel of experts ruled in favor of the investigators and gave them permission to restart the study right away. They did, but the delay was responsible for thousands of deaths, and it had a knock-on effect on ventilator research that left us less prepared for the surge in ventilator demand around COVID.

IV. Hard Truths

Doctors are told to weigh the benefits vs. costs of every treatment. So what are the benefits and costs of IRBs?

Whitney can find five people who unexpectedly died from research in the past twenty-five years. These are the sorts of cases IRBs are set up to prevent - people injected with toxic drugs, surgeries gone horribly wrong, and the like. No doubt there are more whose stories we don't know. But as for obvious, newsworthy cases, there are ~2 per decade. Were there more before Ellis' 1998 freakout and the subsequent tightening of IRB rules? Whitney can't really find evidence that there were.

What are the costs? The direct cost of running the nation's IRB network is about $100 million per year. The added cost to studies from IRB-related delays and compliance is about $1.5 billion per year. So the monetary costs are on the order of $1.6 billion per year.

What about non-monetary costs? Nobody has fully quantified them. Some Australian oncologists did an analysis and found that 60 people per year died from IRB-related delays in Australian cancer trials. About 6,000 people died from delays in ISIS-2, and that was just one study. Tens of thousands were probably killed by IRBs blocking human challenge trials for COVID vaccines. It's a low-confidence estimate, but somewhere between 10,000 and 100,000 Americans probably die each year from IRB-related research delays.

So the cost-benefit calculation looks like this: save a tiny handful of people per year, while killing 10,000 to 100,000 more, for a price tag of $1.6 billion per year. If this were a medication, I would not prescribe it.
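For concreteness, here's a minimal back-of-envelope sketch of that tally, using only the figures quoted in this review; the variable names, the per-year averaging of the "single-digit per decade" estimate, and the framing are mine, not Whitney's:

```python
# Rough tally of the figures quoted in this review (illustrative framing only).
deaths_prevented_per_year = 9 / 10          # upper bound on "a single-digit number of deaths per decade"
direct_irb_cost_per_year = 100e6            # ~$100 million/year to run the nation's IRB network
delay_compliance_cost_per_year = 1.5e9      # ~$1.5 billion/year added to studies by delays and compliance
monetary_cost_per_year = direct_irb_cost_per_year + delay_compliance_cost_per_year

delay_deaths_per_year = (10_000, 100_000)   # low-confidence annual range for delay-related deaths

print(f"Deaths prevented:   <{deaths_prevented_per_year:.0f} per year")
print(f"Monetary cost:      ~${monetary_cost_per_year / 1e9:.1f} billion per year")
print(f"Deaths from delays: {delay_deaths_per_year[0]:,}-{delay_deaths_per_year[1]:,} per year")
```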

Whitney doesn’t want a revolution. He just wants to go back to the pre-1998 system, before Gary Ellis crushed Johns Hopkins, doctors were replaced with administrators, and pragmatic research ethics were replaced by liability avoidance. Specifically:

  • Allow zero-risk research (for example, testing urine samples a patient has already provided) with verbal or minimal written consent.

  • Allow consent forms to skip trivial issues no one cares about (“aspirin might taste bad”) and optimize them for patient understanding instead of liability avoidance.

  • Let each institution run its IRB with limited federal interference. Big institutions doing dangerous studies can enforce more regulations; small institutions doing simpler ones can be more permissive. The government only has to step in when some institution seems to be failing really badly.

  • Allow researchers to appeal IRB decisions to higher authorities like deans or chancellors.5

These make sense. I’m just worried they’re impossible.

IRBs aren’t like this in a vacuum. Increasingly many areas of modern American life are like this. The San Francisco Chronicle recently reported it takes 87 permits, two to three years, and $500,000 to get permission to build houses in SF; developers have to face their own “IRB” of NIMBYs, concerned with risks of their own. Teachers complain that instead of helping students, they’re forced to conform to more and more weird regulations, paperwork, and federal mandates. Infrastructure fails to materialize, unable to escape Environmental Review Hell. Ezra Klein calls this “vetocracy”, rule by safety-focused bureaucrats whose mandate is to stop anything that might cause harm, with no consideration of the harm of stopping too many things. It’s worst in medicine, but everywhere else is catching up.

This makes me worry that we can't blame the situation on one bad decision by a 1998 bureaucrat. I don't know exactly who to blame things on, but my working hypothesis is some kind of lawyer-administrator-journalist-academic-regulator axis. Lawyers sue institutions every time they harm someone (but not when they fail to benefit someone). The institutions hire administrators to create policies that will help avoid lawsuits, and the administrators codify maximally strict rules meant to protect the institution in the worst-case scenario. Journalists ("if it bleeds, it leads") and academics (who gain clout from discovering and calling out new types of injustice), operating in conjunction with these people, pull the culture towards celebrating harm-avoidance as the greatest good, and cast suspicion on anyone who tries to add benefit-getting to the calculation. Finally, there are calls for regulators to step in - always on the side of ratcheting up severity.

This is how things went in 1998 too. One researcher made a mistake and killed a patient. This made a sensational news story (unlike the tens of thousands of people who die each year from unnecessarily delayed research), so every major newspaper covered it. Academic ethicists wrote lots of papers about how no amount of supposed benefit could ever justify a single research-related death. The populace demanded action, Congress demanded the regulator regulate harder, and Ellis ratcheted up the IRB level. Hospitals hired administrators to comply with the new regulation, and lawyers lurked in the shadows, waiting to sue any hospital that could be found violating the new rules.

So why are things so much worse now than in the 1970s-90s IRB golden age? I blame a more connected populace (cable TV, the Internet, Twitter, etc.), a near-tripling of lawyers per capita, and a lack of anything better to worry about (research was fastest during the World Wars, when the government didn't have the luxury to worry about consent form length). This is my Theory of Everything; if you don't like it, I have others.

Whitney tries to be more optimistic. A few ethicists (including star bioethicist Ezekiel Emanuel) are starting to criticize the current system; maybe this could become some kind of trend. Doctors who have been ill-treated are finding each other on the Internet and comparing stories. Greg Koski said that "a complete redesign of the approach, a disruptive transformation, is necessary and long overdue", which becomes more impressive if you know that Dr. Koski is the former head of the OHRP, i.e. the leading IRB administrator in the country. Whitney retains hope that maybe Congress or something will take some kind of action. He writes:

James Shannon’s IRB system, as established in 1966 and solidified by law in 1974, was an experiment, as are all attempts to manage our complex and changing society. Congress should try again, but it need not do so blindly. The present system’s vicissitudes make apparent some traps to avoid, while advances in public policy and risk management suggest a better approach. No system will be perfect, but we can do better, and doing so will protect subjects’ rights and welfare while improving the life of the nation, and the world.

  1. Whitney notes that Tuskegee University, which lent some facilities for the study but was otherwise uninvolved, is justly upset at being associated with this atrocity. He would prefer to call it the US Public Health Service Syphilis Study after the actual perpetrators. Today we remember the bold whistleblowers who blew the lid off this scandal, but I didn't realize how circuitous the path to exposure was. The researchers spent years being pretty open about their project to the rest of the research community. Peter Buxtun, an incidentally involved social worker (also "a libertarian Republican, former army medic, gun collector, and NRA member" - social workers were different in those days!), heard about it, was horrified, and tried to get it shut down. The relevant oversight board listened to his complaints politely, then decided there was no problem (the only issue the board flagged was the risk that it might make them look racist). Buxtun spent six years complaining about this to various uninterested stakeholders until finally a reporter listened to him and published an exposé.

  2. It’s not as bad as it sounds - adult staff at this state run school kept getting severe cases of hepatitis. Scientists investigated, and suspected that children had asymptomatic cases and were passing it on to staff. With parents’ permission, they deliberately infected some children with hepatitis to prove that it would be asymptomatic in them, and that therefore they must be the source of the staff infections. They were right, and their research led to better protection for staff with no negative effect on the children themselves. Still, the one sentence summary sounds pretty awful.

  3. I’m interested in great families, so I had to check if he was a member of the famous Beecher family of Boston Brahmins (think Harriet Beecher Stowe). Any relationship, if it existed, was very distant - born Henry Unangst, he changed his name to trick people into thinking he was higher-status. Ironically, he became more famous than any of them, and probably increased their prestige more than they increased his. I’m still against this; it cost us the opportunity to call the placebo effect “the Unangst Effect”.

  4. In one of his papers, he wrote: “How would you feel if your doctor suggested - not as part of a research study - that he pick the treatment you get by flipping a coin? Very few of us, as patients, would accept this type of behavior.”

  5. I’m mostly a fan of Whitney’s suggestions, but I’m not sure about this one. On the one hand, I understand why it would be good if, when IRBs make terrible decisions, researchers could appeal. But it also seems strange to have a panel of experts (eminent doctors, clergymen, etc) who can be overruled by a non-expert (a dean). Also, I find it hard to imagine a dean would ever do this - if anything ever goes wrong (or even if it didn’t) “the experts said this was unethical, but the dean overruled them” doesn’t sound very good.