How Trustworthy Are Supplements?
[Epistemic status: not totally sure of any of this, I welcome comments by people who know more.]
Not as in “do supplements work?”. As in “if you buy a bottle of ginseng from your local store, will it really contain parts of the ginseng plant? Or will it just be sugar and sawdust and maybe meth?”
There are lots of stories going around that 30% or 80% or some other very high percent of supplements are totally fake, with zero of the active ingredient. I think these are misinformation. In the first part of this post, I want to review how this story started and why I no longer believe it. In the second and third, I’ll go over results from lab tests and testimonials from industry insiders. In the fourth, I’ll try to provide rules of thumb for how likely supplements are to be real.
I. Two Big Studies That Started The Panic Around Fake Supplements
These are Newmaster (2013) and an unpublished study sponsored by NY attorney general Eric Schneiderman in 2015.
Both used a similar technique called DNA barcoding, where scientists check samples (in this case, herbal supplements) for fragments of DNA (in this case, from the herbs the supplements supposedly came from). Both found abysmal results. Newmaster found that a third of herbal supplements tested lacked any trace of the relevant herb, instead seeming to be some other common plant like rice. Schneiderman’s study was even more damning, finding that eighty percent of herbal supplements lacked the active ingredient. These results were extensively and mostly uncritically signal-boosted by mainstream media, for example the New York Times (1, 2) and NPR (1, 2), mostly from the perspective that supplements were a giant scam and needed to be regulated by the FDA.
The pro-supplement American Botanical Council struck back, publishing a long report arguing that DNA barcoding was inappropriate here. Many herbal supplements are plant extracts, meaning that the plant has one or two medically useful chemicals, and supplement manufacturers purify those chemicals without including a bunch of random leaves and stems and things. Sometimes these purified extracts don’t include plant DNA; other times the purification process involves heating and chemical reactions that degrade the DNA beyond the point of detectability. Meanwhile, since supplements may include only a few mg of the active ingredient, it’s a common practice to spread it through the capsule with a “filler”, with powdered rice being among the most common. So when DNA barcoders find that eg a ginseng supplement has no ginseng DNA, but lots of rice DNA, this doesn’t mean anything sinister is going on.
Wouldn’t you expect scientific experts and attorneys general to know this sort of thing already? The American Botanical Council certainly expected that, and its report was pretty scathing:
The paper by Newmaster et al. is problematic on a number of levels. The approach taken by the authors was to create a number of homemade definitions and then evaluate materials against those definitions using DNA fingerprinting. This would perhaps be acceptable if a review of the list of cited references had not indicated that the authors are apparently deeply unaware of herbal quality-assurance procedures and programs and the nature of commercial botanical products. There are internationally recognized definitions for identity, authenticity, contamination, and substitution. Invention of new definitions for these terms by the authors in order to demonstrate the novelty of their approach and their technical virtuosity is self-referential and, unfortunately, very possibly self-serving. Their apparent lack of adequate knowledge in this field has allowed them to create a virtual problem and then, figuratively, ride to the rescue and solve it.
You may also enjoy reading the part of the report where they just list every single nitpicky mistake the authors have ever made and how angry they are about all of them:
Furthermore, in Table 1, the authors used the scientific name “Plantago ovate.” Presumably, they are referring to P. ovata (Plantaginaceae), and their spellchecker automatically tried to correct “ovata” to “ovate,” an error that the authors, and, presumably, peer reviewers, should have detected. Even though this might seem trivial, and, in some cases, it might be, it implies that either the authors and/or the peer reviewers are not adequately familiar with the scientific botanical nomenclature, and presumably, other aspects of botany and medicinal plants that could have informed a more cohesive and reliable publication.
So far, this seems like a pretty normal scientific dispute. But I think subsequent events pretty firmly supported the Botanical Council’s side.
Attorney General Schneiderman tried to leverage his study into a lawsuit against supplement manufacturers. He got a lot of pushback, with even anti-supplement scientists coming out against his data (1, 2). GNC, one of the companies that failed the original study, sent the same supplements to a respected third-party lab, which said all of them seemed totally fine. The end result of the lawsuit seemed inconclusive to me - GNC agreed to test its products better, and the Attorney General declared victory - but people who know more about law and the industry suggest this was a face-saving measure allowing the Attorney General to gracefully retreat. The Forbes article on the case was titled “Supplement Companies Sitting Pretty After AG’s Blunder”.
Meanwhile, Dr. Steven Newmaster, lead author of the original study, has had one of his other DNA barcoding papers retracted for suspected fraud. Science magazine did an investigative report on him, claiming that:
An investigation by Science found the problems in Newmaster’s work go well beyond the three papers. They include apparent fabrication, data manipulation, and plagiarism in speeches, teaching, biographies, and scholarly writing. A review of thousands of pages of Newmaster’s published papers, conference speeches, slide decks, and training and promotional videos, along with interviews with two dozen current and former colleagues or independent scientists and 16 regulatory or research agencies, revealed a charismatic and eloquent scientist who often exaggerated, fabulized his accomplishments, and presented other researchers’ data as his own.
It paints a picture of Newmaster as founding a DNA barcoding company, then publishing fraudulent studies to prove DNA barcoding was better than all other testing methods, or that companies who refused to use his DNA barcoding had fake or contaminated products. The particular 2013 study indicting the supplement industry has not been retracted, with the journal deferring to an internal investigation by Newmaster’s university that found no wrongdoing, but Science suggests the internal investigation was biased and we shouldn’t trust it. Overall I don’t have very much confidence in the study.
But at least Attorney General Schneiderman, while not doing very well scientifically, hasn’t been completely discredited as a human being, right?
Jane Mayer and Ronan Farrow reported in The New Yorker that Schneiderman had physically abused at least four women during his tenure as Attorney General. According to the report, Schneiderman had, between about 2013 and 2016, committed acts of violence against three romantic partners (blogger and activist Michelle Manning Barish, author and actress Tanya Selvaratnam, and a third woman), as well as an unnamed female attorney. The women said that Schneiderman had choked, hit or violently slapped them, all without their consent. Selvaratnam added that Schneiderman spat on her, choked her, called her his “brown slave,” ordered her to call him “Master” and say that she was “his property,” and demanded that she find another woman who would be willing to engage in a ménage à trois. Both Selvaratnam and Barish alleged that Schneiderman engaged in a pattern of alcohol abuse, and that he had threatened to kill them if they ended their respective relationships with him.
Oops, it looks like Schneiderman is among the approximately 100% of New York elected officials who have resigned after being accused of egregious sexual harassment. According to Wikipedia, he was most recently spotted retraining to become a meditation teacher, which is not how I expected this saga to end. Maybe he can recommend his students take herbs!
Anyway, as far as I know none of the media sources that signal-boosted the original story have ever apologized or discussed whether they still agree with it, and many of the people I talk to are still letting the original coverage shape their opinions. I think at this point those studies are untrustworthy and shouldn’t be determining our beliefs here.
II. What Consumer Review Laboratory Sites Say
A few companies do Consumer Reports style analyses of supplement brands. For a fee (or sometimes for free, supported by ads), they will analyze supplements and tell you what they find. The two biggest sites I know of in this space are LabDoor and ConsumerLab.
Some people have criticized LabDoor, saying it has limited transparency and a flawed ranking system: it subtracts points from companies that go slightly over the amount of product stated on the label (eg the label says 100 mg of the active ingredient, but the capsule actually has 120 mg). But it’s impossible to always hit an exact target (eg exactly 100 mg of ingredient), and reputable companies make sure they’re more likely to exceed the target by a little bit (eg 120 mg) than to fall below it. These kinds of mild excesses aren’t dangerous and are considered industry standard, but LabDoor penalizes them as heavily as serious errors. [EDIT: LabDoor responds here]
But as long as we look at their raw data, we should be able to avoid any rating problems. And the same critics are positive about ConsumerLab. So I think looking at data from both these companies could be a good way to figure out how accurate supplement labels are.
Like most labs, these don’t use the questionable DNA barcoding techniques mentioned in Part I. They primarily use chromatography and spectroscopy, normal well-validated methods for these kinds of tests. Still, there are some complications. Most herbs have dozens of closely related chemicals in them. Someone does a study suggesting that one of them is good for you, and then supplement makers try to extract that one. But other supplement makers disagree, or prefer to extract some very closely related chemical which probably does the same thing as the first. Depending on how they do this, they might fail a test that is too myopically focused on the amount of the first chemical.
Easy Mode: Magnesium
Magnesium is a metal. There is no complicated extraction process. There’s no debate over which ingredient is the active one. It’s just magnesium. You either have it or you don’t. Some suppliers bind it to different molecules, and others do special things to make it bioavailable, but something that claims to have 100 mg magnesium should always have 100 mg magnesium.
Labdoor analyzes 30 brands of magnesium supplement. 25 earn As, 3 earn Cs, and 2 flunk. Of the two that flunk, one has only 60% as much magnesium as claimed, and the other has almost 3x as much magnesium as claimed. No product has an unsafe amount of heavy metals, although the worst have between a third and half of the government’s safety limit.
ConsumerLab analyzes twelve magnesium brands. Eleven pass and one fails. The failure had only about 80% as much magnesium as claimed. The brand that Labdoor said had 3x the claimed amount of magnesium was completely fine according to ConsumerLab, although Labdoor checked the neutral flavor and ConsumerLab checked the raspberry flavor. The company involved claims to have done an investigation and found that their supplement had the amount they claimed, so it’s possible Labdoor was in error here.
Overall this seems good, with almost all brands having close to their labelled amount of ingredient, and the worst having 60-80%. One brand may have gone way above labelled amount, but this is questionable and I suspect a testing error.
Hard Mode: Bacopa and Ashwagandha
Bacopa and ashwagandha are Ayurvedic (Indian traditional medicine) herbs used for stress. I’m listing them as hard mode because turning them into supplements usually involves extracting the active chemicals (bacosides from bacopa, withanolides from ashwagandha) from the plant. If there were problems with herbal medicines, this would be where we would expect to find them. Labdoor has bacopa but not ashwagandha, and ConsumerLab has vice versa, so I’m combining them for this investigation.
Labdoor investigates three brands of bacopa, giving two Bs and one C. These grades are lower than As only because LabDoor finds the brands have slightly more bacopa than the label promises. As mentioned above, this shouldn’t be considered an error, and a more reasonable version of LabDoor would probably have given all of them As.
ConsumerLab investigates fifteen brands of ashwagandha. They approved 11, were “uncertain” about 3, and rejected 1. One “uncertain” company was flagged because it claimed to be extracting different chemicals than the ones ConsumerLab was set up to detect - but it’s a reputable brand and I give it the benefit of the doubt here; the two others had accurate labels but may or may not have been underdosing. The reject was Himalaya Ashwagandha, which claimed to have 3 mg of the active ingredient but really had 3.3 mg. This would normally qualify as okay, but ConsumerLab says that based on the extraction process used it should have had 4.4 mg, and they are confused about why this didn’t happen. I have trouble holding this against Himalaya given that their label is basically correct. Himalaya also gets dinged because apparently a reasonable dose of this product would be 6 mg, which they do not reach.
But ConsumerLab’s tagline for this page is “Only 56% Of Products Passed Tests”, and I see discussion on Reddit about how “only 3 products passed”. I’m pretty confused by this; maybe those claims were true of an earlier version of the page, and products have gotten much better since then?
Pending figuring out what’s going on with the ashwagandha, my impression here is that most supplements have very close to what they say on the label. When they don’t, it’s more likely to be deviations of 25-50%, rather than complete fraud where the pill is full of rice or sawdust or something.
I looked through several other supplements on these sites and these results are typical, although maybe slightly better than average. The most concerning category was mushrooms, where about 25% of brands used some mycelium (the “roots” of a mushroom, which have fewer health-promoting chemicals than the above-ground part) instead of or in addition to the mushroom itself. This is a known issue with mushroom supplements; not enough people know the difference for companies to be consistently incentivized to get it right.
III. MYASD From Nootropics Depot
Nootropics Depot is a supplement company that actively engages with the supplement community on Reddit. A big part of the engagement is their CEO, who goes by the Reddit username MisterYouAreSoDumb, talking about his experiences running the company and answering customer questions.
A common feature of his experiences is worrying that his competitors are doing a bad job with lab testing, or getting an unfair advantage by skimping on active ingredient in their product. His writings on this have shaped my understanding of this area and I wouldn’t feel comfortable posting this essay without including some of them. Obviously these are potentially biased (he owns a competing company!) but I still find them useful.
I’m going to err on the side of posting many of his very long comments, because I love this stuff - but if you get bored, read one or two and then skip down to the conclusion section.
Here’s MYASD on Magtein, a special patented bioavailable form of magnesium:
We’ve actually tested other Magtein products on the market and found they contain less than stated on the label. Some of them had half of what they claimed. We have argued with [manufacturer] AIDP about it, and sent them multiple products to go after. I am not sure how they handle it in the background, but not much seems to have changed.
Also, you can get magnesium L-threonate made in China WAY cheaper than from AIDP. However, that violates their patent. China doesn’t care, though. They offer it to us all the time. We obviously don’t use it, but I know other brands are. Let’s say you buy 1,000kg from AIDP and you mix it with 2,000kg from China. Well you have just reduced your costs by more than HALF. There would be no way to tell in the lab, and no way for AIDP to prove that is what happened. I know other brands are doing this, as some of them were selling below our cost from AIDP.
Then AIDP put a MAP agreement in place, or minimum advertised pricing agreement. This is a standard thing with patented ingredients to ensure everyone selling it follows the rules from the licensed distributor. Obviously we follow the rules exactly, like we always do, but other brands regularly break the MAP agreement. We also send these to AIDP, and they claim they are forcing the brands to adjust their pricing to comply. However, these brands keep doing it […]
It’s really hard to be competitive when we are literally the only ones following the rules! However, we have no power here. We are not the patent holder, nor are we the licensed distributor….We’ve asked if they could be more flexible on price before, because they are marking it up a LOT from what it costs to produce. However, they say they are using the profits from it to do more research, so they can’t lower prices. I don’t know, the whole thing is fucking ridiculous. We literally can’t lower prices. Our margins are razor thin as it is. So either these big companies are putting less in the capsule than stated (we have proven that on some of them), are mixing generic Chinese L-threonate powder into runs to bring the cost/mg down, or they are getting favorable pricing from AIDP that we just can’t get. Honestly, it’s probably a mixture of all three, so we just get fucked […]
If you knew the half of the shit that goes on in the background of this industry, you’d be disgusted. My lab director went to the AOAC conference last week. Scientists from most of the analytical labs in the US were there, and many of the quality directors from the bigger brands. My lab director was just openly calling products out that we tested and had failed, and everyone was looking at him like he was breaking decorum. There is an unspoken rule in this industry that you don’t call out other brands for quality issues, because you know you have some of your own. It’s insane! Everyone knows all the products fail. Everyone knows almost nobody is doing things right. However, the status quo makes too many people too much money to change. He was going back and forth with the lab director from NOW Foods, and they have been doing similar things to us. They have been buying products on Amazon and testing them in their lab. Surprise surprise, tons are failing. However, they can’t get Amazon to do anything about it. The unwritten rule there is that they don’t want to hear about quality issues. They want to put their fingers in their ears and go la la la la laaaaaa. Truly! You can test this yourself. Write Amazon support asking how to report fake reviews, and they will give you a place to report them. Then ask where you can report fake or impure product, and they will literally stop talking to you. We have tried. We’d love to just send Amazon our testing results of their top products failing lab testing, but they shut any discussion of it down. There’s not much money in admitting you have been selling products that don’t meet label claims, and it is so widespread that fixing it would upend the entire industry, so covering their eyes and ears is their choice. In fact, we have been warned that if we make too many waves, we might be punished instead. They might just shoot the messenger because that is easier.
So anyway, that’s the situation. To say I am frustrated doesn’t even scratch the surface. However, I can only fight so many fights at once. Even so, we were able to add a 540ct bottle that gets the price per dose to the same as Jarrow and LEF, while still complying with the TMLA and MAP agreements. Ours will also always have what we claim.
More on milk thistle:
[Big supplement corporation] NOW has their own in-house lab in Chicago. That’s where they test everything. It’s about 3-4 times bigger than our lab. Of course they are much bigger than we are, and have been around much longer. That being said, their milk thistle also failed our testing. It had 56% of the claimed silymarins. Literally everyone’s milk thistle that we tested failed. That’s because they use the faulty UV-Vis number that overstates the actual silymarins by double. Everyone is claiming 80% silymarins, but it really only has 40-50% when tested properly with HPLC or UPLC. Real 80% and 90% silymarin extracts do exist, but they are much more expensive. Why use those when you can just sell the cheaper 40% and claim 80% like everyone else? That’s the problem with this industry. If you are the only one properly labeling your stuff, you look less potent than everyone else that is improperly labeling.
On reishi mushroom extract:

We have been working on a reishi extract specifically targeting the ganoderic acids for a long time, but lab testing limitations made that difficult. For a while one of our competitors was selling a reishi extract that they claimed contained 6% triterpenes, which ganoderic acids are. However, the lab they were using was a fake dry lab. When they sent the sample to Alkemist, it turns out it was only 1.8%. That’s still pretty good, as most of the stuff on the market we tested has almost none. However, they charge ridiculous amounts for it! We wanted to do a much more potent one for a much better price. It took us a while to get it right, and it turns out a straight ethanol extraction was the best way to get the highest ganoderic acids. We’ve been able to get as much as 15% in some extractions, but we settled on 9% as the standardization, as that is what we can reliably hit every time. This batch has over 10%. We are also only taking into account the 10 ganoderic acids as part of the USP list. There are other ones in this extract that are not a part of that 9% number, so this is a VERY potent extract.
On tongkat ali:
100:1 and 200:1 are lies. They are fake ratios made up to sell more tongkat. We’ve spoken to all the suppliers, and they have admitted that they have to call them that because that is what customers want to hear. They are more like 4:1 extracts. Even then, we have tested a ton of them on the market. Most have zero detectable eurycomanone. Some people are just selling non-extracted root and calling them 100:1 or 200:1 extracts. You can’t test for extraction ratios. Anyone can claim anything they want. I can say ours is a 1,000,000:1 extract. It’s all made up meaningless bullshit. You assay for eurycomanone. That’s how you lab test for the potency of a tongkat ali extract. We’ve done that with tons of product on the market, and the results are horrible. If you straight up ask the Chinese suppliers: “Are these REALLY 100:1 extraction ratios?” they will admit they are not. It’s ridiculous! They just make up fake ratios to tell everyone, and it has been going on so long everyone just plays along with the lie.
On fadogia, a plant which Joe Rogan’s podcast recently promoted as a testosterone-booster:
Huberman went on Rogan, and suddenly a rare plant from Nigeria that was not offered by anyone before is suddenly being offered by every single Chinese supplier out there almost overnight. We literally had random Chinese companies emailing us the same week that podcast came out offering us Fadogia. Do you expect me to believe that a rare plant found in a small part of western Africa, that is difficult to cultivate even in the region it is from, suddenly had a whole supply chain set up within a week?!? A plant that was not offered in China the night before, and doesn’t grow in China, suddenly popped into existence overnight because of one Joe Rogan podcast? I guess all these other shitty brands believed it, or they just don’t fucking care about everyone’s health enough to question it. There’s money to be made! We won’t let science or validation get in the way of those profits!
Want an easy way to know who not to buy from? Basically go find any company selling Fadogia agrestis right now. Make a list of all of them. Then never buy from any of those companies ever again. Super simple!
And in response to someone asking him directly how bad the industry was, and whether lots of vitamins from reputable brands had no active ingredient:
Absolutely none? Probably zero from bigger brands. 50% off? Truthfully, I have not gone out and tested basic vitamins from brands. We have stuck to mostly botanicals and other small molecule organics. My assumption is that basic vitamins should be more accurate across the industry. However, that’s not based on data. That’s just me assuming it is, based on the more fleshed out methods and standards for basic vitamins. I could be wrong. Brands that I used to trust have failed for basic things like milk thistle after we tested it, so anything is possible. The biggest issue is going to be accuracy to label claims. We looked through some of the COAs from a big contract manufacturer that supplies many brands across the industry, and they had EXTREMELY wide acceptance windows for vitamins. Like let’s say the label claimed 100iu of vitamin E. They would accept a batch with 200iu. That’s crazy to me.
We set reasonable overages (~10%), and we set them in such a way that it accounts for any machine variance. So let’s say you are running on a machine that has a variance of +/-5%. This means if you are shooting for 100mg, half might be 95mg and half might be 105mg. So we set our target for 105mg. That way the low end of the variance is still at label claim. We don’t just YOLO a bunch more into the capsule to ensure we meet claims, resulting in people getting way higher doses than they expect. However, we have seen that in a lot of brands out there. The way the regulations are written is that any ingredient has to be at or above label claim “within reason.” You hear some people saying that you can have a +/- 20% variance, but that is not what the regulations state. Those are specifically for naturally occurring vitamin content in a non-standardized botanical. What that means is that if you are using orange peel powder, you will put the expected vitamin C content per dose on the label. The FDA gives you a 20% variance to account for nature not always being the same. However, if you put vitamin C in yourself, it has to be at or above what you claim, within reason. The same goes for standardized extracts. You can’t say you are putting 100mg of a 10% vitamin C orange peel extract, then only have it have 8mg vitamin C… If you make claims to standardization or content of ingredients, you have to ALWAYS be at or above that number. However, not a single contract manufacturer we have worked with does that. They all formulate to target, and just accept the variance. That variance from USP is this:
> Weigh 20 intact capsules individually, and determine the average weight. The requirements are met if each of the individual weights is within the limits of 90% and 110% of the average weight.
> If not all of the capsules fall within the aforementioned limits, weigh the 20 capsules individually, taking care to preserve the identity of each capsule, and remove the contents of each capsule with the aid of a small brush or pledget of cotton. Weigh the emptied shells individually, and calculate for each capsule the net weight of its contents by subtracting the weight of the shell from the respective gross weight. Determine the average net content from the sum of the individual net weights. Then determine the difference between each individual net content and the average net content: the requirements are met if (a) not more than 2 of the differences are greater than 10% of the average net content and (b) in no case is the difference greater than 25%.
> If more than 2 but not more than 6 capsules deviate from the average between 10% and 25%, determine the net contents of an additional 40 capsules, and determine the average content of the entire 60 capsules. Determine the 60 deviations from the new average: the requirements are met if (a) in not more than 6 of the 60 capsules does the difference exceed 10% of the average net content and (b) in no case does the difference exceed 25%.
So you can see that USP allows +/- 10% from label claim for fill weight, with some capsules being allowed to be as much as 24.9% off. However, USP is not the FDA, and the FDA regulations clearly state you need to be at or above label claim, within reason. So if you just use USP capsule weight variance testing, you could be “passing” for USP and “failing” for FDA. This is because the USP weight variance testing is for fill weights, not contents as they relate to label claims. This means you need to build your processes intelligently, calculate your exact manufacturing variance, then account for that manufacturing variance in your formulation. Our internal manufacturing variance is much much tighter than USP, because we have multiple control points throughout the whole process. We lab assay at formulation, to ensure the formulation actually accomplished what we set out to. Then we weight variance test in-process, to ensure our manufacturing is staying in-line with our targets. Then we weight variance test and lab assay again after manufacturing, to ensure the entire batch meets specs for label claim, that all capsules are within variance spec, and that our manufacturing process went smoothly. I can’t stress enough how very few companies out there do that level of testing. Almost nobody does; even the big guys. It’s a pain in the ass, and only works with your manufacturing flow if you built your processes around quality control, not using QC as an afterthought. There are a number of reasons we do that. One: I think accuracy and precision is worth the time and cost. Two: we know for absolute certainty that our products claim what we say they do. Three: we lower raw material waste and losses, and prevent costly reworks, since we can catch any variance happening before it affects millions of capsules/tablets. Then we can do a root cause analysis to figure out why, correct that, then move on with the batch accurately. Four: because those who throw stones shouldn’t live in glass houses. If I am going to call out actors in this industry, I better be damn sure I have my own house in order first. Five: cause fuck them, that’s why!
So you have multiple places where inaccuracies and variance can happen. Obviously that starts at the raw material. If the raw material doesn’t contain what you think it does, then nothing after that will contain what you claim it to. Since very few companies test every single batch/lot of every raw material that enters their facilities, and instead rely on skip-lot testing, this is a big failure point for many brands. However, that’s just the start of the process. Testing the raw material is all well and good, but if you don’t build your processes around accuracy and precision, it really doesn’t matter. Formulation is the next place where inaccuracies can happen. Most brands don’t have in-house labs, so they can’t do in process formulation assays like we do. They have to assume they calculated and formulated right, then assume they mixed correctly in line with that formulation. Most formulation techs don’t give a shit, so this is a step that causes inaccuracies often. The next place that inaccuracies arise is the actual manufacturing stage. If your formulation is right, it really doesn’t matter if your encapsulation machine is putting too much, too little, or too variable of that formulation into the capsules. Again, almost nobody does the level of in-process testing we do, so the manufacturing step is a place that inaccuracies arise a lot. Now imagine you did none of it… You didn’t test that specific batch of raw material to ensure it contained the % active you claimed. You didn’t assay the formulation to ensure you did your math right, and your mixing staff actually did their jobs right. Then you didn’t measure and assay in-process during manufacturing. You just assumed your machines were filling correctly. Then you didn’t assay the finished batch of capsules when it was done. Remember, none of those things are FDA requirements. They don’t say you HAVE TO test like we do. They just say the end result has to be at or above label claim, within reason. How you get there is up to you as a brand. Coincidentally, it’s also the brand that is on the hook legally if things are not right. It’s not the lab that you used. It’s not the contract manufacturer you used. It’s the retail brand selling to consumers that needs to ensure what they are claiming is in there. So imagine you did none of the QC testing we do. How many inaccuracies and variability do you think you would see batch-to-batch AND intrabatch? Vitamin or botanical doesn’t really matter for most of those things. There might be less variability in the raw material batch with vitamins (again, we are assuming). However, the variance in formulation, mixing, and manufacturing is all pretty similar. This speaks nothing to impurities, too. Maybe something has the amount of vitamin C or E it claims, but it also has other shit in there they don’t know about. Maybe residual solvents. Maybe heavy metals. Maybe some synthesis impurity left over.
So that’s a long-winded way of saying: “it depends.” In process variance doesn’t really care about what the raw material is, be it a complex botanical or a simple vitamin. Should there be less variance at the raw material stage for a simple vitamin? Yeah, I would think so. Still, that’s only a small part of what makes a finished product.
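(To make the arithmetic in that last comment concrete, here is a minimal sketch in Python of the two procedures he describes: setting an overage target so the low end of machine fill variance still meets the label claim, and the first two stages of the USP capsule weight-variation test quoted above. The function names and example numbers are my own illustrations, not anyone’s actual QC code, and the 60-capsule third stage is omitted.)

```python
# A rough sketch, not official USP/FDA tooling: (1) formulate with an overage
# so the low end of machine fill variance still meets the label claim, and
# (2) the first two stages of the USP capsule weight-variation test quoted
# above. All names and numbers here are illustrative.

from statistics import mean


def overage_target_mg(label_claim_mg: float, machine_variance: float = 0.05) -> float:
    """Fill target such that even a capsule at the low end of the
    +/- machine_variance range still contains the label claim.
    (The comment above describes this as roughly a 5% overage,
    e.g. 100 mg -> ~105 mg.)"""
    return label_claim_mg / (1.0 - machine_variance)


def usp_weight_variation_pass(gross_weights_mg, shell_weights_mg=None):
    """Stages 1 and 2 of the USP weight-variation test for capsules, as
    quoted above. Takes 20 gross capsule weights; emptied-shell weights are
    only needed if stage 1 fails. The 60-capsule third stage is omitted."""
    assert len(gross_weights_mg) == 20
    avg = mean(gross_weights_mg)

    # Stage 1: every intact capsule within 90%-110% of the average weight.
    if all(0.90 * avg <= w <= 1.10 * avg for w in gross_weights_mg):
        return True

    if shell_weights_mg is None:
        return False  # can't run stage 2 without emptied-shell weights

    # Stage 2: net content = gross - shell. Pass if at most 2 capsules
    # deviate from the average net content by more than 10%, and none by
    # more than 25%.
    net = [g - s for g, s in zip(gross_weights_mg, shell_weights_mg)]
    avg_net = mean(net)
    deviations = [abs(n - avg_net) / avg_net for n in net]
    return sum(d > 0.10 for d in deviations) <= 2 and max(deviations) <= 0.25


# Example: a 100 mg label claim filled on a machine with +/-5% variance.
print(round(overage_target_mg(100), 1))  # ~105.3 mg fill target
```

Note that the USP test only constrains fill weight relative to the batch average - a batch could pass it while every capsule is below label claim, which is his point about “passing” for USP while “failing” for FDA.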
On which competitors he respects:
NOW has their own in-house lab, and they have been trying to clean up the standards of the industry for a while. We have had a couple things of theirs fail, but only for content lower than label claim, not fake or impure or anything like that. So while I wouldn’t say trust everything from them 100%, NOW is one of the better ones in this industry seemingly trying to make things better. I also have more trust for Thorne than most […] every single brand we have tested so far has had at least one thing fail, save for Thorne. Some of the brands have almost everything fail. Others like Life Extension, Jarrow, and NOW have most things pass with only some failing. It’s a crap shoot.
He also has a very long and fascinating comment about turkesterone which this margin is unfortunately too small to contain.
Believe it or not, I am restraining myself in terms of how many MYASD comments I post - you can go here for more, including his adventures with payment processors, lawsuits, and cryptocurrencies.
The results from the consumer lab companies seemed very promising - almost everyone was good. MYASD’s experiences seem like the opposite - almost everyone seems bad. How do we reconcile these?
I think some of the difference is in what supplements we’re talking about. MYASD admits that most simple vitamins, minerals, and amino acids are fine. He finds the most problems with tongkat ali, fadogia, and (offscreen) maca and turkesterone. These are all testosterone/libido/strength boosting supplements for men. My guess is that the most reputable companies avoid these, the consumer base is less discerning, and so these are genuinely worse than other products.
That still leaves the mushrooms, milk thistle, Magtein, and several other products he talks about that I haven’t reposted here. We already talked about mushrooms. The milk thistle seems like a special case - it sounds like the industry standard is flawed there. Magtein sounds like companies competing to sell an expensive product for the lowest possible price and having an easy substitution available.
IV. Conclusion
Claims from the mainstream media that most supplements are completely fake and don’t even contain the active ingredient are probably just wrong.
Most simple supplements, including vitamins, minerals, and amino acids, are very likely to have the amount of product shown on the label. A few less reputable brands might differ by 25%, rarely 50%, practically never more than that.
Botanicals are more complicated. Commonly-used botanicals from reputable brands are usually about as trustworthy as vitamins, but there are lots of complications around extraction processes and sometimes you might get 50% more or less than you thought. Less-commonly-used botanicals are less clear; you still will rarely find outright sugar pills, but you may find people bungling the chemistry, not caring too much about exact amounts, or selling mushroom mycelium instead of fruiting body. “Male enhancement” products are their own special danger zone, as is anything that’s been featured on The Joe Rogan Experience; you should be extra careful to buy from only the most reputable companies. I trust Nootropics Depot, Thorne, NOW, and Jarrow, in that order, but you’ll want to do your own research and maybe check ConsumerLab for the particular product you’re buying.
But zooming out: what is the proper dose of the antidepressant Lexapro? Trick question - it depends on the person - I have seen people need anywhere between 2.5 and 30 mg of it. So how do I prescribe it to someone? I usually tell them to start at 5 mg, then go up or down depending on whether it seems to be working, causing side effects, etc. A lot of herbal supplements are similar. If this is your strategy, a 25% labeling error isn’t going to matter much, is it? If my patients get their Lexapro from a sketchy company that actually has only 4 mg in a 5 mg pill, they’re still going to go up to 20 mg or down to 2.5 mg or whatever it is they end up needing, based on how it affects them. It will cause problems if they ever change brands, but probably not very many problems. And all of these substances have a wide enough therapeutic index that you’re not going to get toxicity from a 25% mislabeling. So although it’s correct for the industry to obsess over getting this right for purposes of honesty and reputation, as a consumer I don’t personally worry about it as much, except for rare substances where specific doses are constant between people and make a big difference, or where you’re not supposed to feel anything (eg it decreases heart attack risk over ten years) and so you can’t titrate the dose to effects. If it’s just something that’s supposed to make you less anxious, check how anxious you get!