Well, not entirely wrong, just the majority of it. And this is being said by a long-time, well-respected medical researcher.
In an article published this month by The Atlantic, titled "Lies, Damned Lies, and Medical Science," journalist David H. Freedman relates the story of researcher John Ioannidis (pronounced yo-NEE-dees). Ioannidis has spent most of the last two decades rebutting claims made by other researchers; he has made a career out of analyzing others' studies and their findings, and he has come to some rather shocking conclusions. In a paper published in PLoS Medicine in 2005, Ioannidis stated that 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. Ioannidis charges that as much as 90 percent of the published medical information that doctors rely on is flawed.
I am going to provide a few quotes, but really you should just read the whole thing. Freedman did an excellent job, and you would be missing out by not reading this article. All bolding is mine.
“I take all the researchers who visit me here, and almost every single one of them asks the [oak] tree [at the old Greek oracle site] the same question,” Ioannidis tells me, as we contemplate the tree the day after the team’s meeting. “‘Will my research grant be approved?’” He chuckles, but Ioannidis (pronounced yo-NEE-dees) tends to laugh not so much in mirth as to soften the sting of his attack. And sure enough, he goes on to suggest that an obsession with winning funding has gone a long way toward weakening the reliability of medical research.
That grant money affects not only what gets studied but also the outcomes is obvious. If you want to study squirrel mating, you can't just say that; you have to couch it in terms that will win funding, say, the effects of global warming on said squirrels, and then prove that global warming is affecting them. That ensures you get the initial funding, and then, after proving the "correct" fact, further funding down the road.
But beyond the headlines, Ioannidis was shocked at the range and reach of the reversals he was seeing in everyday medical research. “Randomized controlled trials,” which compare how one group responds to a treatment against how an identical group fares without the treatment, had long been considered nearly unshakable evidence, but they, too, ended up being wrong some of the time. “I realized even our gold-standard research had a lot of problems,” he says. Baffled, he started looking for the specific ways in which studies were going wrong. And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.
This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”
As above. If the man paying your check wants X, and you like collecting a paycheck, you will give him X. It's really that simple, and it happens in every job across the entire world. We have had more scientists than funding for some time now, so there is no doubt a strong sense of self-preservation among those working in the field.
In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right.
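The argument in that paper can be sketched numerically. Using the positive-predictive-value formula from Ioannidis's 2005 PLoS Medicine paper (where R is the pre-study odds that a tested relationship is true, alpha the significance level, beta the type II error rate, and u the fraction of results distorted by bias), even modest bias drags the share of correct findings down sharply. The example numbers below are my own illustration, not figures from the article:

```python
def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """Positive predictive value of a claimed finding (Ioannidis 2005).

    R     -- pre-study odds that a tested relationship is true
    alpha -- significance threshold (type I error rate)
    beta  -- type II error rate (power = 1 - beta)
    u     -- fraction of would-be null results reported as findings due to bias
    """
    num = (1 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u - u * alpha + u * beta * R
    return num / den

# A well-powered study of a fairly plausible hypothesis (1:4 odds), no bias:
print(round(ppv(R=0.25), 2))          # 0.8  -- most such findings are true
# The same study with 30 percent bias:
print(round(ppv(R=0.25, u=0.30), 2))  # 0.39 -- most findings are now wrong
```

With u at 0.30, fewer than half of the "positive" findings are actually true, which is exactly the "wrong findings most of the time" conclusion quoted above.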
My favorite studies are those where the researchers admit that they went into the study expecting X but came out with an answer of Y. To me that means they overcame their motivation to prove their theory right and instead followed the evidence to its conclusion. Working in IT, I see similar things at least every week. You may get a report and "know" just what the problem is, but once on the server you have to start following the evidence, or else you will end up trying to fix something that isn't the issue.
[Ioannidis] zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. … Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated.
41 percent of the retested claims from the most widely cited studies were wrong or significantly exaggerated? That is an amazing percentage. It means that much of the research built upon these 49 articles is also suspect. How many treatments were wrongly given or denied based upon this false research? And how much money was spent in the process?
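The 41 percent figure comes from the retested subset of claims, not all 49 articles, and the arithmetic checks out:

```python
claims_retested = 34  # of the 45 articles claiming an effective intervention
refuted = 14          # convincingly shown to be wrong or significantly exaggerated

print(f"{refuted / claims_retested:.0%}")  # 41%
```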
“Often the claims made by studies are so extravagant that you can immediately cross them out without needing to know much about the specific problems with the studies,” Ioannidis says. But of course it’s that very extravagance of claim … that helps gets these findings into journals and then into our treatments and lifestyles, especially when the claim builds on impressive-sounding evidence. “Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it,” he says. “It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals.”
Invested their careers. That's key. It's not an easy thing to change course, and when you have invested your career and your name in something, you want to see it succeed, even if it is incorrect (though it is still likely believed to be correct in that person's mind).
Nature, the grande dame of science journals, stated in a 2006 editorial, “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.” What’s more, the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues (that is, their potential reviewers) in ways that only seem like breakthroughs—as with the exciting-sounding gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are really just dubious and conflicting variations on a theme.
Most journal editors don’t even claim to protect against the problems that plague these studies. University and government research overseers rarely step in to directly enforce research quality, and when they do, the science community goes ballistic over the outside interference. The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results—except they don’t. Only the most prominent findings are likely to be put to the test, because there’s likely to be publication payoff in firming up the proof, or contradicting it.
Except they don’t. WOW. Science fetishists claim that "science is self-correcting," and in theory it is just that. But in practice, it is not. No one goes back over the results to see if they are true; there isn't enough money or other resources to do it. And when an "outsider" tries to point out errors or bias, they get beaten down by the collective "science community." There isn't a whole lot the rest of the world can say about science, in the current climate, to correct its course. If you need proof, look at what happens if you dare claim that fetal stem cell research is both wrong and a waste of time: despite nearly a 100-to-0 ratio of adult stem cell treatments to fetal stem cell treatments, all hell will break loose. Or you could just say out loud that Anthropogenic Global Warming is a fraud, but if you do, make sure you are near cover.
Perhaps worse, Ioannidis found that even when a research error is outed, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed—in one case for at least 12 years after the results were discredited.
Twelve years is a long time for misinformation to be floating about. Most industries could not get away with that. That the medical community can is likely just a testament to the amazing self-healing power of the human body.
Medical research is not especially plagued with wrongness. Other meta-research experts have confirmed that similar issues distort research in all fields of science, from physics to economics (where the highly regarded economists J. Bradford DeLong and Kevin Lang once showed how a remarkably consistent paucity of strong evidence in published economics studies made it unlikely that any of them were right).
ALL fields of science? That's quite a long way from the science fetishists' view of science. But of course it fits quite well with the view of humans as fallible.
I suspect Vox Day will have a field day with this article once he finds it. He has been pointing out these same problems for quite some time. Dave over at Hawaiian Libertarian has also had some things to say about medical science, particularly on the nutrition side of the house.