Respectful Insolence

"A statement of fact cannot be insolent." The miscellaneous ramblings of a surgeon/scientist on medicine, quackery, science, pseudoscience, history, and pseudohistory (and anything else that interests him)


Preclinical research has a problem, but that doesn't mean religion is better

Category: Cancer, Clinical trials, Medicine
Posted on: April 30, 2012 3:12 AM, by Orac

Remember Vox Day?

Sure, I bet you do, at least if you've been a regular reader of this blog for more than a year or two. If you're a really long-timer, you probably remember him even better. Let's just put it this way: Vox is a guy with a much higher opinion of his intellectual prowess when it comes to science than the bleatings he calls a blog would warrant. I do have to thank him, though. Besides giving me occasional material to apply some well-deserved not-so-Respectful Insolence to, on rare occasions he even points me in the direction of interesting studies. Of course, Vox being Vox and all, he usually completely misinterprets them, but that gives me the introduction I need to dive into the study itself and have a bit of fun puncturing his pretensions at the same time.

Who could ask for more?

Basically, what happened is that a while back Vox saw a news report about an article in Nature condemning the quality of current preclinical research. From it, Vox, as is his usual wont, drew exactly the wrong conclusions about what this article means for medical science:

Fascinating. That's an 88.6 percent unreliability rate for landmark, gold-standard science. Imagine how bad it is in the stuff that is only peer-reviewed and isn't even theoretically replicable, like evolutionary biology. Keep that figure in mind the next time some secularist is claiming that we should structure society around scientific technocracy; they are arguing for the foundation of society upon something that has a reliability rate of 11 percent.

Now, I've noted previously that atheists often attempt to compare ideal science with real theology and noted that in a fair comparison, ideal theology trumps ideal science. But as we gather more evidence about the true reliability of science, it is becoming increasingly obvious that real theology also trumps real science. The selling point of science is supposed to be its replicability... so what is the value of science that cannot be repeated?

No, a problem with science as it is done by scientists in the real world doesn't mean that religion is true or that a crank like Vox is somehow the "real" intellectual defender of science. (I must admit, though, that that line about "real theology" trumping "real science" is a howler.) Later on, Vox doubled down on his misunderstanding by trying to argue that the study he so eagerly gloated over proves that science is not, in fact, "self-correcting." This is, of course, nonsense, in that the very article Vox is touting is an example of science trying to correct itself! However, nothing ever seems to stop Vox from laying down serious nonsense whenever he thinks he's found "evidence" that atheists are wrong and science is leading us astray. None of this is surprising, of course, given that Vox has demonstrated considerable crank magnetism, being antivaccine, anti-evolution, an anthropogenic global warming denialist, and just in general anti-science. He's also known for being too much of a crank at times even for WorldNetDaily, as when he displayed incredible ignorance of basic history by suggesting that Hitler's method of dealing with an unwanted population shows us that it's "possible" to deport 12 million illegal aliens. As I put it in taking down his nonsense: hey, it worked for Hitler.

Unfortunately, Vox is not alone. Quackery supporters of all stripes are jumping on the bandwagon to imply that this study somehow "proves" that the scientific basis of medicine is invalid. A minion of Mike Adams' writing at his wretched hive of scum and quackery, NaturalNews.com, crowed:

Begley says he cannot publish the names of the studies whose findings are false. But since it is now apparent that the vast majority of them are invalid, it only follows that the vast majority of modern approaches to cancer treatment are also invalid.

But does this study show this? Do the findings reported in this article mean that the scientific basis of cancer treatment is so off-base that quackery of the sort championed by Mike Adams is a viable alternative or that science-based medicine is irrevocably broken? Or that, as Vox crowed, even the best science is roughly 90% unreliable?

Not so fast there, O cranks...

A systemic problem with preclinical research? Maybe. Maybe not.

One of the most difficult aspects of science-based medicine (and science in general) to convey to the public is just how messy it is. Scientists know that early reports in the peer-reviewed literature are by their very nature tentative and have a high probability of ultimately being found to be incorrect. Unfortunately, that is not science as it is imbibed by the public. Fed by too-trite tales of simple linear progressions from observation to theory to observation to better theory taught in school, as well as media portrayals of scientists as finding answers fast, most people seem to think that science is staid, predictable, and able to generate results virtually on demand. This sort of impression is fed even by shows that I kind of like for their ability to excite people about science, for instance CSI: Crime Scene Investigation and all of its offspring and many imitators. These shows portray beautiful people wearing beautiful pristine lab coats, backlit in beautiful labs, using perfectly styled multicolored Eppendorf tubes to do various assays and getting answers in minutes that normally take hours, days, or sometimes weeks. Often these assays are all done over a backing soundtrack of classic rock or newer (but still relatively safe) "alternative" rock. And that's just for applied science, in which no truly new ground is broken and no new discoveries are made.

Real scientists know that cutting edge (or even not-so-cutting edge) scientific and medical research isn't like that at all. It's tentative. It might well be wrong. It might even on occasion be spectacularly wrong. But even results that are later found to be wrong are potentially valuable.

Sometimes moviemakers and TV producers get it close to right in showing how difficult science is. For example, I once pointed out how the HBO movie Something the Lord Made showed just how difficult it could be to take a scientifically plausible hypothesis and turn it into a treatment. In most movies, TV shows, and popular writings, the retrospectoscope makes it seem as though what we know now flowed obviously from the observations of scientific giants. Meanwhile, the news media pounces on each new press release describing new studies as though each were a breakthrough, even though the vast majority of new studies, even seemingly interesting ones, fade into obscurity, to be replaced by the next new "breakthrough."

In the real world of science, however, things are, as I said, messy. What amazes me is that two scientists could themselves be so amazed upon discovering just how messy science is. I'm referring to a commentary that appeared in Nature three weeks ago by C. Glenn Begley, a consultant for Amgen, and Lee M. Ellis, a cancer surgeon at the University of Texas M.D. Anderson Cancer Center. It is this commentary that got Vox all gloaty and Adams' minion all excited. The article was entitled, unimaginatively enough, Drug development: Raise standards for preclinical cancer research. This article is simultaneously an indictment of preclinical research in cancer and a proposal for correcting the problems identified. It is also simultaneously disturbing, reassuring, and, unfortunately, more than a little misguided.

Before I get into the article, let me just expound a bit (or pontificate or bloviate, depending on what you think of my opinionated writing) about preclinical research. Preclinical research is, by definition, preclinical. It's the groundwork, the preliminary research, that needs to be done to determine the plausibility and feasibility of a new treatment before testing it out in humans. As such, preclinical research encompasses basic research and translational research and can include biochemical, cell culture, and animal experiments. Depending on the nature of the problem and proposed treatment, it could also include chemistry, engineering, and surgical research.

Now here's the pontification and bloviation. These days, everybody touts "translational" research, meaning research that is designed to have its results translated into human treatments. It's darned near impossible these days to get a pure basic science project funded by the NIH; there has to be a translational angle. Often this leads basic scientists to find rather--shall we say?--creative ways of selling their research as potentially having a rapid clinical application, even though they know and reviewers know that such an application could be a decade away. Indeed, if we are to believe John Ioannidis, the median time from idea to completion of large scale clinical trials needed to approve a new treatment based on that idea is on the order of one to two decades. Moreover, as I've said many times before, translational research will grind to a halt if there isn't a robust pipeline of basic science research to provide hypotheses and new biological understandings to test in more "practical" trials. A robust pipeline is necessary because the vast majority of discoveries that look promising in terms of resulting in a therapy will not pan out. That is the nature of science, after all. Many leads are identified; few end up being a treatment.

Not surprisingly, this nature of science seems to be what concerns Begley and Ellis. It's also Begley and Ellis' spin on it, unfortunately, that gives Vox his opening. They begin by pointing out:

Sadly, clinical trials in oncology have the highest failure rate compared with other therapeutic areas. Given the high unmet need in oncology, it is understandable that barriers to clinical development may be lower than for other disease areas, and a larger number of drugs with suboptimal preclinical validation will enter oncology trials. However, this low success rate is not sustainable or acceptable, and investigators must reassess their approach to translating discovery research into greater clinical success and impact.

Of course, some of the reason that clinical trials in oncology have a high failure rate is no doubt the sheer difficulty of the disease (actually, many diseases) being tackled. As I've pointed out time and time again, cancer is very, very complicated and very, very hard. Given that challenge, as frustrating as it is, it is probably not surprising that only around 5% of agents found to have anticancer activity in preclinical experiments go on to demonstrate sufficient efficacy in phase III clinical trials to earn licensing for sale and use, compared to approximately 20% for cardiovascular disease. Of course, cardiovascular drugs are targeted at cells that are nowhere near as messed up as cancer cells, and another study cited by Begley and Ellis suggests that only 20-25% of important preclinical results can be reproduced in pharmaceutical company laboratories with sufficient rigor to go forward. Even so, being scientists, we want to improve the process. To improve the process, however, we need to know where it fails.

To try to do this, Begley and Ellis looked at 53 "landmark" publications in cancer. Begley used to be head of global cancer research at Amgen and knows what it takes to get a drug from idea to market. What it takes first is replication. Basically, Begley's team would scour the scientific literature for interesting and promising results and then try to replicate them in such a way that their results could serve as a basis for developing drugs based on them. The idea was to identify new molecular targets for cancer and then figure out ways to make drugs to target them. This is what he reported:

Over the past decade, before pursuing a particular line of research, scientists (including C.G.B.) in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed 'landmark' studies (see 'Reproducibility of research findings'). It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.

Here's the part that I found to be profoundly misguided. Begley and Ellis basically admit that these are "landmark papers"; i.e., that they were highly novel. Presumably these papers would have been considered at the time of their publication to be "cutting edge" research, very likely published in high impact journals such as Nature, Cell, Science, Cancer Research, and the like. Unfortunately, although I looked, I didn't see a list of the 53 "landmark papers"--not even in an online supplement. Nor was the method by which these papers were analyzed described in much detail--not even in an online supplement. The irony inherent in a paper that rails against the irreproducibility of preclinical cancer research but does not itself provide the data upon which its authors based their conclusions, in sufficient detail for the reader to determine whether the conclusions flow from the data, is left for SBM readers to assess for themselves. Similarly misguided, as was pointed out in the online comments, were the authors' stated assumption that "the claims in a preclinical study can be taken at face value -- that although there might be some errors in detail, the main message of the paper can be relied on and the data will, for the most part, stand the test of time" and their amazement that "this is not always the case." If the authors' assumptions were true, attempts to replicate scientific results would be less important than they are.
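
As a quick aside of my own: even taking the 6-of-53 figure at face value, a denominator of 53 leaves a lot of statistical wiggle room. Here's a minimal back-of-the-envelope sketch (my own illustration in Python, using a standard Wilson score interval and assuming nothing beyond the counts reported in the commentary):

    import math

    def wilson_ci(successes, n, z=1.96):
        """95% Wilson score confidence interval for a binomial proportion."""
        p_hat = successes / n
        denom = 1 + z**2 / n
        center = (p_hat + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    low, high = wilson_ci(6, 53)
    print(f"6/53 confirmed = {6/53:.1%}, 95% CI roughly {low:.1%} to {high:.1%}")
    # Prints approximately 5.3% to 22.6%: the headline "11%" is itself a
    # noisy estimate, not a precisely measured property of cancer research.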

Be that as it may, what the authors are studying, however they studied it and whatever the 53 studies they examined were, is essentially frontier science. Given that, it strikes me as rather strange that they are so amazed that much of the science at the very frontiers turns out not to be correct when tested further. I've discussed frontier science versus more settled science in my usual inimitable detail and length before. In fact, I did it six years ago, even before I joined the ScienceBlogs collective; so I point you to that early and, as usual, brilliant bit of discussion. Let's just say that frontier science in high impact journals often turns out to be wrong because, well, it's frontier science.

In fact, I'm guessing that Begley and his team were interested in such papers because they were looking for a leg up on the competition. Begley was the head of a major research division of a major pharmaceutical company. What does that mean? It means that it was his job to find new molecular targets for cancer and to develop drugs to target them. And it was his job to do all this and beat his competitors to the market with effective new drugs based on these discoveries. No wonder his group scoured high impact journals for cutting edge studies that appeared to have identified promising molecular targets! Then he had a veritable army of scientists, about 100 of them in the Amgen replication team according to this news report, who were ready to pounce on any published study that suggested a molecular target the company deemed promising.

Here's another aspect of the study that needs to be addressed. As I read the study, a thought kept popping into my fragile eggshell mind. Remember Reynold Spector? He's the guy whom both Mark Crislip and I jumped on for a particularly bad criticism of science-based medicine and its alleged lack of progress that Spector called Seven Deadly Medical Hypotheses. As both Mark and I pointed out, nearly all of these hypotheses were really not particularly deadly, and, indeed, most of them weren't even hypotheses. What Dr. Spector shares with Dr. Begley is a background in pharma, and the similarities in the way they think are obvious, to me at least. For instance, I castigated Spector for throwing around the term "pseudoscience" to describe studies that in his estimation do not reach the level of evidence necessary for FDA approval of a drug. That is a very specific set of requirements for a very specific problem: developing a drug from first scientific principles and then demonstrating that it is both efficacious and safe for the intended indication. I got the impression from his articles that Dr. Spector views any study that doesn't reach FDA-level standards for drug approval as pseudoscience -- or, at the very least, crap. I get the same impression from Begley. For example, here's a passage from his article:

Of course, the validation attempts may have failed because of technical differences or difficulties, despite efforts to ensure that this was not the case. Additional models were also used in the validation, because to drive a drug-development programme it is essential that findings are sufficiently robust and applicable beyond the one narrow experimental model that may have been enough for publication.

Elsewhere in the article, Begley defines "non-reproduced" as a term he assigned "on the basis of findings not being sufficiently robust to drive a drug-development programme." This attitude is, of course, understandable in someone running an oncology drug development program for a major pharmaceutical company. He is looking for results that he can turn into FDA-approved drugs that he can bring to market before his competitors do. So what he does is more than just try to reproduce the results as described in the publication. His team of 100 scientists tries to reproduce the results and extend them to multiple model systems relevant to drug design. That is, in essence, applied science. Think of it this way: How many basic science discoveries in physics and chemistry ever get turned into a product? How many of these findings are sufficiently robust and reproducible in multiple model systems to justify a team of engineers spending millions of dollars developing them into products? Do physicists, materials scientists, chemists, and engineers obsess over how few findings in basic science in their fields can successfully be used to make a product?

I know, I know, apples and oranges. In medicine, those of us doing research do it in order to develop an understanding of a disease process sufficient to develop an efficacious new treatment. That purpose is very explicit in what we do. However, sometimes we forget just how important it is to have a large, robust pipeline of preclinical results upon which to base translational research programs. Is the reason for the apparently declining percentage of basic science studies that are successfully translated into drugs more a function of the increasing ability of scientists, through large scale genomic and small molecule screens, to identify more and more potential molecular targets and potential drugs to use against them than of scientists doing something wrong? I also have to wonder if what Begley and Ellis are observing is the decline effect accelerated by 100 scientists prowling the scientific literature looking for experimental results they can turn into drugs. As I pointed out before, the decline effect doesn't mean science doesn't work, and, as I will point out here, Begley's very methods would almost be expected to accelerate the decline effect.
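
To see why, consider a toy simulation of the winner's curse (again, my own illustration, not anything from Begley and Ellis): generate many modest true effects, measure each once with noise, "publish" only the flashiest measurements, and then replicate just those winners.

    import random

    random.seed(42)
    N, NOISE, TOP = 1000, 1.0, 0.05  # labs, measurement noise, fraction "published"

    # Modest true effects; a first measurement = truth + noise.
    truth = [random.gauss(0.5, 0.3) for _ in range(N)]
    first_look = [t + random.gauss(0, NOISE) for t in truth]

    # A team scouring high-impact journals sees only the flashiest results...
    winners = sorted(range(N), key=lambda i: first_look[i], reverse=True)[: int(N * TOP)]
    # ...and then tries to replicate exactly those studies.
    replication = [truth[i] + random.gauss(0, NOISE) for i in winners]

    mean = lambda xs: sum(xs) / len(xs)
    print(f"published effect size:  {mean([first_look[i] for i in winners]):.2f}")
    print(f"replicated effect size: {mean(replication):.2f}")
    # The replications shrink back toward the true mean: a built-in "decline
    # effect" even though no one faked anything and the science itself works.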

The rest of the story

Don't get me wrong. Although I find the premise of Begley and Ellis' article to be misguided, there is important and disturbing information there. Unfortunately, the really important and disturbing information is not in Begley and Ellis' paper itself, something I find rather disturbing in and of itself, as should you. The omission of these critical pieces of information strikes me as a curious decision on the part of the authors and the Nature editors.

For example, in the paper, we learn this:

In studies for which findings could be reproduced, authors had paid close attention to controls, reagents, investigator bias and describing the complete data set. For results that could not be reproduced, however, data were not routinely analysed by investigators blinded to the experimental versus control groups. Investigators frequently presented the results of one experiment, such as a single Western-blot analysis. They sometimes said they presented specific experiments that supported their underlying hypothesis, but that were not reflective of the entire data set. There are no guidelines that require all data sets to be reported in a paper; often, original data are removed during the peer review and publication process.

This is one reason that when I review papers I always ask if assays were performed in a blinded fashion, particularly when the results involve selecting parts of histological slides for any sort of quantification or any other sort of examination that requires a potentially subjective selection of fields or areas to measure. This is true even for computer-aided image analysis, mainly because the human still has to choose the area of the image to be analyzed.

In an interview, however, we learn a lot more critical information:

When the Amgen replication team of about 100 scientists could not confirm reported results, they contacted the authors. Those who cooperated discussed what might account for the inability of Amgen to confirm the results. Some let Amgen borrow antibodies and other materials used in the original study or even repeat experiments under the original authors' direction.

Some authors required the Amgen scientists sign a confidentiality agreement barring them from disclosing data at odds with the original findings. "The world will never know" which 47 studies -- many of them highly cited -- are apparently wrong, Begley said.

I find it very interesting that Begley didn't mention this rather important tidbit of information in the Nature paper, and I wonder why he and Ellis didn't see fit to name the studies for which non-disclosure agreements weren't signed. One wonders if he (and the Nature editors) were concerned about litigation. In any case, the non-disclosure agreements obviously must predate the Nature paper. This tells me that Begley was in essence complicit in not revealing that his team couldn't reproduce results, apparently not thinking such agreements too high a price at the time for access to reagents and help in the cause of advancing his company's efforts. He's willing to admit this in news interviews, apparently, but not in the Nature paper being used as a broadside against current preclinical drug development efforts.

Here's another highly irritating passage from Begley and Ellis' paper:

Some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis. More troubling, some of the research has triggered a series of clinical studies -- suggesting that many patients had subjected themselves to a trial of a regimen or agent that probably wouldn't work.

Why do I say this is an "irritating" passage? Simple. It would have been very helpful if Begley and Ellis had actually named a couple of these "entire fields," don't you think? I suppose they probably couldn't do that without indirectly revealing which papers' results Begley's team couldn't reproduce. The lack of this information makes this jeremiad against how preclinical research is done today far less useful for actually fixing the problem than it might have been. Assessing the irony of a paper railing against current preclinical research methods that does not itself reveal its methods in sufficient detail to be evaluated, or even its results except in fairly vague ways, is again left as an exercise for you, my readers. Feel free to chime in in the comments after this post.

There are also many explanations for the variability in published research, as has been pointed out by other commentators. For instance, Nobel Laureate Phil Sharp homes in on one problem:

The most common response by the challenged scientists was: "you didn't do it right." Indeed, cancer biology is fiendishly complex, noted Phil Sharp, a cancer biologist and Nobel laureate at the Massachusetts Institute of Technology.

Even in the most rigorous studies, the results might be reproducible only in very specific conditions, Sharp explained: "A cancer cell might respond one way in one set of conditions and another way in different conditions. I think a lot of the variability can come from that."

It's true, too. I remember back in the late 1990s, several labs were having difficulty reproducing Judah Folkman's landmark work on angiogenesis inhibitors, including the lab where I was working at the time. Dr. Folkman provided reagents, protocols, and advice to any who asked, and ultimately we were able to find out what the problem was, part of which was that the peptide we were using was easily denatured. We also learned that he had done the same thing for several labs, even to the point of dispatching one of his postdocs to help other investigators. Now imagine if Folkman had been like one of the scientists who demanded non-disclosure agreements when Begley's group had trouble reproducing their studies. Angiogenesis inhibitors might have ended up as one of the areas upon which Begley cast doubt. Oh, wait. They might be; we don't know, because Begley didn't reveal which papers his team couldn't reproduce to his satisfaction.

Still, the problems with Begley and Ellis' article notwithstanding, they do provide useful information and identify what appears to be a serious problem. The problem is not so much that so few basic science discoveries end up as drugs, courtesy of Amgen or one of its big pharma competitors. Rather, it's the sloppiness that is too common in the scientific literature, coupled with publication bias, investigator biases, and the proliferation of screening experiments done to identify genomic targets and small molecules with biological effects, which have turned into the proverbial fire hose of data, often many terabytes per screen. I also wonder if part of the problem is that all the "easy" molecular targets for therapy have already been identified, leaving the difficult and problematic ones. The result is alluded to, but not adequately discussed, in the news story I cited above:

As recently as the late 1990s, most potential cancer-drug targets were backed by 100 to 200 publications. Now each may have fewer than half a dozen.

The genomics, proteomics, and metabolomics revolutions that have occurred over the last 10-15 years are largely to blame for this. I would also argue (and perhaps Begley would even agree) that the competitiveness between pharmaceutical companies to be the "firstest with the mostest" for each new target hyped in the medical literature almost certainly contributes to this problem. After having been burned a few times, Begley could, for instance, have decided that his team wouldn't seize on each of these new papers, that he'd wait until some more papers were published. He didn't do that. For him, business as usual continued. An admission that he was part of the problem, either in the Nature paper or one of the interviews he gave to the press, would have been nice. Instead, his article stinks to high heaven of blaming the other guy for his failures. After all, it's not the basic scientists' fault that Begley and his team at Amgen didn't wait until there was more replication before trying to make drugs out of newly described molecular targets.

How to improve preclinical research

It's true that I've been critical of Begley and Ellis' article, but that's mainly because of frustration. There are many things that need to be improved in terms of how science is applied. Readers might recall that I've written about problems with the peer review system, publication bias, the decline effect, and numerous other problems that interfere with the advancement of science and contribute to doubts about its reliability. Such problems are inevitable because science is done by humans, with all their biases, cognitive quirks, and conflicts of interests, but that doesn't mean every effort shouldn't be made to minimize them. Science remains the single best system for determining how nature works, and, no matter how much quacks and cranks might try to cast doubt on it because it doesn't support their pseudoscience, no one has as yet developed a better system. When Vox takes a look at this study and concludes that it means that science is so broken that theology trumps it, all I can do is laugh at the utter idiocy of his antics.

The question, therefore, is how to minimize the effects these problems have on how the scientific method is practiced, particularly given that the scientific method itself is designed to try to minimize the effects of human shortcomings on how evidence is gathered and analyzed. No matter how much cranks like Vox Day and Mike Adams' minions try to portray Begley and Ellis' article as an indictment of science itself, as slam-dunk evidence that science is not self-correcting, that the scientific basis of cancer therapy is so much in doubt that quackery is a viable alternative, or that religion is a more reliable way of seeking knowledge about the world than science, it is in fact nothing of the sort.

It does, however, tell us that we as scientists need to improve, and, indeed, we at SBM have discussed the shortcomings of medical science and ways to improve upon it on many occasions. In fact, I daresay that much of what we say jibes with the suggestions proposed by Begley and Ellis, including:

  • More opportunities to present negative data.
  • An agreement that negative data can be as informative as positive data.
  • Requiring preclinical investigators to present all findings.
  • Links added to articles to other studies that show different or alternate results.
  • Transparent opportunities for trainees, technicians and colleagues to discuss and report troubling or unethical behaviours without fearing adverse consequences.
  • "Greater dialogue should be encouraged between physicians, scientists, patient advocates and patients. Scientists benefit from learning about clinical reality. Physicians need better knowledge of the challenges and limitations of preclinical studies. Both groups benefit from improved understanding of patients' concerns."
  • More credit for teaching and mentoring.
  • Less emphasis on publication in top-tier journals.
  • "Funding organizations must recognize and embrace the need for new cancer research tools and assist in their development, and in providing greater community access to those tools. Examples include support for establishing large cancer cell-line collections with easy investigator access (a simple, universal material-transfer agreement); capabilities for genetic characterization of newly derived tumour cell lines and xenografts; identification of patient selection biomarkers; and generation of more robust, predictive tumour models."

Many of these are good ideas, although I'm not sure how practical it would be to require that investigators present "all" findings in journal articles or how such a requirement would ever be enforced. Defining "all" would be a challenge, and online supplements are already too much of a dumping ground these days. For example, does "all" mean investigators have to present the dozens of attempts it might have taken to optimize assay conditions, or include every experiment that was screwed up because someone used the wrong conditions, added the wrong reagent, or let their tubes sit on the bench too long? Also, one notes how Begley assiduously avoids criticizing pharma for being so eager to leap on the latest cutting-edge research before it has percolated through the literature, which, I conclude based on his very own complaint, is surely part of the problem.

So is the very nature of science. Scientists know that what is published for the first time is considered tentative. It may or may not be correct. We also know that publication bias can mean that the first publication of a result might well be an anomaly that was published because it was interesting. That is science at the frontier. If other scientists can replicate the results or, even better, replicate them and use them as a foundation to build upon and make new discoveries, only then does such a result become less frontier science. And if the results are replicated enough times and by enough people and used as a basis for further discoveries, to the point that they are considered settled, that's when they can become applied science, such as a drug based on the principle originally discovered. It's a process that is very messy, with lots of dead ends and blind alleys. While performing a valuable service by identifying problems with a lack of reproducibility in all too many preclinical cancer research studies, Begley and Ellis also unfortunately contribute to the mistaken impression that translational research is a linear process that goes from discovery to drug. It's not, nor can it churn out major new treatments on demand.

More importantly, this self-justifying bit of pharma apologia does not, as Vox Day and Mike Adams claim, invalidate the scientific method. If anything, it demonstrates that science is self-correcting and that scientists are willing to engage in self-criticism and self-analysis in a way that religion is not.


Comments

1

Even if all modern medicine were proven baseless and as bad as the worst of woo, it wouldn't mean woo in any of its forms was suddenly right; it would just mean we'd have to start again.

People who believe in conspiracies and magical healing don't seem to understand this; they seem to think that all you have to do is point out a flaw (which they usually fail at anyway) and suddenly everyone has to believe in their particular brand of nonsense.

It doesn't work like that.

Posted by: nastylittlehorse | April 30, 2012 7:43 AM

2

Keep that figure in mind the next time some secularist is claiming that we should structure society around scientific technocracy; they are arguing for the foundation of society upon something that has a reliability rate of 11 percent.

Seems like the technology most offensive to some folks is the good ol' irony meter. Someone who types on his computer that we should reject "technocracy" with a "reliability rate of 11 percent...." Would be interesting to see what Day's message would look like if only 11% of what he typed made it through. (Kinda reminiscent of PZ's "disemvowelling.")

[I]t is becoming increasingly obvious that real theology also trumps real science. The selling point of science is supposed to be its replicability... so what is the value of science that cannot be repeated?

'Cause everyone knows theology is big on replicability. How's that Second Coming, er, coming?

Posted by: Jud | April 30, 2012 8:05 AM

3

Here's a thought (probably completely impractical): Don't publish frontier science in a peer-reviewed journal until another lab has performed the experiment and attained its own provisional conclusions, either pro or contra those of the first lab.

Posted by: Jud | April 30, 2012 8:22 AM

4

I wonder if the large number of "wrong" results might largely be due to regression to the mean?

Suppose a lab tries 100 things, and one of the 100 appears to be very promising because of some kind of fluke -- it's nothing, really, but there's a fluke. Such a result is not improbable, given that there were 100 things that were tried.

That flukish, promising result is the one that gets published in the high-impact journal. And of course, it won't be reproducible.

Results will sometimes be correct, or even useful, but if this thought experiment describes reality, one can expect to see a large number of false positives.

Posted by: palindrom | April 30, 2012 8:40 AM

5

As Albert Einstein said: If we knew what we were doing, it wouldn't be research.

Experimental particle physics puts a 5σ threshold on declaring an experiment to be significant evidence of a new particle or new physics. There are two reasons for this. One is because they can: these experiments accumulate enough data that they can rule out alternatives at that confidence level. The other is, as palindrom@4 says, mean reversion: there have been many instances of apparent discoveries at the 3σ level which did not hold up as more data were collected.

Preclinical research must work with human subjects, so they cannot accumulate enough data to insist on the 5σ standard. It is hard enough for such studies to get something at the 2σ level (which is about 5%, whence the emphasis on P < 0.05 as a threshold). And of course some of this literature involves case studies, for which statistics don't even come into play. Reversion to the mean is still operative, however, so many of these studies will fail to hold up as more experimental data accumulate.

Posted by: Eric Lund | April 30, 2012 9:59 AM
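
For reference, the sigma thresholds Eric Lund mentions convert to p-values as follows; a minimal sketch using the standard normal distribution (note that the 5-sigma figure is conventionally quoted one-sided in particle physics, so the two-sided number below is roughly double the usual quote):

    import math

    def two_sided_p(sigma):
        """Two-sided tail probability of |z| >= sigma under a standard normal."""
        return math.erfc(sigma / math.sqrt(2))

    for sigma in (2, 3, 5):
        print(f"{sigma} sigma -> p = {two_sided_p(sigma):.1e}")
    # 2 sigma -> ~4.6e-02, the familiar p < 0.05 neighborhood
    # 3 sigma -> ~2.7e-03, where many apparent discoveries later evaporated
    # 5 sigma -> ~5.7e-07, the particle-physics discovery regime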

6

@ palindrom:

That is exactly what happens, all the time. It's one form of what's called "publication bias," and is well-recognized as ubiquitous.

Posted by: Beamup | April 30, 2012 10:15 AM

7

@beamup -- I probably shoulda knowed that, but "IANABMS" (I Am Not A BioMedical Scientist).

I will remember the term "publication bias". I suspect it's operating with a vengeance over in woo-ville, giving ammo (all duds of course) to Dana Ullman over at the Wretched Hive.

Posted by: palindrom | April 30, 2012 10:31 AM

8

Never to be outdone in the crankery dept, Gary Null was all over this last week:**

his comments on the article and on Benson's were preceded by a glowing description of *Nature* as being the most respected science journal in the world, its long history, etc.: this is often the case; he cites "prestigious journals" whenever they print any type of criticism.***

Of course, this is standard *mise en scène*: he is, after all, a scientist, researcher and academic -- putting his wisdom "into lay language". He enlightens his audience about the mysterious ways of science, portraying himself as an 'insider' who is revealing how truly corrupt and despicably perverted the corporatised process has become: Science, it would seem, is the Bounty and he is chief mutineer.

Articles like this are glued into his accusatory pastiche: calumny designed to frighten the audience away from SBM, simultaneously maligning governmental regulation and demon-infested media. His programming proceeds:
basically, all research is tainted;
pharma funds and orchestrates studies;
universities are plied with money to produce desired results,
governments are bought and paid for by ALEC ( in the UK: substitute 'Murdoch')

We can expect several long-winded exposés soon: on cancer (185 pages), vaccines (Chapter CXVII?) and the myth of mental illness -- or another cribbed title (CCRH will approve).

**( not sure which day or show: he has several other shows all archived @ Progressive Radio Network- I scan most except the overnight monstrosity: I'm thorough, not a masochist)
*** one day the BMJ is a respectable source, the next its editor is vilified.

Posted by: Denice Walter | April 30, 2012 11:02 AM

9

If 88% of pre-clinical research is wrong, isn't there a good chance that Begley's research is wrong?

Posted by: Dianne | April 30, 2012 11:17 AM

10

The methods section of papers seems to be one area that gets chopped down when editing for length. A pity, really, since seemingly insignificant information may get dropped when it could have a huge impact on how the experiment goes. For example, a paper might say that X method was used for rinsing the samples, but it may leave out temperature, whether the rinsing was done before or after another minor step, etc. With different people, even in the same lab, doing the rinse process just slightly differently, the results can be dramatically different.

Posted by: Todd W. | April 30, 2012 12:44 PM

11

It might be possible to take you seriously if you didn't spend half your article spitting vitriol instead of stating facts. But hey, if you can't actually argue facts, throw names instead. Much easier.

Posted by: Aaron | April 30, 2012 1:16 PM

12

Concern troll is concerned.

Posted by: Chris | April 30, 2012 1:21 PM

13

I haven't yet read the article, but I wonder: where were Begley's 53 studies with his (un)reproduced results published? He did the experiments; he can't have been forced to sign a non-disclosure agreement with himself. If they were ever written up and rejected by journals, shouldn't he be vilifying the journals for (as other commentators have said earlier) publication bias?

If he hasn't bothered trying to get them published, shouldn't he be flagellating himself too? He could even have put them in an online supplement, thus alleviating some of the problems he's railing against.

Posted by: Anonymous | April 30, 2012 1:29 PM

14

if the troll can't refute with facts, throw vitriol and names....how ironic.

Posted by: Lawrence | April 30, 2012 1:33 PM

15

(I am not a research scientist)...but wouldn't the list of the 53 "landmark studies" that Begley chose for his research team at Amgen to do further research on be considered *corporate secrets*?

I would presume that Amgen does not want other drug manufacturing companies to know which "landmark studies" occupied their 100 researchers for the time it took to thoroughly investigate...then discard them...as not practical to develop a drug.

Aren't we talking about the corporate culture here, where no other drug manufacturer should benefit from Amgen's research...in order to pursue a totally different set of "landmark studies"? Isn't this the reason why when key staff leave a company, in order to *encourage* them to keep company secrets, they sign a "non-compete" clause that stipulates no employment for ~ 2 years at a *competing company*, before they are awarded their sweetheart termination package?

I'm basing these comments on what my daughter has told me about CTOs and upper management, leaving brokerage houses and large investments banks.

Posted by: lilady | April 30, 2012 2:29 PM

16

One of the reasons I left academe was seeing careful research, well-reproduced, with a variety of possible alternative explanations investigated, published late in low-tier journals, with exciting, preliminary, and not-so-thorough work being published in top-tier journals.

Basic research is absolutely critical to our advancement as a species, but I think the current way publication is done is a bit... out of hand. And I don't know how to fix it.


"Preclinical research must work with human subjects"

Cell lines and 'lower' animals, like mice, hence pre-clinical. You can get the 'n's, you just need the will and the money and the time.

Posted by: Roadstergal | April 30, 2012 3:56 PM

17

That flukish, promising result is the one that gets published in the high-impact journal. And of course, it won't be reproducible.

This is probably why certain high-impact journals have a policy of refusing to publish papers that were unable to replicate earlier papers.

Posted by: herr doktor bimler | April 30, 2012 6:11 PM

18

Good old Vox Day. Always wrong and never, ever in doubt. We used to kick him around now and then on the Sadly, No! blog. His ignorance is both wide and deep.

Posted by: Candy | April 30, 2012 9:42 PM

19

I tire of these atheist/science/religion issues that get thrown around in the media. Most of the religious/spiritual crowd I run into are happy that science exists. Science extends the lives of our favorite aunts diagnosed with cancer and brought us high-definition TV. While you can shoot down the existence of God with a logic problem, no one can actually say for certain that God does or does not exist. There are folks who feel certain God does not exist, people who are certain that God exists, people who go back and forth, people who don't care, etc. A lot of these people have learned to allow beliefs that may contradict each other to coexist peacefully in their heads and enjoy Monday night football on their flat screen. Yeah, yeah, you can argue that following that logic is the same as staying neutral on the existence of unicorns. But unicorns don't get people out of bed in the morning, deities do, and that's fine by me as a mental health pro sort so long as those deities aren't telling people to jump off a bridge or start a suicide cult.

Discounting preclinical trials because they're...preclinical...is absurd. Perhaps we should start looking over our compulsory education curricula?

It's always the idiots who are the loudest...

Posted by: Sophia | April 30, 2012 11:21 PM

20

My old rule of thumb for a paper in Nature is: "interesting but wrong". Nature makes such a big deal about pushing the envelope for "high impact" novel (unique?) ideas but does not have the greatest editors. Their News and Views pieces in particular tend to be hack jobs -- often sounding an alarm (like this one) with erroneous facts and often destructive consequences.

(I have published several papers in Nature -- but they are only slightly wrong.)

Posted by: spike | April 30, 2012 11:26 PM

21

The Reuters article quotes the Amgen researcher Begley: "... we became convinced you can't take anything at face value."

I thought that it was normal in scientific research not to take things at face value, and to replicate research.

Gary (GH)

Posted by: Ivan Ilyich | May 1, 2012 12:21 AM

22

Sophia @19 is NOT me.

Posted by: sophia8 | May 1, 2012 4:14 AM

23

@17 Herr Doc

Which journals require replication? I have heard that some high-impact journals refuse to publish replications. In a field I sometimes follow, the Journal of Personality and Social Psychology apparently has this as a policy.

In any case, I am not impressed by the fact that there is no list of the papers. Why would anyone believe the authors? This seems to show a very low editorial standard at Nature.

BTW, totally off topic (or perhaps not?), there is a great posting at Retraction Watch about an interesting paper that was, obviously, retracted: "Paper with no scientific content."

Is it a reverse-Sokal or a real screw-up in the review process? Given some of the URLs, I think I'm voting for the Sokal explanation.

Posted by: jrkrideau | May 1, 2012 1:07 PM

24

It seems to me that preclinical studies are a form of screening: a way to see which ideas are worth the more diagnostic test of a clinical trial. The 88% quoted is simply the false positive rate, which is not an unusual figure in screening programs.

When I was involved in prenatal screening for Down Syndrome our false positive rate was much higher than 88% (about 5% of women would screen positive and be offered the diagnostic test of amniocentesis, but only 0.1% of pregnancies were Down pregnancies, about 75% of which would be picked up by the screening test, making the false positive rate 98.5%).

I wondered if this might be a useful way of thinking about preclinical studies, since we have ways of assessing how to balance false positives (ideas that fail to achieve their promise) against false negatives (ideas that are prematurely and wrongly discarded) in screening programs. Just a thought.

Posted by: Krebiozen | May 1, 2012 5:59 PM
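
Krebiozen's arithmetic checks out; here is the same calculation spelled out in a short sketch, using only the figures given in the comment above:

    # Figures from the Down syndrome screening example above.
    screen_positive = 0.05   # 5% of women screen positive
    prevalence = 0.001       # 0.1% of pregnancies affected
    sensitivity = 0.75       # 75% of affected pregnancies detected

    true_positives = prevalence * sensitivity              # 0.00075
    false_positive_share = (screen_positive - true_positives) / screen_positive
    print(f"false positives among screen-positives: {false_positive_share:.1%}")
    # Prints 98.5%. By the same logic, Begley's 88% "failure" rate looks like
    # the false-positive rate of a screening step, not proof science is broken.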

25

Which journals require replication? I have heard that some high-impact journals refuse to publish replications.

I was thinking of the J. Pers. Soc. Psych. rejecting papers that criticised Bem's paper on ESP, because they were failures-to-replicate. JPSP has an explicit policy against scientific self-correction.

Posted by: herr doktor bimler | May 1, 2012 6:21 PM

26

a policy of refusing to publish papers that were unable to replicate earlier papers.

To clear up my sloppy language @17: when I wrote "unable to replicate", I meant "unable to replicate the results". The rejected papers I had in mind (possibly also in jrkrideau's mind) were replications in that they repeated Bem's experiment, but they did not repeat the positive results.

Posted by: herr doktor bimler | May 1, 2012 6:29 PM

27
But unicorns don't get people out of bed in the morning, deities do, and that's fine by me as a mental health pro sort so long as those deities aren't telling people to jump off a bridge or start a suicide cult.

Or shoot abortion doctors.
Or condone slavery.
Or justify terrorism.
Or commit suicide bombings (I wish they'd jump off a bridge).
Or start wars.
Or discriminate against women.
Or discriminate against gay people.
Or justify sheltering pedophiles.
Or obstruct science.
Or obstruct medical research.
Or...

Posted by: Stu | May 1, 2012 6:48 PM

28

Interesting blog, thanks. I wonder what Otis Brawley, M.D., chief medical and scientific officer, American Cancer Society, would add to this conversation, esp. in light of his remarks starting at 8:45 of http://www.youtube.com/watch?feature=player_embedded&v=3ho_LMBiHVg#!

Posted by: hmmmm | May 2, 2012 2:41 PM

29

@hmmmm,

An interesting talk.

I listened to about 5 minutes starting at the 8 minute point.

He described our current health care system as "corrupt", but didn't say what that really meant. Then he explained that we are all of us -- doctors, hospitals, insurance companies, lawyers, patients -- responsible for that.

Then he gave the example of a man dying of stage IV metastatic cancer whose family insisted on giving him "everything" to keep him alive 6 weeks longer with the last 4 weeks being comatose on monitored life support until his body just gave up and died anyway.

I'll try to listen to the rest of the talk later, but I would guess he would argue for a system that provides at least adequate health care for all citizens, educates the patients and providers about the best choices available while discouraging extravagant expenditures when there is really no hope left. I also think he would strongly support more research to provide better options during all stages of diseases.

But, I doubt he would argue for cutting off the pipeline of phase I research just because most of it doesn't pan out.

Posted by: squirrelelite | May 2, 2012 3:41 PM

30

Also, I don't think he would agree with one commenter who stated:


its so corrupt that the man who has a cure for cancer Dr Burzynski, has been hounded relentlessly by the FDA and government, they tried to bankrupt him, discredit him, steal his cure and even JAIL him. they have cost him millions defending himself. ENOUGH IS ENOUGH

Posted by: squirrelelite | May 2, 2012 3:43 PM

31

"He described our current health care system as
"corrupt", but didn't say what that really meant."

Actually, he did. He spoke of how the application of science in medicine is not rigorous. An example was of how prostate cancer screening introduced nationwide in 1990 never had any evidence to support the claim made by doctors that it prevented incidences of prostate cancer. Yet it was advised for all men over 50. A study never appeared until 20 years later... In 2010... That said screening might prevent incidences. It was of questionable quality and also released with another study that concluded that prostate screening does not prevent incidences of prostate cancer.

Posted by: hmmmm | May 2, 2012 10:40 PM

32

"But unicorns don't get people out of bed in the morning, deities do, and that's fine by me as a mental health pro sort so long as those deities aren't telling people to jump off a bridge or start a suicide cult.
Or shoot abortion doctors.

Yep, atheists never kill people they disagree with

Or condone slavery.

Or condone slavery... *goes to look at historical atheists*, *ohshit*

Or justify terrorism.

Yep, because Muslims are prominent terrorists no one except the religious has ever committed or justified terrorism. Logic FTW!

Or commit suicide bombings (I wish they'd jump off a bridge).

Ditto Suicide bombings! You probably think Muslims invented this too?

Or start wars.

Yup. There's no secular reasons for doing this, and religious people *never* avoid violence because of their religious beliefs so this is a good and logical point.

Or discriminate against women.

Yup, Muslima, there are no atheists with a problem with women. Now get back in your elevator.

Or discriminate against gay people.

All discrimination against gay people has been from the explicitly religious.

Or justify sheltering pedophiles.

...don't know about the paedophile problems in the school system, childcare and sports bodies? Nah, you're right, atheists don't play sport.

Or obstruct science.

Lysenkoism, atheist antivaccinationists, etc

Or obstruct medical research.

Yep, all animal rights activists are religious.

Or... been a puerile twat on a blog.

Nope, atheists and theists do all of these things.


Posted by: Ender | May 3, 2012 7:07 AM

33

Ender @32: "Or condone slavery... *goes to look at historical atheists*, *ohshit*"

Now which historical atheists would those be? It hasn't really been safe to be an atheist historically so I'm curious as to what society had both legal slavery *and* a majority -- or even a noticeable minority -- of atheists.

Posted by: LW | May 3, 2012 7:15 AM

34

I said nothing about historical atheist societies. There pretty much were none. Just individual atheists and a human history replete with slave ownership at all levels of society (including, sometimes, slaves themselves).

As far as atheist societies go, the only ones that I can name were also communist and very much approved of forced labour, re-education and what is essentially slavery (being forced to work without wages and without freedom or choice, just without the potential to be sold -- merely "re-assigned").

There is nothing inherent to Atheism that protects against any kind of abuse. This is not an attack on atheism or atheists.

Posted by: Ender | May 3, 2012 8:17 AM

35

Ah, so I suppose you can name some few historical atheists who lived in non-atheist (in other words, religious) societies that practiced slavery, and who "condoned slavery" in some fashion other than simply acting like their religious neighbors?

Posted by: LW | May 3, 2012 8:39 AM

36

"and who "condoned slavery" in some fashion other than simply acting like their religious neighbors?"

Why should it make a difference if their neighbours also practised slavery? I'm not sure you know what 'condoned' means. If you keep slaves, you condone slavery. There's no point pointing at religious people nearby and saying "they keep slaves too so therefore I do not condone slavery despite keeping all these slaves"

Posted by: Ender | May 3, 2012 8:51 AM

37

I see, so your position is that atheists aren't any better people on average than religious people on average. Glad we cleared that up.

Posted by: LW | May 3, 2012 9:47 AM
