Strenuous 8-Year Effort to Replicate Key Cancer Research Finds an Unwelcome Surprise

The replicability of scientific studies is under the microscope like never before: scientists are increasingly examining just how many studies can be repeated with the same results a second or third time around.

If a study doesn’t pass this replication test, that casts some doubt on its findings – and newly published investigations indicate we could have a significant replication problem in cancer research.

The research looked at 193 different experiments found in 53 cancer-related papers published in high-profile journals between 2010 and 2012, and found that none of the experiments could be set up again using only the published information. Even after getting help from the original study authors, the team was able to reproduce just 50 experiments from 23 papers.

That only a quarter of the experiments could be rerun at all is concerning – some of the original authors never responded to requests for help – but worse still, the reproduced tests often yielded effect sizes smaller than those reported in the original studies.

“Of the replication experiments we were able to complete, the evidence was much weaker on average than the original findings even though all the replications underwent peer review before conducting the experiments to maximize their quality and rigor,” says Timothy Errington, Director of Research at the Center for Open Science in Virginia.

“Our findings suggest that there is room to improve replicability in preclinical cancer research.”

What’s more, less than half (46 percent) of the effects measured in the follow-up experiments passed three or more of the five replicability criteria set by the research team. These criteria covered both the effect size and the overall positive or negative conclusions drawn in the paper.

“The report tells us a lot about the culture and realities of the way cancer biology works, and it’s not a flattering picture at all,” bioethicist Jonathan Kimmelman from McGill University in Canada, who wasn’t involved in the original research, writes in a commentary on the findings.

As Kimmelman goes on to say, the uncertainty around the conclusions drawn in these cancer studies means that we might be wasting time testing drugs in patients that aren’t going to have any effect on the disease.

There is good news though: scientists are doing more than ever to tackle this reproducibility problem. At least experts are now better aware of some of the issues around clarity and thoroughness when it comes to cancer studies – and that should lead to improvements in the future.

It’s also worth bearing in mind that no experiment can ever be reproduced perfectly a second time around – being unable to replicate a study doesn’t necessarily make the original study wrong or inaccurate.

“A failure to replicate does not disconfirm a finding, but it does suggest that additional investigation is needed to establish its reliability,” the team writes.

One of the questions raised here is what would be an acceptable level of replicability when it comes to cancer research.

While this new research makes for uncomfortable reading, the team behind it is hopeful that the report will lead to less friction in the research process – meaning data that is more openly shared, experiments that are more stringently and transparently run, and so on.

“Science is making substantial progress in addressing global health challenges,” says Brian Nosek, Executive Director at the Center for Open Science. “The evidence from this project suggests that we could be doing even better.”

The research has been published in two studies in eLife, one reporting on the difficulty of replicating experiments, and one analyzing the experiments that were replicated.
