Friday, October 25, 2013

Scientific research is in a serious crisis

This weekend's column from the Kingston Whig-Standard.

A short while ago I wrote a column attacking homeopathy and other pseudo-scientific solutions to serious health problems. Not unexpectedly, my Twitter feed attracted friends of homeopathy, who slammed me variously for not reporting the supposedly amazing scientific research supporting homeopathy, and for reporting mainstream science’s take on the issue uncritically.
Well, as to the former, there is zero evidence that I am aware of, to date, that homeopathic concoctions work. There is no homeopathic ‘remedy’ that will protect you against the flu, for instance.
As to the second point, though: while there is no alternative to scientific research, not all is well in the scientific research enterprise either. And it’s not, as the friends of homeopathy would have us believe, one big conspiracy instigated by nasty pharmaceutical multinationals. It turns out there is a lot of scientific evidence accumulating (how ironic) that things are going pretty badly wrong, owing to the way scientific excellence is measured today by the folks in the business of ranking university excellence, as well as by research funders, governments and, sadly, many administrators in the academy.
Scientific knowledge relies fundamentally on evidence that can be replicated. Say I undertake a clinical trial of a particular experimental agent and it turns out that this agent works, by some standard, better than an alternative drug given to trial participants. My trial could eventually provide sufficient evidence for the experimental agent to become an approved drug.
Of course, much relies on my trial having been methodologically sound, my statistical analysis of the results having been correct, and so on and so forth. Typically I will publish my results in a scientific journal, where my data and analysis are scrutinized by specialist peer reviewers. These reviewers are tasked by the journal’s editors with checking that my trial was ethical, that it was methodologically sound, and that my conclusions are actually supported by my data. Of course, even the best reviewers make mistakes, or they may lack the competence to evaluate the relevant material, and so bad science slips through and gets published.
Sadly, we have plenty of evidence that large numbers of peer reviewers fail to pick up even the most basic errors in scientific manuscripts. This might have to do with the fact that they are typically expected to volunteer their time and that their university employers usually don’t give them credit for this sort of work. In fact, as a journal editor, I can tell you that frequently the most seasoned academics refuse to undertake these vital reviews because that work doesn’t add to their CV, their peer recognition, you name it.
Well, the error-control mechanism that science relies on is replication: someone will try to reproduce a scientific study, and erroneous research will come to light. It really is trial and error. That’s the theory. The practice is that very many, if not most, scientific studies are never replicated. The reason is that research funders offer little incentive to do so. The buzzword is ‘innovation.’ Oh yes, the academy has not remained above the vacuous babble of modern management talk. We all want to be ‘excellent,’ ‘innovative’ and ‘path-breaking’ at pretty much everything that we do.
In fact, our research funders expect no less of us. You are not innovative if you simply check whether someone else has done a proper job. You don’t have to be a scientist to realize how foolhardy such funding policies are. Some efforts at replicating so-called landmark studies have been made. The British magazine The Economist reports that only six of 53 landmark studies in cancer research could be replicated. Another group reported that it managed to replicate only about 25% of 67 similarly important studies.
The good news is that this has been done at all. The bad news is that this isn’t standard fare in the sciences, biomedical or other. Verification of other scientists’ research just isn’t a good career move in a research enterprise that doesn’t value such vital work.
Another problem is that scientists are demonstrably reluctant to tell us when their experiments fail. The reason could be that a commercial sponsor doesn’t want the world to find out too quickly that the promise shown by one of the drugs in its pipeline was actually a fluke. Commercial confidentiality agreements stand in the way of serving the public interest. Shareholder interests typically trump the public good, and scientific researchers more often than not collude. The likely outcome is that at some point someone else will test the same compound again, and time and resources are wasted. So-called negative results currently feature in only about 14% of published scientific papers. Of course, in reality the odds are that very many more research studies fail. Scientific progress, to a large extent, depends on failure.
Alas, our current systems don’t reward the reporting of failure. Academic journals have their quality measured by a foolish tool, devised in Canada, called the Impact Factor. Basically, this tool measures how frequently the articles published in a journal are cited over a two-year period. Obviously, you won’t be able to become a high-impact journal with papers that report failed studies; people rarely cite such results. Accordingly, many researchers don’t even submit such important study outcomes. Don’t we all just love success? Have you ever seen a university marketing department celebrating a researcher’s failure? Me neither. It’s not how we roll in the academy. To be fair to the marketing folks, not many media outlets would report Professor C. Ancer’s failure to replicate a landmark, breakthrough cancer study, despite the fact that the much-reported landmark study has thereby been shown to be of questionable quality, if not outright flawed.
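For readers curious about the arithmetic behind that metric: the standard two-year Impact Factor simply divides the citations a journal’s recent papers receive in a given year by the number of citable items the journal published in the previous two years. Here is a minimal sketch of that calculation, with a made-up journal and made-up numbers (this is an illustration, not the database vendor’s actual method or code):

```python
def two_year_impact_factor(citations_to_prior_two_years: int,
                           citable_items_prior_two_years: int) -> float:
    """Citations received in year Y to items from years Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 480 citations in 2013 to its 2011-2012 papers,
# which comprised 200 citable items (articles and reviews).
print(two_year_impact_factor(480, 200))  # -> 2.4
```

Notice what the formula rewards: frequently cited papers, which in practice means positive, ‘exciting’ results rather than careful reports of failure or replication.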
Some efforts are currently under way to register all trials and to ensure that outcomes are reported, if not in a scientific journal then in some other easily accessible forum. The same holds true for the raw data gathered in a trial. Still, progress on this front is far from satisfactory.
New commercial publishing models in the academy further aggravate some of the problems just described. A new business model called Open Access relies overwhelmingly on authors (as opposed to subscribing university libraries) paying for the publication of their work. The journals’ commercial success depends on uploading as many academic outputs to their webservers as possible. The more they publish, the higher their profit. Recently a science journalist submitted an error-ridden manuscript to 304 such Open Access journals. A total of 157 happily accepted it for publication, subject to the article upload fee I mentioned earlier. From there the paper would have gone straight into the relevant biomedical databases as a peer-reviewed paper. On the face of it, a sound scientific publication.
Sadly, owing to the publish-or-perish mentality that is no myth in the academy, quite a significant number of academic researchers engage in academic misconduct in some shape or form. One recent survey reports that about 28% of scientists know of researchers who engage in scientific misconduct in the research they undertake. It is not clear whether all of that misconduct necessarily translates into fraud or useless research outputs, but a significant amount of it almost certainly does.
I could go on in this vein for quite a while, because there is plenty of dirt to be found where there is scientific research. It is high time universities and research funders took a serious look at the kinds of systems they have created to measure and incentivize research activities. It does appear that what is currently in place incentivizes unethical conduct to a significant extent. That must change.
And yet, keeping Winston Churchill’s dictum in mind that ‘democracy is the worst form of government except for all those other forms that have been tried from time to time,’ much the same can be said for scientific research. It is the best we’ve got, but that shouldn’t stop us from fixing the problems we are aware of.
Udo Schuklenk holds the Ontario Research Chair in Bioethics at Queen’s University. He is a Joint Editor-in-Chief of Bioethics, the official publication of the International Association of Bioethics. He tweets @schuklenk.
