As we move into the “spooky season” this Halloween, the Center for Truth in Science thought it might be interesting to dive into some of the scary headlines about scientific publications we have noticed.
Let’s begin our spooky walk through the headlines with a news story we featured from the prestigious journal Nature, titled: ‘Doing good science is hard’: retraction of high-profile reproducibility study prompts soul-searching. It turns out the problem of scientific misconduct extends even to highly respected journals, which retracted a high-profile study about improving the soundness of scientific studies through reproducibility – a core tenet of good science. (As I intimated in the headline, you can’t make this stuff up.) According to the article, efforts by researchers to strengthen modern scientific practice have, unfortunately, ended up raising more concerns about bias, fraud, and a lack of truth in science.
Perhaps more alarming, even trusted sources like the Cochrane Collaboration have been affected by unreliable studies, leading to potentially harmful medical recommendations. As The Economist reported last year, a 2018 recommendation to give steroid injections to women undergoing elective Cesarean sections to improve their babies’ breathing was found to be based on a review that contained three studies with unreliable results (a problem discovered only after the Cochrane review was published). The organization published a revised review in 2021 that left out these three studies, and its conclusions became uncertain. The episode ultimately led the Cochrane Collaboration to adopt a whole new approach, extensively scrutinizing the studies used in its reviews for mistakes, fraud, or any other reason they might be unreliable.
Likely the biggest scare of all is when studies included in systematic reviews lead to clinical practices that directly cause harm and even death. The Economist cited a report that, based on a study published in 2009 later found to have included fabricated data, heart patients were treated with beta-blockers before surgery. The treatment was intended to reduce heart attacks and strokes, but the opposite happened. The practice went on for 10 years in Europe and likely caused many deaths; one estimate put the toll at 10,000 deaths a year in Britain alone.
The Economist article also reported that a systematic review, which had concluded that infusion of a high-dose sugar solution reduces mortality after head injury, was retracted after an investigation failed to find evidence that any of the underlying trials, all by the same researcher, had ever actually occurred!
Other recent spooky science incidents involve the investigation and subsequent departure of scientists in high-level positions at major universities due to plagiarism, data fraud, and similar integrity problems. Most prominent is this year’s case of Marc Tessier-Lavigne, former President of Stanford University, and his research on Alzheimer’s disease. The NY Times headline read: Stanford President Resigns After Report Finds Flaws in his Research.
While Tessier-Lavigne himself was ultimately exonerated of data manipulation, the data appears to have been manipulated by a trainee in his lab. Tessier-Lavigne stepped down as President, accepting responsibility for not supervising the work more closely, as reported in Science Advisor.
Probably the most ironic recent story is that of Francesca Gino, a behavioral scientist and professor at Harvard Business School who has been credibly accused of data falsification in her research on dishonesty (I swear I am not making this up): Honesty Researcher Committed Research Misconduct, According to Newly Unsealed Harvard Report. She was subsequently accused of plagiarism in several of her writings: Embattled Harvard Honesty Professor Accused of Plagiarism. The data falsification was uncovered and reported by the Data Colada group.
Gino subsequently sued the group (and Harvard) for defamation, but a judge recently dismissed the defamation claims against Data Colada while allowing a separate claim against Harvard, alleging gender discrimination, to proceed.
Scientific journals and publishers are seeing more and more science fraud in paper submissions (or perhaps, with the help of technological advances, are doing a better job of searching for and catching it). According to an account in The Intelligencer this past May, much of it has long been the product of paper mills: operations that churn out hundreds of papers, sometimes written with artificial intelligence and often describing studies that were never actually conducted, and sell the “results” to researchers desperate for publications with which to pad their CVs. These researchers then submit the papers to journals for consideration. Sometimes the words, or even the topic of research, are changed from existing publications and the data “reused” to mimic a new study.
These papers can even be “peer-reviewed” by reviewers placed with the journal by the paper mills! According to The Intelligencer, in May of this year the publisher Wiley decided to shut down 19 scientific journals after retracting 11,300 fake papers! (Really, I am not kidding.)
Retraction Watch and Data Colada are two organizations devoted to seeking out and exposing scientific research fraud. According to Ivan Oransky, co-founder of Retraction Watch, the real problem is not the paper mills, which have been known about for a while, but rather the way the scientific publishing system is set up and incentivized:
People are looking only at metrics, not at actual papers. We’re so fixated on metrics because they determine funding for a university based on where it is in the rankings. So, it comes from there and then it filters down. What do universities then want? Well, they want to attract people who are likely to publish papers. So how do you decide that? “Oh, you’ve already published some papers, great. We’re gonna bring you in.” And then when you’re there, you’ve got to publish even more. (Interview in the Intelligencer, May 2024.)
Data Colada, the group that discovered the dishonesty in the dishonesty researcher’s studies, focuses mostly on investigating replicability in social science research, which can lead to the discovery of data manipulation and tampering. Their website, now more than 10 years old, is filled with descriptions of cases of bad science – sometimes the result of honest mistakes, but often intentional.
Another particularly spooky headline appeared in the British Medical Journal: Time to assume that health research is fraudulent until proven otherwise? In the piece, the author quotes Ben Mol, an OB/GYN professor who has been investigating clinical trials and estimates that about 20% are fraudulent. He believes that many clinical trials are “zombie trials” – fatally flawed, with untrustworthy results, such as those described by the BMJ in this account:
Ian Roberts, professor of epidemiology at the London School of Hygiene & Tropical Medicine, began to have doubts about the honest reporting of trials after a colleague asked if he knew that his systematic review showing that mannitol halved death from head injury was based on trials that had never happened. He didn’t, but he set about investigating the trials and confirmed that they hadn’t ever happened.
While it has historically been difficult, if not impossible, to get scientific publications retracted once fraud or mistakes are found, recent headlines suggest this is changing. Earlier this year we read: Biomedical Paper Retractions Have Quadrupled in 20 years – Why? And last year it was reported that More Than 10,000 Research Papers were Retracted in 2023 — a New Record.
This is good for the correction of bad science, but it is scary that so many papers need retraction. The reasons the numbers are so high are many, but at the top of the list is that the scientific community and the public are increasingly aware of, and unhappy with, the cases of scientific fraud being uncovered, especially by groups dedicated to addressing this issue such as Retraction Watch and Data Colada, and they want it identified.
This is not a new problem. Misconduct accounts for the majority of retracted scientific publications, according to a 2012 study published in the Proceedings of the National Academy of Sciences that examined data going back to 1975:
A detailed review of all 2,047 biomedical and life-science research articles indexed by PubMed as retracted on May 3, 2012 revealed that only 21.3% of retractions were attributable to error. In contrast, 67.4% of retractions were attributable to misconduct, including fraud or suspected fraud (43.4%), duplicate publication (14.2%), and plagiarism (9.8%). Incomplete, uninformative or misleading retraction announcements have led to a previous underestimation of the role of fraud in the ongoing retraction epidemic. The percentage of scientific articles retracted because of fraud has increased ∼10-fold since 1975.
In summary, scientific misconduct, including data fabrication, plagiarism, and other questionable research practices, is a growing concern, and the current publishing system’s emphasis on metrics over research quality contributes to the problem. However, increased awareness, the work of watchdog organizations, and a rise in retractions offer hope for addressing this issue and promoting integrity in scientific research.
Now that this “Spooky Science” is getting noticed and written about more broadly in both scientific and general media sources, it will be important to find solutions to this mess. Addressing the issue of scientific misconduct requires a shift away from emphasizing publication metrics and toward promoting robust scientific practices, transparency, and ethical conduct. Education in critical thinking, the scientific method, and ethics is crucial for ensuring the integrity of scientific research.