Information overload is an ironic problem to have in a field that celebrates discovery. About 2.5 million scientific articles are published each year in journals of science and medicine, a volume of knowledge-sharing that is unsustainable for even the most voracious of readers.
For this reason, among others, we rely on peer-reviewed academic journals to curate the most significant findings across their respective fields. However, even this process has its challenges.
More and more (and more) journals
The rapidly growing number of publications makes the body of research on any given issue harder to navigate than it once was. According to the James G. Martin Center for Academic Renewal:
“In 1800, only about thirty scientific and medical journals existed; by 1900, the number had grown to 700. Now, there are estimated to be more than twenty thousand.”
Professionals are scrambling to keep up with the output of all these journals, some turning to alternative curators such as social media and online aggregation services, and others forming groups of peers and colleagues to “divide and conquer” the hours and hours of reading.
Bias toward positive findings
Preferential treatment of positive, headline-grabbing findings has created a “File Drawer Problem”: a publication disparity between studies that find links between variables and outcomes and studies that find no association. This incentivizes researchers and editors to push for more studies with positive results, increasing the number of Type I errors (false positives).
The effects of these false positive findings extend beyond the journals themselves. Incorrect claims find their way into courtroom rulings and regulations that affect the lives of many, and in some cases unfairly shift the landscape of entire markets.
Inaccuracies and falsehoods still occur
Incorrect or fabricated data persist in science and medicine, even after peer review. One analysis of 526 trials submitted to Anaesthesia found that 14% contained false data. A second investigation of the same journal found that significant percentages of individual patient data from trials were false, including 100% in studies from Egypt, 75% from Iran, 54% from India, and 46% from China. In total, these investigators suggest that hundreds of thousands of fraudulent trials have been published.
As recently as August 2022, a shocking investigation by Science found that apparently falsified images, subsequently published and cited by close to 2,300 journal articles, have misdirected how doctors and researchers have searched for treatments for Alzheimer’s disease over the past 20 years.
We cannot allow the urgency of information-sharing to supersede healthy scientific skepticism. Credibility and trust in science depend on putting research quality over research quantity.
What can we do?
The field is working to find innovative solutions to the publication of inaccurate science. These include such strategies as publishing formats that require pre-registration and peer review of research protocols before data collection begins.
At the same time, to assist non-scientist readers of the literature, the Center has compiled a list of Research Credibility Criteria that can be used to assess the quality and credibility of knowledge sources:
- Were key terms defined in the introduction? If substances were being studied, were they carefully described? How were reported effects determined?
- Did the study state how exposure or dosage was determined (e.g., biological/chemical test, self-administered questionnaire, interview)? How long ago did the exposure take place? What was the amount and duration of the exposure?
- If it was a human observational study, does the sample of individuals or groups properly represent the population being studied? Is the sample size large enough? If it was a case-control study, are the cases and controls well matched (e.g., age, sex, education, lifestyle, income)?
- If it was an experimental study, were two or more groups compared? Were the groups similar in age, sex, education, and income? Were individuals randomly assigned to groups that differed in exposure?
- Did the researchers discuss the limitations of the study? Did they identify areas where additional research is needed? Did the authors propose what other factors may have influenced their findings—other than the exposure in question?
- Were the researchers transparent about where study funding came from? Did they disclose what, if any, influence the funders had in the study design, authorship of the paper, or other possible conflicts?
- If the study was conducted on animals, did the authors appropriately translate the doses administered to animals to determine how the exposure could affect humans?
- Whether human or animal, was the study conducted according to established rules of ethics? Did the study receive ethics or human subjects’ approval?
- Have the study results been replicated by other independent scientists? If so, have the replications been published in a reputable, peer-reviewed journal? A lack of replication should alert readers that the study’s conclusions are still premature.
- Did the authors give readers access to their data and methods, so that readers can know exactly what was done in the study and how it was done?
Protecting scientific integrity
The consequences of inaccurate research extend beyond the credibility of journals. These same studies are sometimes cited in judicial decisions, regulations, standards of care, products, policies, and protocols at every level, from the U.S. Congress to local schools and hospitals.
The Center for Truth in Science is committed to protecting the integrity of scientific evidence and results. We support the ongoing improvements many professionals are trying to make to the publication process. Meanwhile, we are helping where we can by showing members of the public who want to stay abreast of scientific discoveries how to be good consumers of information.