Every day, consumers are bombarded with an endless ticker tape of news stories, marketing campaigns, and social media posts that allege newly discovered benefits and risks, according to the latest “scientific studies.” The more sensational the discovery, the more attention it receives.

Public trust in science—and sound policy decisions on public health—require tools to separate scientific fact from fiction. And the best way to determine whether the findings of a particular study are reasonable is by looking at the quality of the study design and scientific evidence. Below are some of the most common red flags of bad science.

1. Failure to define key terms

  • Do you understand exactly what is being studied?

A study should clearly identify what variables were studied and in what amounts, along with how the researchers defined and measured any “health harms” or “health benefits.” Studies involving exposure to—or a dosage of—a variable should explain how the exposure was determined (e.g., biological or chemical test, self-report, interview), how long ago it happened, and the amount and duration of the exposure.

Clarity is key. The weaker the details, the weaker the study.

2. Poor study design

  • What is the sample size? How long were the subjects studied?
  • Are the subjects representative of the population being studied?
  • Did the researchers control for other variables that could explain the results?
  • If it was an experimental study, were two or more groups compared? Were participants randomly assigned to groups that differed in exposure?

In general, the smaller the sample size, the weaker the study. The shorter the duration, the weaker the study results. The study should control for as many potential confounding variables as possible, such as location, age, sex, education level, lifestyle, income, and more. All of these factors, if not accounted for, could also explain the study results—and therefore weaken the research.
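The sample-size point is easy to see in a quick simulation: the fewer the participants, the more a study's headline number bounces around from one sample to the next. This sketch is purely illustrative (it draws standard-normal data, not data from any real study):

```python
import random
import statistics

def spread_of_sample_means(n, trials=2000, seed=0):
    """Standard deviation of the sample mean across many repeated studies of size n.

    A bigger spread means a single study's estimate is less trustworthy.
    """
    rng = random.Random(seed)
    means = [statistics.mean(rng.gauss(0, 1) for _ in range(n)) for _ in range(trials)]
    return statistics.stdev(means)

# The estimate steadies as the sample grows (spread shrinks roughly like 1/sqrt(n)).
for n in (10, 100, 1000):
    print(n, round(spread_of_sample_means(n), 3))
```

With 10 participants the simulated study's result swings far more than with 1,000, which is exactly why a tiny study can "find" an effect that a larger one cannot.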

When trying to find a link between two variables, the gold standard is a randomized clinical trial in which participants are randomly assigned to their study groups. If possible, the participants should not know if they are in the control or experimental group. This is known as a “blind study,” and it protects the study from bias that may influence the results.
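Random assignment itself is mechanically simple; what matters is that chance, not the researcher, decides who lands in which group. A minimal sketch (the participant IDs and group names here are hypothetical):

```python
import random

def randomize(participants, seed=None):
    """Shuffle participants and split them evenly into control and treatment groups."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)  # chance, not the researcher, decides group membership
    half = len(pool) // 2
    return {"control": pool[:half], "treatment": pool[half:]}

groups = randomize(range(1, 101), seed=42)
print(len(groups["control"]), len(groups["treatment"]))  # 50 50
```

Fixing the seed makes the assignment reproducible for auditing, while shuffling ensures neither participants' traits nor researchers' preferences determine who gets the exposure.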

When a clinical trial is not possible (it might be impractical or unsafe), researchers may instead observe participants without assigning them to groups and controlling their exposure to a variable to see what occurs spontaneously. While these “natural experiments” and observational studies often provide valuable insights, they rarely establish causal relationships—there are often simply too many assumptions and confounding variables to be sure that one thing is the cause of the other.

3. Lack of replication and review

  • Has anybody replicated the study? Did they produce the same findings?
  • Was the study published in a reputable, peer-reviewed scientific journal?

When researchers independently apply the same data and methods to address the same questions, they should get similar answers. If they do not, the methods and findings of the original study should be rigorously questioned.

The peer review process holds researchers accountable to the highest standards of their field by putting their work under the scrutiny of other experts. If a study has not been peer reviewed, its findings should be viewed with caution until it has been independently vetted.

4. Lack of transparency

  • Did the researchers share all of their methods and data?
  • Did they disclose where study funding came from?

Having a study funded by a person or group with a stake in the outcome is a potential conflict of interest. Did the researchers disclose what influence, if any, the funders had on the study design or authorship, along with any other possible conflicts?

Regardless of who funded the study, researchers should release their data in full so it can be reviewed and replicated to ensure its validity—and its existence. Faulty or fabricated data and incorrect methods are more common than we like to believe.

Transparency also prevents researchers from cherry-picking data to support a preferred hypothesis, or from unintentionally letting their own biases influence the findings.

5. Exaggerated and editorialized results

  • Are the results statistically significant? Or could they be due to random chance or something else entirely?
  • Does the study confuse association with causation?
  • Do the researchers advocate for policy changes in their conclusions?

Disciplined scientists examine evidence and share the study findings—they do not give instructions on what should be done with the findings, particularly from a policy or clinical standpoint. If the study concludes with such an opinion, or a call to action, it is a major red flag.

Instead, researchers with scientific integrity discuss the limitations of the study, identify areas where more research is needed, and propose what else may have influenced the study outcomes with regard to the variables that were examined.

As we engage with thousands of scientific research papers published each year, we should remember that science is a living process. Technology improves, methods evolve, and evidence grows, making our decisions only as good as the information we have and the questions we ask.