Given the sheer number of scientific papers published each year, it can be challenging for regulators, policymakers, clinicians—and especially the public—to keep up with the evidence on how various exposures, behaviors, and interventions may impact our health and our lives.

Back in the 1970s, the scientific community began formalizing two strategies to determine the strength of evidence emerging from individual scientific studies. These strategies are called systematic reviews and meta-analyses (a subset of systematic reviews).

While review articles had been published before this point, they were not subject to rigorous protocols, such as:

  • A priori research questions
  • Established methodologies for article selection
  • Systematic determination of article quality and forms of bias

These early reviews were often affected by significant partiality, making them untrustworthy.

From the 1980s onward, systematic reviews (labeled as such) have been published in the scientific literature and designated as the highest level of evidence for use in evidence-based medicine by a wide variety of organizations that have published evidence hierarchies.

Even so, critics believe that problems remain when it comes to determining strength of evidence. One of these critics is John Ioannidis of Stanford University, who has cautioned, “Possibly, the large majority of produced systematic reviews and meta-analyses are unnecessary, misleading, and/or conflicted.” He is also a member of one of the committees that established criteria for high-level systematic reviews.

A primary issue has been the lack of a standard definition of “systematic review,” allowing for the term to be used—and misused—in the titles of many published review articles. In this three-part series, I will explain the elements of a strong systematic review, how to interpret strength of the evidence reported, and how its findings can be used in policy, regulation, and clinical practice.

What is a systematic review?

The Cochrane Collaboration defines “systematic review” as a:

“Review of the evidence on a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant primary research, and to extract and analyse data from the studies that are included in the review.”

In 2011, the Institute of Medicine (IOM) of the National Academies of Sciences issued a report titled “Finding What Works in Health Care: Standards for Systematic Reviews.” The report lists detailed criteria for initiating a systematic review, finding and assessing studies to be included, synthesizing the body of evidence, and reporting the results.

These criteria have been applied to systematic reviews in clinical medicine and more broadly to research on environmental exposures and other areas of human health.

What makes a good systematic review?

The following questions can help determine if a systematic review meets the IOM standards[1]:

  • Was the review question clearly defined in terms of population, interventions, comparators, outcomes and study designs (PICOS)?
  • Was the search strategy adequate and appropriate? Were there any restrictions on language, publication status or publication date?
  • Were preventative steps taken to minimize bias and errors in the study selection process?
  • Were appropriate criteria used to assess the quality of the primary studies, and were preventative steps taken to minimize bias and errors in the quality assessment process?
  • Were preventative steps taken to minimize bias and errors in the data extraction process?
  • Were adequate details presented for each of the primary studies?
  • Were appropriate methods used for data synthesis? Were differences between studies assessed? Were the studies pooled, and if so was it appropriate and meaningful to do so?
  • Do the authors’ conclusions accurately reflect the evidence that was reviewed?

It should also be noted which guidelines (if any) were followed in reporting the results of the review. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) is an evidence-based minimum set of standards for reporting the results of systematic reviews and meta-analyses.

In my next post of the series, I will discuss how to interpret the strength of the evidence reported in a well-done systematic review.

[1] Systematic Reviews: ISBN 978-1-900640-47-3