When I joined the Center for Truth in Science, certain elements of our programs came with a bit of a learning curve. While the early years of my career were spent in the lab, the past decade of my professional life has been a mix of roles across the business and nonprofit sectors.
The scientific arena has its own language, and I needed to brush up on it.
I began with the vocabulary surrounding one of the Center’s largest initiatives: our grant-funded independent research. We award funding to top researchers to perform critical and systematic reviews on topics at the intersection of science, justice, and the economy.
As I looked over the conclusions of these reviews, a few phrases kept showing up again and again: limited evidence of no association, suggestive evidence of no association, insufficient evidence, and so on.
Immediately I had questions. Where did those phrases come from? What do they mean? What’s the difference between limited evidence and insufficient evidence?
Since most people do not come from a scientific research background, I suspect that others may have the same questions. Below is my best attempt to answer them.
What is the purpose of a systematic review?
Critical and systematic reviews exist to examine the quality of primary research on a given topic and to synthesize multiple studies into one central conclusion, which helps us keep up with the best available scientific evidence.
When the Center funds such reviews, we remain completely hands-off. We don’t interfere with the scientists’ work or their choice of papers to include in a review, and we are not involved in writing the papers or choosing where to submit them for peer review and publication.
Nearly all of the findings from the Center-funded reviews completed to date, on PFAS, talc, ethylene oxide, and glyphosate, have been published in peer-reviewed scientific journals. Some have been cited by researchers in other journals, including Foods and Frontiers in Public Health.
Why do I keep seeing the same terms used in the conclusions of reviews?
By definition, systematic reviews must be methodical, and, well, systematic. This includes having a standardized way to synthesize the evidence into one central conclusion, which allows us to compare the strength of results across studies.
It also includes an open, transparent, and ‘a priori’ method for rating the quality of the papers included in the review. Researchers refer to these types of standard frameworks as ‘evidence hierarchies.’
There are a few different types of evidence hierarchies, including the GRADE guidelines and the framework developed by the Institute of Medicine (IOM) of the National Academies of Sciences, Engineering, and Medicine, both of which we have summarized in a previous Center blog post. However, it is the IOM framework that has been most relevant to Center-funded reviews to date.
What is the IOM framework?
The IOM framework is widely used in the scientific arena, especially in toxicology reviews. It was originally developed by an IOM committee (formed at the request of Congress) to classify the strength of evidence for links between Agent Orange exposure and health issues experienced by Vietnam War veterans.
The committee created an evidence hierarchy consisting of four categories:
1. Sufficient evidence of an association
This means that “a positive association between exposure… and the outcome must be observed in studies in which chance, bias, and confounding can be ruled out with reasonable confidence … Experimental data supporting biologic plausibility strengthen the evidence of an association but are not a prerequisite and are not enough to establish an association without corresponding epidemiologic findings.”
2. Limited or suggestive evidence of an association
This is the case when the evidence suggests “an association between exposure and the outcome in studies of humans, but the evidence can be limited by an inability to confidently rule out chance, bias, or confounding. Typically, at least one high-quality study indicates a positive association, but the results of other studies could be inconsistent.”
3. Inadequate or insufficient evidence to determine an association
This is used when the “available human studies may have inconsistent findings or be of insufficient quality, validity, consistency, or statistical power to support a conclusion regarding the presence of an association. Such studies might have failed to control for confounding factors or might have had inadequate assessment of exposure.”
4. Limited or suggestive evidence of no association
This level is chosen when “several adequate studies covering the ‘full range of human exposure’ are consistent in showing no association with exposure to [the observed substance] at any concentration and [have] relatively narrow confidence intervals.”
“A conclusion of ‘no association’ is inevitably limited to the conditions, exposures, and observation periods covered by the available studies, and the possibility of a small increase in risk related to the magnitude of exposure studied can never be excluded.”
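For readers who find a concrete artifact helpful, below is a minimal sketch in Python that encodes the four categories as an ordered scale. Everything in it, including the AssociationEvidence name, is our own illustrative invention; the IOM framework itself is a qualitative rubric applied by expert committees, not an algorithm.

```python
from enum import IntEnum

class AssociationEvidence(IntEnum):
    """The four IOM evidence categories, ordered from strongest evidence
    of an association to strongest evidence of no association.
    (Illustrative encoding only; the IOM rubric is qualitative.)"""
    SUFFICIENT_EVIDENCE_OF_ASSOCIATION = 1
    LIMITED_OR_SUGGESTIVE_EVIDENCE_OF_ASSOCIATION = 2
    INADEQUATE_OR_INSUFFICIENT_EVIDENCE = 3
    LIMITED_OR_SUGGESTIVE_EVIDENCE_OF_NO_ASSOCIATION = 4

# Example: a hypothetical review conclusion tagged with its category.
conclusion = AssociationEvidence.INADEQUATE_OR_INSUFFICIENT_EVIDENCE
print(conclusion.name.replace("_", " ").lower())
# prints: inadequate or insufficient evidence
```

Treating the categories as an ordered scale mirrors what the hierarchy is for: letting readers compare the strength of conclusions across reviews at a glance.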
What are the characteristics that matter most when assessing evidence quality?
The IOM outlined the key characteristics researchers should focus on when assessing the quality of study evidence in its book, Finding What Works in Health Care: Standards for Systematic Reviews, which is available as a free download from the National Academies Press.
It outlines eight basic characteristics of quality that should be used to evaluate evidence: risk of bias, consistency, precision, directness, reporting bias, and — for observational studies — strength of association, dose-response association, and plausible confounding that would change an observed effect.
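As a companion sketch, those eight characteristics could be recorded as a simple checklist for each body of evidence. Again, this is a hypothetical illustration of ours, not a tool from the IOM book; the class and field names are assumptions.

```python
from dataclasses import dataclass, fields

@dataclass
class EvidenceQuality:
    """The eight IOM quality characteristics as a checklist.
    True means the characteristic strengthens the body of evidence.
    (Illustrative only; field names are our own shorthand.)"""
    low_risk_of_bias: bool
    consistency: bool
    precision: bool
    directness: bool
    low_reporting_bias: bool
    strong_association: bool         # observational studies
    dose_response: bool              # observational studies
    confounding_accounted_for: bool  # observational studies

# Example: tally how many characteristics a hypothetical evidence base meets.
example = EvidenceQuality(True, True, False, True, True, False, True, False)
met = sum(getattr(example, f.name) for f in fields(example))
print(f"{met} of {len(fields(example))} quality characteristics met")
```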
Understanding the language of science keeps us all on the same page. The Center serves as a neutral translator for decision makers across the policy, regulatory, and legal arenas, helping everyone make educated decisions grounded in the strongest available scientific evidence and a clear sense of what that evidence actually says. We could all use a refresher course from time to time.