In its May 9, 2024, issue, the Journal of the American Medical Association proposed a framework for using causal language when reporting the results of observational studies (Dahabreh et al.). The framework emphasizes the need for subjective judgment in assessing the validity of empirically untestable causal modeling assumptions when deciding whether the results of observational data analyses should be interpreted causally. This subjective approach is a major policy change for the Journal, which until now allowed causal terminology only in articles reporting the results of randomized clinical trials.
What is missing from JAMA’s framework is an appreciation of recent advances in causal analysis of observational data that do not rely on subjective, untestable interpretation by scientific and medical experts (e.g., answers to the six questions JAMA poses in its framework), but instead use objective, empirically testable methods.
The Center is pleased to announce the publication of a new paper in Critical Reviews in Toxicology by Professor Tony Cox, which provides a thorough explanation of an “objective approach to causal analysis of exposure-response relationships in observational data” that is “independently verifiable (or refutable) and data driven, requiring no inherently untestable assumptions.” Relying on causal models (causal Bayesian networks), the approach allows causation to be empirically verified through testable properties, such as Invariant Causal Prediction tests across multiple studies.
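To give a flavor of what an empirically testable property looks like, the toy sketch below illustrates the idea behind an Invariant Causal Prediction-style check: if an exposure-response relationship is causal (and the model is correct), its estimated coefficient should be approximately invariant across environments whose exposure distributions differ. The simulated data and the simple slope-comparison criterion are illustrative assumptions, not the paper’s actual procedure.

```python
# Toy illustration of an Invariant Causal Prediction (ICP)-style check.
import random

random.seed(0)

def simulate(env_shift, n=500):
    """Generate (x, y) pairs where y = 2*x + noise; the distribution of x
    shifts across environments, but the causal mechanism x -> y does not."""
    data = []
    for _ in range(n):
        x = random.gauss(env_shift, 1.0)    # exposure distribution differs by environment
        y = 2.0 * x + random.gauss(0, 0.5)  # causal mechanism is the same everywhere
        data.append((x, y))
    return data

def ols_slope(data):
    """Closed-form simple-regression slope of y on x."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxy = sum((x - mx) * (y - my) for x, y in data)
    sxx = sum((x - mx) ** 2 for x, _ in data)
    return sxy / sxx

# Two "studies" (environments) with different exposure distributions.
slope_a = ols_slope(simulate(env_shift=0.0))
slope_b = ols_slope(simulate(env_shift=3.0))

# Invariance is supported when the estimated exposure-response slope
# is approximately the same in both environments.
print(f"environment A slope: {slope_a:.2f}")
print(f"environment B slope: {slope_b:.2f}")
print("invariance plausible:", abs(slope_a - slope_b) < 0.2)
```

A non-causal association (for example, one driven by an environment-specific confounder) would typically fail such a check, because its estimated slope would shift across environments.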
The paper provides a thorough explanation of how to use causal Bayesian networks and individual conditional expectation (ICE) plots to quantify the changes in health effects caused by real or potential changes in exposures (both good and bad). It addresses real-life complications such as imperfectly controlled confounding, missing data, and measurement error, yielding a framework that is reliable, transparent, and independently testable and verifiable by anyone with access to the data.
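The ICE plots mentioned above can be sketched in a few lines: for each individual, hold the other features fixed and sweep the variable of interest through a range of values, recording the model’s prediction at each point. The fitted model and its coefficients below are made up purely for illustration; the paper’s own worked example concerns annual health insurance charges, smoking, and exercise.

```python
# Minimal sketch of Individual Conditional Expectation (ICE) curves.
# predict_charge is a hypothetical stand-in for a learned predictor of
# annual insurance charges; its coefficients are illustrative only.

def predict_charge(age, smoker, exercise_hours):
    """Hypothetical fitted model: predicted annual charges in dollars."""
    return 2000 + 120 * age + 15000 * smoker - 300 * exercise_hours

individuals = [
    {"age": 30, "smoker": 1, "exercise_hours": 0},
    {"age": 45, "smoker": 0, "exercise_hours": 3},
    {"age": 60, "smoker": 1, "exercise_hours": 1},
]

# An ICE curve holds one individual's other features fixed and sweeps the
# variable of interest; here we sweep weekly exercise hours from 0 to 5.
for i, person in enumerate(individuals):
    curve = [predict_charge(person["age"], person["smoker"], h) for h in range(6)]
    print(f"individual {i}: {curve}")
```

Plotting one such curve per individual (rather than a single averaged curve) reveals how the predicted effect of an intervention varies from person to person.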
- The article’s Introduction discusses the very real limitations of subjective approaches to causality and how objective approaches can overcome them. It proposes an approach that can lead to more practical causal modeling, better protecting individual health and informing health policy decisions.
- The Methods section explains interventional causation; the development of fully specified Directed Acyclic Graph (DAG) models and Bayesian Networks (BNs); the definition and testing of Invariant Causal Prediction (ICP), which checks whether the same causal models apply across a variety of settings and interventions; the application of Bayesian Network inference algorithms; and Causal Discovery Algorithms.
- The third section, Testing Interventional Causal BN Model Assumptions and Predictions, demonstrates how Fully Specified Causal Bayesian Network (CBN) Models make inherently testable assumptions and allow testing of internal and external validity.
- The fourth section, Defining Objective Interventional Causal Effects, discusses how to define and quantify natural direct interventional causal effects in a fully specified Causal Bayesian Network (CBN) model. It answers the question: in what sense is this approach to causality “objective”?
- The fifth section explains the Practical Calculation and Visualization of Objective Interventional Causal Effects using Individual Conditional Expectation (ICE) plots, and gives an example of how they can be used to predict how changes in annual health insurance charges depend on changes in modifiable factors such as smoking and exercise.
- The next section discusses Dealing with Residual and Hidden Confounding, Missing Data, and Measurement Error, including testing for incompletely controlled residual confounding; testing for uncontrolled confounding by unmeasured confounders; using Bayesian Network Inference Algorithms for missing data and measurement error; and a comparison to other approaches in objective causal analysis.
- The paper gives examples of Practical Applications and Implications of the Objective Causation Framework with Partial Knowledge and Information, with one example focusing on Benzene Exposure and Risk of Acute Myeloid Leukemia, and another on Fine Particulate Matter (PM2.5) and Mortality, including:
· Associational Studies of PM2.5 and Mortality
· Potential Outcomes Models: The New Jersey PM2.5-Mortality Study (Wang et al., 2016) as an Example
· Accountability Studies of PM2.5 and Mortality: The Dublin Study as an Example

The section ends with a discussion of applying these lessons, including external validity and the difference an objective approach makes.
- The article’s Conclusion discusses the empirical validation of causal models, including the importance of replacing subjective judgments with empirically testable models and predictions.
The full article can be found here. The Center for Truth in Science has provided the funding for open access for this publication, as we believe it is important to move fields that must rely on observational data, such as health care, health policy and prevention, and risk analysis, toward a more objective approach.