This year’s Nobel Prize in Economics was awarded to three researchers—David Card, Joshua Angrist, and Guido Imbens—who used “natural experiments,” observational studies without randomization, to draw conclusions about cause-and-effect relationships in labor markets and the economic returns to education.
While their work is not identical to the quest to better define the probability of causation for environmental chemicals, we found it encouraging that the economists’ prize-winning research was motivated by the same challenge: determining causation when randomized experiments are impossible. The Center has made advancing the science of probability of causation a current priority.
We spoke with Dr. Tony Cox, a board member of the Center and president of Cox Associates, about the work of these Laureates, why determining causation is necessary to make the best possible public policy decisions, and where scientists can go from here.
CTS: Most of society’s big questions cannot be answered using randomized controlled trials. So, how were people determining causality for purposes of policymaking before this Nobel Prize-winning work?
TC: Tragically, they often weren’t. The United States—among other places—has put decades of research and investment into establishing associations (e.g., between air pollution and mortality rates), and many regulators and policymakers simply treat evidence of association as if it were evidence of causation. But association, by itself, does not give us a single clue about the effects of interventions.
There’s been a lot of justified excitement recently over predictive analytics. But such prediction is passive—if I see one thing, such as elevated exposure, how confidently can I expect to see something else, such as elevated health effects?
This is a fundamentally different question from causal analytics, which deals with interventions. Causal analysis asks: how will outcomes change if we do things differently? Are we addressing a cause or a symptom? These are the kinds of questions that smart regulators acting in the public interest need to ask.
What’s great about this Nobel Prize-winning work is that it addresses the right questions. I find it difficult to overemphasize the importance of asking the right questions, because we’ve spent so much money addressing the wrong questions.
CTS: Is determining causal relationships ever possible with a natural experiment, or is this framework chasing the closest approximation to certainty, given that there are always so many unmeasured variables?
TC: Sometimes yes, the constraints that the data imply are strong enough to uniquely determine causal relationships. Other times, you might end up with multiple competing models and need to find other data to determine which is correct. When experiments are not an option, you must either work with many alternative models and not resolve that uncertainty, or turn to principles such as invariant causal prediction and collect lots of data from many diverse situations, so you can see what’s going on.
You can draw causal inferences if you’re willing to make assumptions. Then the question is, are those conclusions driven by the assumptions or the data? If it is a mix of both, how much rests on those assumptions, and how well have those assumptions been tested?
The methods that have been employed in economics often make quite strong assumptions, and those assumptions are frequently either untestable, or testable but left untested.
CTS: How can we minimize the limitations of relying on untested and untestable assumptions in the field of environmental health?
TC: There are several schools of causal analysis that rely to varying degrees on untested and untestable assumptions. One is counterfactual causality and modeling of potential outcomes, which typically depends heavily on assumptions. This is the school of thought embraced by the Nobel Prize winners.
Others favor the interventional school, which studies changes across several different settings in search of a causal rule that can be verified to hold across settings. This approach puts more emphasis on experience, seeing what happens in a variety of settings, rather than guessing using statistical models.
Even so, the interventional approach assumes that if a conditional probability relationship holds in many different settings, it’s because it is causal—and that it will continue to hold in other settings as well. It is always possible that this won’t be so, but the risk gets smaller as the relationship keeps passing the test in additional settings.
Both of these schools are superior to the “weight of evidence” approach that is widely used in regulatory risk assessment circles. This approach was developed by Sir Austin Bradford Hill in the 1960s as a set of considerations that might support a judgment of causation, among them the strength and consistency of the association, its biological plausibility, and others. Hill suggested that when an association satisfies these considerations, one might conclude it is causal. However, he did not define what he meant by “causal” or show that his intuitive considerations avoided erroneous causal conclusions.
This approach doesn’t pass the straight face test today in the discipline of causal analysis. Yet, it is used by an overwhelming majority of our regulatory agencies.
That is why establishing a discipline of causal analysis within the social sciences is so important. There are differences among the schools, but as a whole they ask the right questions in the right way—using data, not opinions or intuitions.
CTS: Where can scientists go from here to keep improving on this work?
TC: The specific techniques used within the discipline of causal analysis can be made more robust by dropping reliance on untested assumptions. Professor Judea Pearl has an approach to causation that relies less on assumptions, especially untestable assumptions, than many of the techniques currently used in economics.
Researchers could also embrace additional principles such as invariant causal prediction (ICP), the idea that a causal law has an element of universality about it: a true causal law will continue to apply across multiple settings and under different conditions and interventions. This would allow us to get more science and less assumption into the causal analysis process.
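To make the invariance idea concrete, the following is a minimal illustrative sketch in Python (not the Laureates’ method, and not a full ICP implementation). For each candidate set of causes it checks whether pooled regression residuals look the same across environments, then intersects the accepted sets; the toy data, function names, and the crude residual-comparison tests are assumptions made for illustration only.

```python
import itertools

import numpy as np
from scipy import stats


def invariance_pvalue(X, y, env, subset):
    """Crude invariance check for one candidate set of causes.

    Fit a single pooled linear regression of y on X[:, subset], then test
    whether the residuals have the same mean (ANOVA) and spread (Levene)
    in every environment. Real ICP uses more careful tests; this is only
    a sketch of the idea.
    """
    Z = np.ones((len(y), 1))
    if subset:
        Z = np.column_stack([Z, X[:, subset]])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    groups = [resid[env == e] for e in np.unique(env)]
    return min(stats.f_oneway(*groups).pvalue, stats.levene(*groups).pvalue)


def invariant_causal_prediction(X, y, env, alpha=0.01):
    """Intersect every subset of predictors whose residuals look invariant."""
    accepted = [set(s)
                for r in range(X.shape[1] + 1)
                for s in itertools.combinations(range(X.shape[1]), r)
                if invariance_pvalue(X, y, env, list(s)) > alpha]
    return set.intersection(*accepted) if accepted else set()


# Toy data: two environments that intervene on x0 (a true cause of y),
# while x1 is a non-causal correlate whose link to y flips between settings.
rng = np.random.default_rng(0)
env = np.repeat([0, 1], 500)
x0 = rng.normal(loc=np.where(env == 0, 0.0, 2.0))
y = 2.0 * x0 + rng.normal(size=1000)
x1 = np.where(env == 0, 0.5, -0.5) * y + rng.normal(size=1000)
print(invariant_causal_prediction(np.column_stack([x0, x1]), y, env))
# Typically prints {0}: the stable cause survives; the unstable correlate does not.
```

In this toy setting the true cause is recovered because its relationship to the outcome is the same in both environments, while the correlate’s relationship shifts; full ICP implementations use more careful statistical tests, but the underlying logic is the same.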
What we’ve needed is a science of answering causal questions. And part of science is that other people can independently use the same data to address the same questions, and if they use the same methods, they should get the same answers.
In predictive analysis, we now have off-the-shelf solutions that allow different people to input the same data and get the same answer with the push of a button. We’re not quite there yet with causal analysis. However, using ICP would let us more tightly constrain possible truths about causal relations in the world.
CTS: The Royal Swedish Academy of Sciences stated, “We now have a coherent framework which, among other things, means that we know how the results of such studies should be interpreted.” What is the importance of having a single standard methodology?
TC: We need to give decision-makers—especially those with non-science backgrounds—a clear and objective way to assess the relative strength of evidence surrounding causation. Right now, in the absence of a discipline, we have “experts” who testify that exposure X caused harm Y, even in circumstances where no quantitative analysis would draw that conclusion.
It is the Wild West right now for causal claims and expert judgments. We are relying on weight-of-evidence causal claims and judgments that are not held accountable for much of anything, not even essential logical and statistical consistency properties such as the requirement that effects should not be conditionally independent of their direct causes. We have a lot of opinions masquerading as science.
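As one small example of what such a consistency check could look like (a sketch under assumed linear relationships, with hypothetical variable names and toy data, not a procedure used by any agency), we can at least verify that a claimed effect remains statistically dependent on its claimed direct cause after adjusting for measured covariates:

```python
import numpy as np
from scipy import stats


def dependence_check(x, y, z):
    """Partial-correlation check: is y still associated with x after
    linearly adjusting both for the covariates z?

    Regress x and y each on z, then correlate the residuals. A tiny
    p-value means y is NOT conditionally independent of x given z, which
    is the minimum we should demand before calling x a direct cause of y.
    """
    Z = np.column_stack([np.ones(len(y)), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)  # (partial correlation, p-value)


# Toy data: exposure x affects outcome y, and z confounds both.
rng = np.random.default_rng(1)
z = rng.normal(size=2000)
x = z + rng.normal(size=2000)            # exposure, partly driven by the confounder
y = 0.3 * x + z + rng.normal(size=2000)  # outcome, driven by x and by the confounder
print(dependence_check(x, y, z))  # clearly nonzero partial correlation, tiny p-value
```

Passing a check like this does not establish causation, of course, but failing it should at least make one pause before asserting a direct causal claim.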
Establishing a formal discipline of causal analysis, regardless of which school it follows, is a big step toward getting people to stop and ask: What is the causal question being asked here? Can it be addressed with the available data? What are the limitations, and what can we reasonably conclude?
The idea that causality is not a matter of opinion, but in fact can be addressed rigorously and objectively from data, is a revolutionary concept indeed.