The Alliance For Natural Health

“Correlation Does Not Imply Causation!”

It’s a famous meme, and one that is frequently on the FDA’s lips. But it doesn’t mean what a lot of people try to make it mean.

“Correlation does not imply causation” actually comes from the world of statistics. It means that finding a correlation—a connection between two events—is not in and of itself sufficient proof that one event caused the other. The FDA would say that just because anywhere from 730,000 to 14,600,000 Americans died in the past decade after being given FDA-approved drugs and vaccines or after procedures using FDA-approved medical devices, it doesn’t prove that those FDA-approved products were the proximate cause of death.

Edward Tufte, an American statistician and professor emeritus of political science, statistics, and computer science at Yale University, famously wrote, “Correlation is not causation, but it sure is a hint.”

Before you can prove causation between two factors, there must—by definition—have been a correlation between the two in the first place! Correlations need to be confirmed as real, of course, and then other possible causational relationships need to be systematically explored. In the end, however, correlation can be used as powerful evidence for a cause-and-effect relationship between a treatment and benefit, a risk factor and a disease, or a social or economic factor and various outcomes.
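Both steps can be made concrete. Below is a minimal sketch, using only simulated data (nothing here is drawn from FDA or clinical records), of the first step: checking that an observed correlation is real rather than a sampling fluke, via a simple permutation test.

```python
# A minimal sketch (simulated data only) of confirming that a
# correlation is "real" rather than a sampling fluke, using a
# permutation test on the correlation coefficient.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 100
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)  # x and y are genuinely related here

observed_r = np.corrcoef(x, y)[0, 1]

# Shuffle y repeatedly: if the observed correlation were an accident,
# shuffled (relationship-destroyed) data would match it fairly often.
n_perm = 10_000
perm_rs = np.empty(n_perm)
for i in range(n_perm):
    perm_rs[i] = np.corrcoef(x, rng.permutation(y))[0, 1]

p_value = np.mean(np.abs(perm_rs) >= abs(observed_r))
print(f"observed r = {observed_r:.3f}, permutation p = {p_value:.4f}")
```

If shuffled data almost never reproduce a correlation as strong as the observed one, the correlation is unlikely to be an accident; establishing causation then still requires the systematic exploration described above.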

Logical Fallacies

If the FDA and others wish to dismiss any causal link between specific FDA-approved pharmaceuticals and the deaths of the patients who took them, we would point out that there is also the opposite fallacy—dismissing correlation entirely, as if it did not suggest causation at all. As Steven Novella, MD, an academic clinical neurologist at the Yale University School of Medicine, shows, this would dismiss a large swath of important scientific evidence:

For example, the tobacco industry abused this fallacy to argue that simply because smoking correlates with lung cancer that does not mean that smoking causes lung cancer. The simple correlation is not enough to arrive at a conclusion of causation, but multiple correlations all triangulating on the conclusion that smoking causes lung cancer, combined with biological plausibility, does.

Correlation must always be put into perspective. There are two basic kinds of clinical scientific studies that may provide evidence of correlation—observational and experimental. Experimental studies are ones in which some intervention is given to a study population. In experimental studies it is possible to control for many variables, and even reasonably isolate the variable of interest, and so correlation in a well-designed experimental study is very powerful, and we generally can assume cause and effect. If active treatment vs. placebo correlates with a better outcome, then we interpret that as the treatment causing the improved outcome.
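Novella’s experimental case can be sketched in a few lines of code. Everything below is simulated: the “treatment” and “placebo” arms are hypothetical, with outcome scores invented purely for illustration.

```python
# A hedged sketch of the experimental case: all numbers are simulated;
# "treatment" and "placebo" are hypothetical arms, not a real trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Random assignment balances confounders between the arms, so a
# systematic outcome difference is reasonably read as a treatment effect.
placebo   = rng.normal(loc=50.0, scale=10.0, size=200)  # outcome scores
treatment = rng.normal(loc=55.0, scale=10.0, size=200)  # shifted by +5

# Two-sample t-test: does treatment correlate with a better outcome?
t_stat, p_value = stats.ttest_ind(treatment, placebo)
print(f"mean difference: {treatment.mean() - placebo.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

Because the only systematic difference between the two groups is the intervention itself, the correlation between treatment and improved outcome is read as cause and effect, exactly as Novella describes.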

Daniel Engber puts it this way:

To say that correlation does not imply causation makes an important point about the limits of statistics, but there are other limits, too, and ones that scientists ignore with far more frequency. In The Cult of Statistical Significance, the economists Deirdre McCloskey and Stephen Ziliak cite one of these and make an impassioned, book-length argument against the arbitrary cutoff that decides which experimental findings count and which ones don’t. By convention, we call an effect “significant” if the chances of its deriving from a twist of fate—as opposed to some more genuine relationship—are less than 5 percent. But as McCloskey and Ziliak (and many others) point out, there’s nothing special about that number and no reason to invest it with our faith.
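The arbitrariness McCloskey and Ziliak describe is easy to see in a quick simulation. Each dataset below is generated by exactly the same process; only the random draw differs, yet the p-values scatter around 0.05, so the conventional cutoff will typically bless some runs as “significant” and dismiss others.

```python
# Illustration of the arbitrary 5 percent cutoff criticized by
# McCloskey and Ziliak. Every dataset comes from the SAME process,
# yet sampling noise scatters the p-values around 0.05.
import numpy as np
from scipy.stats import pearsonr

def noisy_pair(n, slope, seed):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    y = slope * x + rng.normal(size=n)
    return x, y

for seed in range(10):
    x, y = noisy_pair(n=30, slope=0.35, seed=seed)
    r, p = pearsonr(x, y)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"seed {seed}: r = {r:+.3f}, p = {p:.3f} -> {verdict}")
```

Identical underlying relationships landing on opposite sides of the line is precisely why the 5 percent threshold deserves no special reverence.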

As the renowned British scientist and statistician Karl Pearson wrote in his seminal 1892 work, The Grammar of Science (one of Einstein’s favorite books), “The higher the correlation, the more certainly we can predict from one member what the value of the associated member will be. This is the transition of correlation into causation.”
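For reference, the quantity Pearson is describing is his product-moment correlation coefficient, a standard definition that can be stated independently of this article:

```latex
r_{XY} \;=\; \frac{\operatorname{cov}(X,Y)}{\sigma_X\,\sigma_Y}
       \;=\; \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}
                  {\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}
```

The coefficient runs from -1 to +1; the closer |r| is to 1, the more precisely the value of one variable predicts the value of the other, which is the prediction Pearson has in mind.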

Adverse Event Reports

Some adverse events have nothing whatsoever to do with the drug but merely happened at the same time and may have had another cause. An adverse event report (AER) does not necessarily mean that the FDA approved a dangerous drug—it could be that a doctor incorrectly prescribed it, or that someone took too much on purpose or by mistake. However, thousands—or hundreds of thousands—of hospitalizations after taking the same drug really do imply causation: an overwhelming amount of data all pointing the same way is far more indicative of cause than small numbers would be. Even if half the AERs were attributable to user error, the number of AERs from the drugs themselves would still be overwhelmingly high.

Gary Null et al. point out that drugs generally are tested on individuals who are fairly healthy and who are not taking other medications that could interfere with the findings. But when these new drugs are declared “safe” and enter the prescription books, they are naturally going to be used by people who are on a variety of other medications and have a lot of other health problems. Then a new phase of drug testing called “post-approval” comes into play: the documentation of side effects once drugs hit the market.

In 1990, the federal government’s General Accounting Office (now the Government Accountability Office) found that of the 198 drugs approved by the FDA between 1976 and 1985, 102 (or 51.5%) had serious post-approval risks. These included heart failure, myocardial infarction, anaphylaxis, respiratory depression and arrest, seizures, kidney and liver failure, severe blood disorders, birth defects and fetal toxicity, and blindness.

Correlation may not prove causation, but it certainly indicates a most grievous problem in the FDA approval process.
