If you live in one of the states that have legalized recreational marijuana, or in a state that is considering it, you may have seen one of the following billboards:
The simple black background and white lettering make them pop, but the statements themselves are even more captivating. The content covers contentious, hot-button topics: the opioid epidemic, health spending, and marijuana legalization. But what gets left out is the context: for most readers, these statements imply causality, despite there being limited evidence for these relationships to date.
To the casual observer, these are impressive, exciting statements. A beneficial effect of a historically outlawed and much-maligned substance is indeed fascinating! A more cautious observer might wonder about the source of these claims, and, indeed, the fine print appears to contain references! The more cautious observer might now be appeased.
But really, we should all pause here, for two reasons:
Firstly, these billboards are advertising. I will not get into a further discussion of advertising, political or otherwise; however, I will note that “Weedmaps,” the billboard producer, is poised to be your go-to search engine and rating site for marijuana strains and producers.
Secondly, causality is complex and elusive. The two studies cited on these particular billboards (Bachhuber et al. 2014; Bradford & Bradford 2017) are ecological in design, meaning they use aggregated data (in this case, states) as the unit of analysis. The variables in these analyses are features of the states, including the main variable of interest, implementation of medical cannabis laws (note that “medical” is missing from both billboards). This research design is appropriate for questions about the average effects of medical cannabis laws on an outcome of interest (more on this later). But these findings are subject to residual confounding at the state level. In addition, they are subject to the ecological fallacy in their interpretation, and as we all know, interpretation is what matters most.
Both studies include other state-level variables that might explain the change in their outcomes over time, such as the implementation of statewide Prescription Drug Monitoring Programs (PDMPs). PDMPs were implemented in many states over the period studied and, based on similar analytic designs, may be largely responsible for improvements in opioid outcomes. Both studies account for PDMPs, and the first also considers several other opioid laws and policies that effectively restrict availability. The authors also performed several robustness checks. For example, a secondary model adjusted for state-level linear time trends in the outcome (i.e., including a random slope for each state). The authors note that this technique may account for changes in concepts that are difficult to measure, such as attitudes, as well as other time-varying confounders. The study also analyzed negative controls: death rates from conditions supposedly not associated with cannabis (e.g., heart disease and septicemia), which the authors would expect to remain unaffected by legalization.
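The general shape of this kind of analysis — a state-year panel with a policy indicator plus state and time controls — can be sketched with a toy two-way fixed-effects regression. Everything below (the number of states, the effect size, the noise level) is simulated purely for illustration and has no connection to the data in the cited studies.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy state-year panel: 10 hypothetical states observed over 8 years.
n_states, n_years = 10, 8
state = np.repeat(np.arange(n_states), n_years)
year = np.tile(np.arange(n_years), n_states)

# Suppose half the states implement a law starting in year 4.
law = ((state < 5) & (year >= 4)).astype(float)

# Simulated outcome: state baselines, a common time trend, a true
# law effect of -0.3, and noise. All numbers are made up.
true_effect = -0.3
y = (rng.normal(0.0, 1.0, n_states)[state]   # state fixed effects
     + 0.1 * year                            # common linear trend
     + true_effect * law
     + rng.normal(0.0, 0.05, len(law)))      # noise

# Two-way fixed-effects OLS: the law indicator plus state and year dummies.
X = np.column_stack([
    law,
    np.eye(n_states)[state],        # state dummies
    np.eye(n_years)[year][:, 1:],   # year dummies (one dropped)
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[0])  # estimated law effect; should land close to -0.3
```

The point of the sketch is that the estimate is only as good as the controls: any state-level change that coincides with the law (a PDMP, shifting attitudes) and is left out of `X` gets absorbed into the law coefficient.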
Despite these checks, it is unlikely that these analyses accounted for all potential confounding variables, especially those that change over time. This is almost always the case, as it is virtually impossible to observe, let alone control for, all sources of confounding. Indeed, adjusting for linear trends produced results that were only marginally statistically significant. With states as the unit of analysis, the inclusion of a large number of explanatory variables quickly becomes a high-dimensional problem: there are only 50 states with a few years of data, but potentially many more variables than observations. The question then becomes whether this residual confounding is enough to change our interpretation of these studies.
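To see how quickly the state-level design runs out of degrees of freedom, consider a toy design matrix; the covariate count here is hypothetical, chosen only to show what happens when controls and their interactions outnumber state-year observations.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 states observed for 4 years gives only 200 state-year observations.
n_obs = 50 * 4
# Hypothetical: dozens of policy controls, each interacted with time,
# can easily exceed the number of observations.
n_covariates = 250
X = rng.normal(size=(n_obs, n_covariates))

# With more columns than rows, the design matrix cannot be full column
# rank, so OLS coefficients are no longer uniquely identified.
print(np.linalg.matrix_rank(X), "<", n_covariates)
```

Long before the design matrix is literally rank-deficient, estimates become unstable, which is one reason residual confounding is so hard to chase down in ecological studies.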
Interpretation of these studies (especially in the media) may suffer from the ecological fallacy, a logical fallacy that occurs when inference drawn at the group level is assumed to hold for individuals. From these findings, we cannot make any inference about individuals’ patterns of opioid and cannabis use (i.e., whether individuals substitute one for the other) or about individuals’ underlying risk of negative opioid outcomes. In other words, we cannot link marijuana legality to the use patterns of individuals.
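The ecological fallacy can be made concrete with a small constructed example: state-level averages can show a relationship whose sign is reversed at the individual level. All numbers below are invented purely for illustration.

```python
import numpy as np

# Five hypothetical "states": state means lie on a downward-sloping
# line, but within every state the individual-level relationship is
# upward. All numbers are invented for illustration.
mean_x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
mean_y = 10.0 - 2.0 * mean_x            # negative relationship across states

offsets = np.array([-0.5, 0.0, 0.5])    # three individuals per state
xs = [mx + offsets for mx in mean_x]
ys = [my + offsets for my in mean_y]    # slope +1 within each state

agg_slope = np.polyfit(mean_x, mean_y, 1)[0]      # fit on state means
within_slope = np.polyfit(xs[0], ys[0], 1)[0]     # fit within one state

print(agg_slope, within_slope)  # -2.0 across states, +1.0 within a state
```

An analyst who only sees the state means would conclude the relationship is negative, while every individual in the data experiences a positive one — which is why state-level findings about cannabis laws and opioid deaths cannot be read as statements about individual substitution.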
So where do we go from here?
The past decade has been something of an ecological study renaissance. This is not a bad thing. Such studies are useful for hypothesis generation, and population-level risk factors are very relevant in public health and medicine: they may be important effect modifiers, or causes of exposure to individual-level risk factors. Differences in state laws can make for great “natural experiments,” in which groups of people are “randomized” to an exposure by a natural process and a pre-post assessment can be made.
But most importantly, it comes down to inference. Inference from these studies might inform marijuana policy, but it should not inform interventions on individuals. These studies have generated a great deal of discussion, and there is considerable room for misinterpretation (sample headline: “How marijuana is saving lives in Colorado”).
On the bright side, the scientific community recognizes this problem, and it is likely that additional studies of both individual- and population-level effects will be undertaken. A recent well-designed study from RAND (Powell et al. 2018) replicated Bachhuber et al., finding that adding more state-level variables and additional years of data to the model nullifies the effect of medical marijuana laws on opioid overdose mortality. Moreover, the authors found that a more meaningful effect on opioid outcomes comes through protected and operational dispensaries (i.e., access), with the largest effect seen during a period of relatively lax dispensary regulation in California, Washington, and Colorado.
How to ensure that new investigations will be high quality and unbiased is another question. Regardless, the tide for marijuana research appears to be turning. As more and more studies are published, it is imperative that researchers be clear about the limitations of their analyses, especially when their results might end up on a billboard.