There was recently a Twitter trend of people trying to describe programming in five words. The responses ranged from funny to puzzling to inspiring.
At our celebration of the end of the academic year, students, post-docs, and faculty of the CHOICE Institute decided to hold a similar contest at our weekly seminar, instructing attendees to “describe health economics and outcomes research (HEOR) in five words.” Here are a few of my favorite entries:
In recent years, the U.S. pharmaceutical world has been abuzz with the emergence of biosimilars—products that are very similar, but not identical, to a reference biologic product. To date, twelve biosimilars have received FDA approval under the Biologic Price Competition and Innovation Act (BCPIA), which was passed by Congress in 2010. However, among those biosimilars that have been licensed by the FDA, only two have been marketed. While many factors influence the time lag between FDA approval and biosimilar marketing, complex patent litigation may well contribute to delays in market launch. So, let’s explore the intricate information exchange surrounding FDA biosimilar application reviews and key litigation decisions in the biosimilar landscape.
What makes biosimilars different from generics?
First, it’s important to understand what makes biosimilars and their reference biologic products so unique. Unlike a generic for a small molecule drug, a biologic is manufactured in a living system—a complex process that is extremely challenging to replicate exactly and thus yields products that are similar to, but not exact copies of, a reference biologic. The process is also expensive. Estimated R&D costs for a biosimilar range from $40 to $300 million, and development can take up to five years, whereas a small molecule generic costs $2 to $5 million in R&D and can take up to three years.
What is the biosimilar “patent dance?”
With the BCPIA, Congress has made a strong effort to help improve affordability and accessibility of clinically powerful biologics. In a sense, they have sought to improve upon the Hatch-Waxman Act used for generics by considering the unique issues that may arise as biologics reach the end of their 12-year exclusivity period. Part of this intricate vetting structure is termed the “patent dance.” The act specifies several steps to follow: 1) After the FDA accepts an abbreviated Biologics License Application (aBLA), the BCPIA stipulates that the biosimilar maker “shall” provide its aBLA and manufacturing information to the reference biologic maker. 2) The reference biologic maker sends a list of patents that may be infringed by the biosimilar maker. 3) The biosimilar maker provides its responses. The steps continue until contentions are resolved.
Additional components of the dance are also in play. For example, the BCPIA indicates that biosimilar makers must provide 180-day notice before marketing their product. The provision may have been intended to help resolve disputes before a biosimilar’s market launch, before the assessment of damages (i.e., losses to either party from the potential revenue of marketed products) complicates litigation. Critics have argued, however, that if biosimilar makers may only give notice after FDA approval of their biosimilar, the provision essentially extends the reference biologic’s patent protection and delays market availability for a competitor. Several biosimilar makers are now providing their notice prior to FDA approval, and the notice has been a component of lawsuits brought against biosimilar makers.
It may seem confusing that a patent dispute surrounding a single biosimilar product can become so complicated. But it’s important to consider how patents function with biologics. The patent for a new chemical entity is well understood, but for a biologic, many other aspects of the manufacturing process and product use can be patented—and ultimately disputed. This expansive landscape can produce dozens of patents surrounding a single product. AbbVie’s Humira®, for example, has recently received a great deal of attention for its protection by over 100 patents related to the product.
How have information disclosure and patent litigation for biosimilars played out so far?
Experience to date has revealed that while some biologic manufacturers follow the patent dispute guidance, others seem to be inventing new steps or sidestepping those laid out in the BCPIA altogether. In the Amgen v. Sandoz case, which began in 2014 and was ultimately resolved in 2017, Sandoz refused to provide its aBLA for Zarxio®—a biosimilar to Amgen’s Neupogen®. Amgen then sued under both federal and California state law. The case ultimately landed in the Supreme Court and led to a key decision—compliance with the BCPIA’s information disclosure (i.e., the “patent dance”) cannot be enforced under either federal or state law. Instead, if a biosimilar maker does not follow the patent dance, a reference biologic maker can sue the biosimilar maker for patent infringement. One of the more recent cases, Amgen v. Adello, again involves a biosimilar to Amgen’s Neupogen®. The suit by Amgen, filed in March 2018, is essentially blind (i.e., it does not specify all patent infringements by Adello) due to minimal information disclosure by Adello. In addition, Adello raises another contested provision of the BCPIA: the 180-day marketing notice. Adello provided this notice prior to the FDA’s approval of its biosimilar. Whether this notice may be given before FDA approval, or only after, remains a further point of contention in interpreting the act.
What’s next in biosimilar patent litigation?
With the BCPIA in its nascent stages, we are likely to see its application redefined in the years to come, just as Hatch-Waxman evolved in the generics market. Currently, biosimilar and reference biologic makers engage with the act’s provisions after careful consideration of how those provisions will affect their products’ time on market and their future products’ regulatory and marketing success. In the meantime, legislators are also assessing whether the structure of the BCPIA adequately provides a framework for achieving Congress’s goal of increasing biologic affordability and accessibility in the U.S.
Looking ahead, at least eight additional potential patent disputes are anticipated in 2018. Actions taken by private parties and stakeholders in the U.S. government will continue to define how the BCPIA is interpreted and applied in the important biologics space.
Over 58 million Americans are enrolled in Medicare. Of those, 20.2 million, approximately 34%, receive their Medicare through a private Medicare Advantage (MA) plan rather than traditional fee-for-service (FFS) Medicare. That’s why the recent announcement by the Centers for Medicare and Medicaid Services (CMS) that MA data will be available to researchers beginning this fall is so exciting.
To date, research on MA has been limited because almost no data are available. CMS publishes aggregate MA enrollment by county. Researchers can also get some information on MA plan quality through HEDIS, and some have obtained small amounts of MA claims data. But detailed data have not been made available in a comprehensive or representative way.
This stands in contrast to data for FFS Medicare, which has long been available in a variety of formats. “Research identifiable files” (RIFs) are the most useful for researchers, providing patient-level enrollment and encounter data. Enrollment data contains patient-level information such as demographics, while encounter data contains utilization information like place of service, diagnoses, procedures, and prices. These files can also be linked to other datasets (such as NIH’s SEER (Surveillance, Epidemiology, and End Results), the national retirement survey, and the national death index) to create rich datasets.
The robust FFS data have supported a lot of research on FFS Medicare. But what’s happening in FFS Medicare doesn’t necessarily reflect what’s happening in MA, for a variety of reasons. One is that people who select into MA are different from people who select into FFS Medicare. MA plans often offer lower premiums than FFS Medicare, and some MA plans have enhanced benefits like dental and vision coverage and wellness programs like Silver Sneakers. MA enrollees also accept a managed care approach, which may include more limited or tiered provider networks and more restrictions on services through referral requirements.
Payment to MA plans also creates different incentives in the way plans and providers behave. Plans are paid a capitated per patient per month rate, plus additional payments for risk adjustment and quality. Despite delivering care at a lower cost, MA has ended up costing the government more than FFS Medicare.
In some states, MA represents such a large share of the Medicare market that trends in FFS are only part of the picture. In Minnesota 56% of Medicare enrollees choose an MA plan, while over 40% do in both Florida and California. Research is particularly crucial in these states.
The MA data will help address these gaps and countless others related to our understanding of the Medicare market.
In late fall, MA RIF files will be available for 2015 encounters, which will cover six settings: inpatient, skilled nursing facility (SNF), home health, institutional outpatient, carrier, and durable medical equipment. Part D data (pharmacy) are available separately. The data are expected to be updated annually. The MA RIF data can be linked to all other CMS files using a beneficiary ID number, which means researchers can identify unique individuals across government insurance types. This allows researchers to investigate characteristics and drivers of Medicare switching behaviors (between FFS and MA), and even link enrollees who participate in other government programs such as Medicaid.
A plan characteristics file will also be available, allowing researchers to analyze or control for plan level factors. These include information on plan premiums, cost sharing tiers, service area, and special plan types like special needs plans (SNPs).
As with any claims data, there are limitations. Claims data exist to maintain records of reimbursement, so they capture the variables that matter for paying bills well but are weaker elsewhere. Diagnosis data include only the diagnoses a provider documented at the visit, so they are sometimes incomplete and not always sufficiently specific. Claims don’t provide physical measurements like blood pressure or BMI. They also don’t provide any information about people who haven’t visited a doctor, services that don’t bill the plan (such as vaccinations received at a supermarket pharmacy), or services not covered by Medicare. For beneficiaries without Part D coverage, pharmacy data are not available.
From a practical perspective, there are significant barriers to obtaining Medicare data, so the files are best suited to groups with long-term research plans in the Medicare space. In addition to a lengthy and involved application process and the significant time needed to understand the files, the data are quite expensive. Price is based on the types and size of files requested, but in general researchers should expect a sample to cost several thousand dollars per year of data requested. Unlike many data sources, CMS does not offer lower fees for students or researchers.
Despite these limitations, the MA data release is a big opportunity for researchers. If you’re interested in obtaining Medicare data, information can be found through ResDAC, the CMS contractor that provides assistance with CMS data for academic, non-profit, and government researchers.
On May 11th, 2018, the Trump Administration released an outline of their plan to address rising pharmaceutical prices in the U.S. The plan intends to increase competition, improve negotiations, change incentives, and decrease out-of-pocket costs. However, it has been criticized as too moderate and as ignoring issues that experts have identified as key problems in the healthcare market.
Of concern is the dangerous pattern of increased federal spending coupled with increased out-of-pocket costs for pharmaceuticals. As patients are burdened with higher drug costs, they are less likely to adhere to their medications, which can result in poor outcomes. According to Dr. Watanabe:
“What we’re seeing with the medications that Medicare spends the most on is a troubling pattern of higher federal spend in constant dollars coupled with increased out-of-pocket spend by patients. Yet, fewer patients receiving the high-spend medications, because these drugs are often for less common conditions.”
Dr. Watanabe was on the committee of the National Academies of Sciences, Engineering, and Medicine that drafted the report Making Medicines Affordable: A National Imperative. One of the committee’s key recommendations to address the high cost of pharmaceuticals is that government agencies (e.g., Medicare) should be allowed to use their market power to negotiate lower drug prices. Other recommendations include reducing incentives to use costly pharmaceuticals, eliminating direct-to-consumer advertising, reforming health insurance plan structure, and re-evaluating discount programs (e.g., 340B) to ensure that participating facilities are meeting the program’s goal of helping vulnerable patients. Although the Trump Administration’s plan reflected some of the proposals from the National Academies document, it falls short of offering firm resolutions. Dr. Watanabe stated that:
“The key elements required are transparency and informed public dialogue. If we could shed light on the actual flow of the dollars and the practices used that absorb spending, then rational approaches can be taken to help patients better get the care they deserve and society to devise a sustainable system for delivering care by medications. It’s too hard to measure in the dark.”
Despite the criticisms, the Trump Administration’s announcement indicates that the nation is finally beginning to address the problem of unchecked increases in drug costs. The challenge will be to implement effective policies that continue to encourage innovation while addressing rising costs in a timely manner. However, Secretary of Health and Human Services, Alex M. Azar, cautioned that any dramatic change would take months if not years to implement.
As drug prices continue to increase, U.S. citizens have to continue shouldering the economic burden of an inefficient health care market. Health care policy makers agree that this is not sustainable, and that wide-scale reform is needed. Additionally, more nonpartisan discussion is needed to develop health care reforms that benefit the vast majority of U.S. citizens. Whether the Trump Administration’s plan is going to make an impact remains to be seen.
One of the most distinctive applications of Bayesian statistics is estimating unknown values that depend upon other unknown values. By taking advantage of the Bayesian ability to integrate prior knowledge into its models, you can develop parameter estimates using priors that are little more than a guess.
This application of Bayesian statistics is commonly seen in diagnostics. When there isn’t a gold standard test that allows simple comparisons, Bayesian models are able to use data on test results to estimate the performance of these tests and the prevalence of the disease. Whether it’s a new test or a new population where the test is unproven, these analyses allow us to glimpse important aspects of diagnostic usage with only scant data.
The pioneering paper that developed these methods is titled “Bayesian estimation of disease prevalence and the parameters of diagnostic tests in the absence of a gold standard,” by Lawrence Joseph, Theresa Gyorkos, and Louis Coupal. They collected the results of two tests for the Strongyloides parasite among Cambodian immigrants to Canada in the 70s. Since there was no knowledge of how common the parasite was in this group, they used an uninformative prior for its prevalence, but were able to elicit vague priors about the two tests’ performance from clinical experts. From these priors they built distributions, which they then ran through a Gibbs sampler.
A Gibbs sampler is an algorithm that repeatedly draws each unknown parameter – in our case, test performance and prevalence – from its conditional distribution given the data and the current values of the other parameters. Because of the way the sampler moves from one parameter estimate to the next, it devotes most of its samples to high-likelihood scenarios. The resulting parameter estimates are essentially histograms of the values the algorithm has sampled for each parameter.
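To make the idea concrete, here is a minimal Gibbs sampler in Python for the single-test version of this problem. It is a sketch, not the JAGS code from the tutorial: the data are simulated, the prevalence prior is flat, and the Beta priors on sensitivity and specificity are stand-ins for the kind of expert-elicited priors the paper relied on.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate one imperfect test (hypothetical values, not from the paper):
# true prevalence 0.30, sensitivity 0.90, specificity 0.95.
n = 1000
disease = rng.random(n) < 0.30
test = np.where(disease, rng.random(n) < 0.90, rng.random(n) > 0.95)

def gibbs(test, iters=4000, burn=1000):
    """Sample prevalence, sensitivity, and specificity by alternating between
    (1) imputing each subject's latent disease status and (2) drawing the
    parameters from their Beta full conditionals."""
    n = len(test)
    prev, se, sp = 0.5, 0.8, 0.8  # arbitrary starting values
    draws = []
    for it in range(iters):
        # P(diseased | test result) under the current parameter values
        p_pos = prev * se / (prev * se + (1 - prev) * (1 - sp))
        p_neg = prev * (1 - se) / (prev * (1 - se) + (1 - prev) * sp)
        z = rng.random(n) < np.where(test, p_pos, p_neg)  # latent disease status
        # Beta full conditionals: flat prior on prevalence, mildly informative
        # priors on test performance (needed for identifiability with one test).
        prev = rng.beta(1 + z.sum(), 1 + n - z.sum())
        se = rng.beta(9 + (z & test).sum(), 1 + (z & ~test).sum())
        sp = rng.beta(19 + (~z & ~test).sum(), 1 + (~z & test).sum())
        if it >= burn:
            draws.append((prev, se, sp))
    return np.array(draws)

draws = gibbs(test)
print(draws.mean(axis=0))  # posterior means, roughly (0.30, 0.90, 0.95)
```

Note how each full conditional is just a Beta update on counts of the imputed disease statuses; this is what makes Gibbs sampling so natural for this model.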
JAGS is a commonly used Gibbs sampler; its name stands for “Just Another Gibbs Sampler.” It’s not the only one, but it has a convenient R interface and a lot of literature to support its use. I recently wrote a tutorial on the JAGS R interface that recreates the Joseph, Gyorkos, and Coupal paper. You don’t need any datasets to run it, as you can easily simulate the inputs of the two Strongyloides tests.
The first part deals with gathering estimates from the two parasite tests independently. This means modeling each test’s results as Bernoulli draws, with a success probability that depends on the test’s sensitivity and specificity as well as the disease prevalence.
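That success probability has a simple closed form: an apparent positive is either a true positive or a false positive. A small illustration (the parameter values here are hypothetical, not taken from the paper):

```python
def apparent_positive_rate(prev, se, sp):
    """Probability that a randomly chosen subject tests positive:
    true positives (diseased and detected) plus false positives
    (healthy but misclassified)."""
    return prev * se + (1 - prev) * (1 - sp)

# 30% prevalence, 90% sensitivity, 95% specificity:
print(apparent_positive_rate(0.30, 0.90, 0.95))  # -> 0.305
```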
The second half of the tutorial deals with using the data from the two tests together. This is significantly more complex, as we need to model the joint probability of each possible combination of the two test results. To do this, we read in the results of the tests on each patient. However, since we’re reading in the results directly, we can’t assign a distribution to them. Instead, we’ll learn to specify a likelihood computed directly from the observed data and to make sure that this custom likelihood actually drives the model.
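Under the standard simplifying assumption that the two tests are conditionally independent given true disease status, those joint cell probabilities can be written out directly. A sketch with hypothetical parameter values:

```python
from itertools import product

def cell_probs(prev, se1, sp1, se2, sp2):
    """Joint probability of each (test1, test2) result pair, assuming the
    two tests are conditionally independent given true disease status."""
    probs = {}
    for t1, t2 in product((1, 0), repeat=2):
        p_if_diseased = (se1 if t1 else 1 - se1) * (se2 if t2 else 1 - se2)
        p_if_healthy = (1 - sp1 if t1 else sp1) * (1 - sp2 if t2 else sp2)
        probs[(t1, t2)] = prev * p_if_diseased + (1 - prev) * p_if_healthy
    return probs

# Hypothetical values: 30% prevalence; test 1 is 90%/95%, test 2 is 80%/90%.
p = cell_probs(0.30, 0.90, 0.95, 0.80, 0.90)
print(p[(1, 1)])        # -> 0.2195
print(sum(p.values()))  # the four cells partition the sample space, so they sum to 1
```

These four cell probabilities are exactly what a multinomial likelihood over the observed pairs of test results would be built from.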
To learn more and see the full details, go check out the tutorial on my GitHub page and feel free to ask me any questions that come to mind!
Editor’s note: This is the second in an ongoing series of interviews we’ve planned for the students graduating from the CHOICE Institute where we’ll get their thoughts on their grad school and dissertation experiences.
Solomon Lubinga is a pharmacist and an applied health economist. After graduating with his PhD from the Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute in 2017, he became a senior fellow at the CHOICE Institute at the University of Washington, working with Dr. Josh Carlson in collaboration with the Institute for Clinical and Economic Review (ICER). He is interested in decision modelling, value of information/implementation research, and the econometric and health policy applications of discrete choice models.
decision theories to study the incentives that drive the uptake of medical male circumcision (MMC) for HIV prevention in Uganda. My hypothesis was that a model combining factors from both decision theories would not only more accurately predict MMC decisions, but also be a powerful tool for predicting the potential impacts of different MMC demand creation strategies.
How did you arrive at that topic? When did you know that this is what you wanted to study?
I became interested in the intersection of economics and psychology early on in the PhD program. I suppose this was because of my own proclivity to act irrationally even though I considered myself a well-informed person. This led me to ask why individuals in lower-income countries in general do not value preventive health interventions. This specific topic built on a prior contingent valuation study estimating willingness to pay (WTP) and willingness to accept payment (WTAP) for safe MMC among men in high-HIV-risk fishing communities in Uganda. The results of that analysis indicated low demand (WTP) and high potential incentive value (WTAP) for MMC, suggesting that a high WTAP (a de facto increase in the price of MMC) may result in an unfavorable incremental cost-effectiveness or benefit-to-cost ratio for MMC. I was therefore interested in studying the relative roles of economic and psychological incentives in demand for MMC.
What was your daily schedule like when you were working on your dissertation?
I never had a set schedule while I worked on my dissertation. I was also a teaching assistant (TA) for the online health economics course offered by CHOICE. I spent a lot of time in Uganda collecting my data. I would spend the day in the field (8:00am – 5:00pm) and the evenings (7:00pm – 11:00pm) performing my TA duties. It turned out that this was convenient (but challenging) because of the time difference between Uganda and the west coast. I also travelled to the UK twice for a choice modelling course, which was a great help with my dissertation. When I was in Seattle, I generally combined work on my dissertation with my teaching assistant responsibilities at the UW, with no set schedule. I simply gave what was more urgent the priority.
If you are willing to share, what was the timeline for your dissertation? And what determined that timeline?
I submitted my short proposal sometime in April, 2015. I defended my dissertation in August, 2017. Two major factors determined my timeline. First, the death of a close family member motivated me to take some personal time. Second, although I was fortunate to receive funding for my data collection activities, it took almost 8 months (between December 2015 and October 2016) to receive international clearance for the data collection activities.
How did you fund your dissertation?
As I mentioned, I was fortunate to receive funding for my data collection activities through a grant awarded to my dissertation advisor.
What will come next for you (or has come next for you)? What have you learned about finding work after school?
I am interested in academic positions in universities in the US, or other quasi-academic institutions (e.g., research institutes or global organizations that conduct academic-style research). As an international student, the main lesson I have learned is “to synchronize the completion of your studies with the job market cycle, especially if you are interested in academic positions in the US”.
If you live in one of the states that have legalized recreational marijuana, or in a state that is considering it, you have possibly seen one of the following billboards:
The simple black background and white lettering make them pop, but the statements themselves are even more captivating. The content covers contentious, hot-button topics: the opioid epidemic, health spending, and marijuana legalization. But what gets left out is the context: for most readers, these statements imply causality, despite there being limited evidence for these relationships to date.
To the casual observer, these are impressive, exciting statements. A beneficial effect of a historically outlawed and much maligned substance is indeed fascinating! A more cautious observer might be wondering about the source of these claims, and, indeed the fine print appears to contain references! The more cautious observer might now be appeased.
But really, we should all pause here, for two reasons:
Firstly, these billboards are advertising. I will not get into further discussion of advertising, political or otherwise; however, I will note that “Weedmaps,” the billboard producer, is poised to be your go-to search engine and rating site for marijuana strains and producers.
Secondly, causality is complex and elusive. The two studies cited on these particular billboards (Bachhuber et al. 2014; Bradford & Bradford 2017) are ecological in design, meaning they use aggregated data (in this case, states) as the unit of analysis. The variables in these analyses are features of the states, including the main variable of interest, implementation of medical cannabis laws (note that “medical” is missing from both billboards). The research design is appropriate for questions about the average effects of medical cannabis laws on an outcome of interest (more on this later). But these findings are subject to residual confounding at the state level. In addition, they are subject to the ecological fallacy in their interpretation, and as we all know, interpretation is what matters most.
Both studies include other state-level variables that might explain the change in their outcomes over time, such as the implementation of statewide Prescription Drug Monitoring Programs (PDMPs). PDMPs came online in many states over the period studied and, based on similar analytic designs, may be largely responsible for improvements in opioid outcomes. Both studies account for PDMPs, and the first also considers several other opioid laws and policies that effectively restrict availability. The authors also performed several nice robustness checks. For example, a secondary model was used to adjust for state-level linear time trends in the outcome (i.e., including a random slope for each state). The authors note this technique may account for changes in concepts that are difficult to measure, such as attitudes, and for other time-varying confounders. The first study also employed negative controls: death rates from conditions presumably not associated with cannabis (e.g., heart disease and septicemia), which the authors would expect to remain unaffected by legalization.
Despite these nice checks, it is unlikely that these analyses accounted for all potential confounding variables, especially those that change over time. And this is almost always the case, as it’s virtually impossible to observe, let alone control for, all sources of confounding. Adjusting for linear trends produced results that were only marginally statistically significant. Especially in dealing with states, the inclusion of a large number of explanatory variables quickly becomes a high-dimensional problem, where there are only 50 states with a few years of data, but many more variables than that. The question then becomes whether this residual confounding is enough to change our interpretation of these studies.
Interpretation of these studies (especially in the media) may suffer from the ecological fallacy, a logical fallacy in which inference about a group is assumed to translate into inference about an individual’s behavior or risk. From these findings, we cannot make any inference about individuals’ patterns of opioid and cannabis use (i.e., substitution) or individuals’ underlying risk of negative opioid outcomes (i.e., substitution effects). In other words, we cannot link marijuana legality to the use patterns of individuals.
So where do we go from here?
The past decade has been something of an ecological study renaissance. This is not a bad thing. Such studies are useful for hypothesis generation, and population-level risk factors are very relevant in public health and medicine. Population-level risk factors may be important effect modifiers or causes of exposure to individual risk factors. Differences in state laws can make for great “natural experiments,” where groups of people are “randomized” to an exposure by a natural process and a pre-post assessment can be made.
But most importantly, it comes down to inference. Inference from these studies might inform marijuana policy but should not inform interventions on individuals. Lots of discussion has been generated by these studies, and there is a great deal of room for misinterpretation (sample headline: “How marijuana is saving lives in Colorado”).
On the bright side, the scientific community recognizes this problem, and it is likely that additional studies of the individual- and population-level effects will be undertaken. A recent well-designed study from RAND (Powell et al. 2018) replicated Bachhuber et al., finding that adding more state-level variables and additional years of data to the model nullifies the effect of medical marijuana laws on opioid overdose mortality. Moreover, the authors identified that a more meaningful effect on opioid outcomes is achieved through protected and operational dispensaries, i.e. access, where the largest effect was seen during a time period of relatively lax regulation of dispensaries in California, Washington, and Colorado.
How to ensure that other new investigations will be high quality and unbiased is another question. Regardless, the tide for marijuana research appears to be turning. As more and more studies are published, it is imperative that researchers are clear about their analysis limitations, especially when their results might end up on a billboard.