Biosimilar Litigation in the United States: Redefining the Patent Dance

By Simi Grewal, MHS, PhD Student

In recent years, the U.S. pharmaceutical world has been abuzz with the emergence of biosimilars—products that are very similar, but not identical, to a reference biologic product. To date, twelve biosimilars have received FDA approval under the Biologics Price Competition and Innovation Act (BPCIA), which was passed by Congress in 2010. However, among those biosimilars that have been licensed by the FDA, only two have been marketed. While many factors influence the time lag between FDA approval and biosimilar marketing, complex patent litigation may well contribute to delays in market launch. So, let’s explore the intricate information exchange surrounding FDA biosimilar application reviews and key litigation decisions in the biosimilar landscape.

Image credit: Promega

What makes biosimilars different from generics?

First, it’s important to understand what makes biosimilars and their reference biologic products so unique. Unlike a generic for a small molecule drug, a biologic is manufactured in a living system—a complex process that is extremely challenging to replicate exactly and thus yields products that are similar to, but not exact copies of, a reference biologic. The process is also expensive. On average, estimated R&D costs for a biosimilar range from $40 million to $300 million, and development can take up to five years, whereas a small molecule generic costs $2 to $5 million in R&D and can take up to three years.

What is the biosimilar “patent dance”?

With the BPCIA, Congress has made a strong effort to improve the affordability and accessibility of clinically powerful biologics. In a sense, it has sought to improve upon the Hatch-Waxman Act used for generics by considering the unique issues that may arise as biologics reach the end of their 12-year exclusivity period. Part of this intricate vetting structure is termed the “patent dance.” The act specifies several steps to follow. 1) After the FDA accepts an abbreviated Biologics License Application (aBLA), the BPCIA stipulates that the biosimilar maker “shall” provide its aBLA and manufacturing information to the reference biologic maker. 2) The reference biologic maker sends a list of patents that may be infringed by the biosimilar maker. 3) The biosimilar maker provides its responses. The steps continue until contentions are resolved.

Additional components of the dance are also in play. For example, the BPCIA indicates that biosimilar makers must provide 180-day notice prior to marketing their product. The provision may have been intended to aid in resolving disputes before biosimilar market launch, since the assessment of damages (i.e., losses to either party from potential revenue of marketed products) complicates litigation. However, critics have argued that if biosimilar makers are only allowed to provide notice after FDA approval of their biosimilar, the provision essentially extends the reference biologic’s patent protection and delays market availability for a competitor. Several biosimilar makers are now providing their notice prior to FDA approval, and the notice has been a component of lawsuits brought against biosimilar makers.

It may seem confusing that a patent dispute surrounding a single biosimilar product can become so complicated. But it’s important to consider how patents function with biologics. The patent for a new chemical entity is well understood, but for a biologic, many other aspects of the manufacturing process and product use can be patented—and ultimately disputed. This expansive landscape can lead to dozens of patents surrounding a single product. AbbVie’s Humira®, for example, has recently received a great deal of attention for being protected by more than 100 patents.

How have information disclosure and patent litigation for biosimilars played out so far?

Experience to date has revealed that while some biologic manufacturers follow the patent dispute guidance, others seem to be setting new steps or circling around those laid out in the BPCIA altogether. In the Amgen v. Sandoz case, which began in 2014 and was ultimately resolved in 2017, Sandoz refused to provide its aBLA for Zarxio®—a biosimilar to Amgen’s Neupogen®. Amgen then sued under both federal and California state law. The case ultimately landed in the Supreme Court and led to a key decision—compliance with the BPCIA’s information disclosure (i.e., the “patent dance”) cannot be enforced by injunction under federal law. Instead, if a biosimilar maker does not follow the patent dance, a reference biologic maker can sue the biosimilar maker for patent infringement. One of the more recent cases, Amgen v. Adello, again involves a biosimilar to Amgen’s Neupogen®. The suit by Amgen, filed in March 2018, is essentially blind (i.e., it does not specify all patent infringements by Adello) because of minimal information disclosure by Adello. In addition, Adello tests another flex point of the BPCIA: the 180-day marketing notice. Adello provided this notice prior to the FDA’s approval of its biosimilar. Whether this notice may be given before FDA approval, or only after, remains a further point of contention in interpreting the act.

What’s next in biosimilar patent litigation?

With the BPCIA in its nascent stages, we are likely to see its application redefined in the years to come, just as Hatch-Waxman evolved in the generics market. Currently, biosimilar and reference biologic makers engage with the act’s provisions after careful consideration of how they will impact their products’ time on market and future products’ regulatory and marketing success. In the meantime, legislators are also assessing whether the structure of the BPCIA adequately provides a framework for achieving Congress’s goal of increasing biologic affordability and accessibility in the U.S.

Looking ahead, at least eight additional patent disputes are anticipated in 2018. Actions taken by private parties and stakeholders in the U.S. government will continue to define how the BPCIA is interpreted and applied in the important biologics space.

The Release of Medicare Advantage Data: What Does It Mean for Researchers?

Over 58 million Americans are enrolled in Medicare. Of those, 20.2 million, approximately 34%, receive their Medicare through a private Medicare Advantage (MA) plan rather than traditional fee-for-service (FFS) Medicare. That’s why the recent announcement by the Centers for Medicare and Medicaid Services (CMS) that MA data will be available to researchers beginning this fall is so exciting.

To date, research on MA has been limited because almost no data have been available. CMS publishes aggregate MA enrollment by county. Researchers can also get some information on MA plan quality through HEDIS, and some have obtained small amounts of MA claims data. But detailed data have not been made available in a comprehensive or representative way.

Image from Kaiser Family Foundation

This stands in contrast to data for FFS Medicare, which have long been available in a variety of formats. “Research identifiable files” (RIFs) are the most useful for researchers, providing patient-level enrollment and encounter data. Enrollment data contain patient-level information such as demographics, while encounter data contain utilization information like place of service, diagnoses, procedures, and prices. These files can also be linked to other datasets (such as NCI’s SEER (Surveillance, Epidemiology, and End Results) registry, the national retirement survey, and the National Death Index) to create rich datasets.

The robust FFS data have supported a lot of research on FFS Medicare. But what’s happening in FFS Medicare doesn’t necessarily reflect what’s happening in MA, for a variety of reasons. One is that people who select into MA are different from people who select into FFS Medicare. MA plans often offer lower premiums than FFS Medicare, and some MA plans have enhanced benefits like dental and vision coverage and wellness programs like SilverSneakers. MA enrollees also accept a managed care approach, which may include more limited or tiered provider networks and more restrictions on services through referral requirements.

Payment to MA plans also creates different incentives in the way MA plans and providers behave. Plans are paid a capitated per-patient-per-month rate, plus additional payments for risk adjustment and quality. MA has ended up costing the government more than FFS Medicare, even though plans deliver care at a lower cost.

In some states, MA represents such a large share of the Medicare market that trends in FFS are only part of the picture. In Minnesota, 56% of Medicare enrollees choose an MA plan, while over 40% do in both Florida and California. Research is particularly crucial in these states.

The MA data will help address these gaps and countless others related to our understanding of the Medicare market.

In late fall, MA RIF files will be available for 2015 encounters, which will cover six settings: inpatient, skilled nursing facility (SNF), home health, institutional outpatient, carrier, and durable medical equipment. Part D data (pharmacy) are available separately. The data are expected to be updated annually. The MA RIF data can be linked to all other CMS files using a beneficiary ID number, which means researchers can identify unique individuals across government insurance types. This allows researchers to investigate characteristics and drivers of Medicare switching behaviors (between FFS and MA), and even link enrollees who participate in other government programs such as Medicaid.

A plan characteristics file will also be available, allowing researchers to analyze or control for plan level factors. These include information on plan premiums, cost sharing tiers, service area, and special plan types like special needs plans (SNPs).

As with any claims data, there are limitations. Claims data exist to maintain records of reimbursement, so they do a good job of capturing variables that are important for paying bills but are weaker elsewhere. Diagnosis data only include diagnoses that were documented by a provider at the visit, so they are sometimes incomplete and not always sufficiently specific. Claims don’t provide physical measurements like blood pressure or BMI. They also don’t provide any information about people who haven’t visited a doctor, services that don’t bill the plan (such as vaccinations received at a supermarket pharmacy), or services not covered by Medicare. For beneficiaries without Part D coverage, pharmacy data are not available.

From a practical perspective, there are significant barriers to obtaining Medicare data, so it is best suited to groups with long-term research plans in the Medicare space. In addition to a lengthy and involved application process and significant time to understand the files, the data are quite expensive. Price is based on the types and size of files requested, but in general researchers should expect a sample to cost several thousand dollars per year of data requested. Unlike many data sources, CMS does not offer lower fees for students or researchers.

Despite these limitations, the MA data release is a big opportunity for researchers. If you’re interested in obtaining Medicare data, information can be found through ResDAC, the CMS contractor that provides assistance with CMS data for academic, non-profit, and government researchers.

Trump Administration’s Blueprint to Address Drug Prices


On May 11, 2018, the Trump Administration released an outline of its plan to address rising pharmaceutical prices in the U.S. The plan intends to increase competition, improve negotiations, change incentives, and decrease out-of-pocket costs. However, it has been criticized as too moderate and as ignoring issues that experts have identified as key problems in the healthcare market.

Dr. Jonathan H. Watanabe, an alumnus of The CHOICE Institute and associate professor at UC San Diego Skaggs School of Pharmacy and Pharmaceutical Sciences, was recently interviewed by Mari Payton of NBC7 in San Diego, CA about the Trump Administration’s proposed plan. Dr. Watanabe stated that the U.S. can’t sustain current healthcare prices and argued that we need the ability to negotiate more reasonable prices with the pharmaceutical industry. Currently, Medicare does not have the power to negotiate lower prices with drug makers, as other countries and private healthcare plans do.

Medicare’s inability to negotiate for lower prices highlights an important limitation of the program. Other federal agencies, such as the Department of Veterans Affairs, are able to directly negotiate lower drug prices; Medicare is not. The Trump Administration’s plan doesn’t address price control or allow Medicare to leverage its market power to negotiate prices directly with pharmaceutical manufacturers, a key problem in today’s healthcare industry. Instead, the plan intends to reform Medicare Part D to allow plan sponsors, rather than Medicare, to negotiate lower prices with drug makers.

Of concern is the dangerous pattern of increased federal spending coupled with increased out-of-pocket costs for pharmaceuticals. As patients are burdened with higher drug costs, they are less likely to adhere to their medications, which can result in poor outcomes. According to Dr. Watanabe:

“What we’re seeing with the medications that Medicare spends the most on is a troubling pattern of higher federal spend in constant dollars coupled with increased out-of-pocket spend by patients. Yet, fewer patients are receiving the high-spend medications, because these drugs are often for less common conditions.”

Dr. Watanabe served on the committee of the National Academies of Sciences, Engineering, and Medicine that drafted the report Making Medicines Affordable: A National Imperative. One of the committee’s key recommendations to address the high cost of pharmaceuticals is that government agencies (e.g., Medicare) should be allowed to use their market power to negotiate lower drug prices. Other recommendations include reducing incentives to use costly pharmaceuticals, eliminating direct-to-consumer advertising, reforming health insurance plan structure, and re-evaluating discount programs (e.g., 340B) to ensure that participating facilities are meeting the program’s goal of helping vulnerable patients. Although the Trump Administration’s plan reflected some of the proposals from the National Academies report, it falls short of firm resolutions. Dr. Watanabe stated that:

The key elements required are transparency and informed public dialogue. If we could shed light on the actual flow of the dollars and the practices used that absorb spending, then rational approaches can be taken to help patients better get the care they deserve and society to devise a sustainable system for delivering care by medications. It’s too hard to measure in the dark.

Despite the criticisms, the Trump Administration’s announcement indicates that the nation is finally beginning to address the problem of unchecked increases in drug costs. The challenge will be to implement effective policies that continue to encourage innovation while addressing rising costs in a timely manner. Secretary of Health and Human Services Alex M. Azar has cautioned, however, that any dramatic change would take months, if not years, to implement.

As drug prices continue to increase, U.S. citizens must continue shouldering the economic burden of an inefficient health care market. Health care policy makers agree that this is not sustainable and that wide-scale reform is needed. Additionally, more nonpartisan discussion is needed to develop health care reforms that benefit the vast majority of U.S. citizens. Whether the Trump Administration’s plan will make an impact remains to be seen.


Test performance estimates without a gold standard: a short tutorial on JAGS

Photo credit: Flickr user mattbuck

One of the more distinctive applications of Bayesian statistics is finding estimates for unknown values that depend upon other unknown values. By taking advantage of the Bayesian ability to integrate prior knowledge into its models, you can develop parameter estimates using priors that are little more than a guess.

This application of Bayesian statistics is commonly seen in diagnostics. When there isn’t a gold standard test that allows simple comparisons, Bayesian models are able to use data on test results to estimate the performance of these tests and the prevalence of the disease. Whether it’s a new test or a new population where the test is unproven, these analyses allow us to glimpse important aspects of diagnostic usage with only scant data.

The pioneering paper that developed these methods is titled “Bayesian estimation of disease prevalence and the parameters of diagnostic tests in the absence of a gold standard,” by Lawrence Joseph, Theresa Gyorkos, and Louis Coupal. They collected the results of two tests for the Strongyloides parasite among Cambodian immigrants to Canada. Since there was no knowledge of how common the parasite was in this group, they used an uninformative prior for its prevalence, but they were able to elicit priors about the two tests’ performance from clinical experts. From these priors they built distributions, which they then ran through a Gibbs sampler.

A Gibbs sampler is a program that runs repeated sampling to find the parameters – in our case, test performance and prevalence – that make the most sense in light of the data we have. Because of the way the sampler moves from parameter estimate to parameter estimate, it devotes most of its samples to high-likelihood scenarios. The parameter estimates are therefore essentially histograms of the samples the algorithm has drawn for each parameter value.
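
To make those mechanics concrete, here is a minimal sketch of a data-augmentation Gibbs sampler in plain Python for a single imperfect test. All the numbers (sample size, the “true” values used to simulate data, and the Beta priors on sensitivity and specificity) are illustrative assumptions, not values from the paper; with only one test, informative priors on test performance are needed for the model to be identifiable.

```python
import random

random.seed(7)

# Simulate one imperfect test applied to n subjects. All numbers here are
# illustrative assumptions, not values from Joseph, Gyorkos, and Coupal.
n = 1000
true_prev, true_se, true_sp = 0.30, 0.85, 0.95
disease = [random.random() < true_prev for _ in range(n)]
results = [random.random() < (true_se if d else 1 - true_sp) for d in disease]

def gibbs(results, iters=2000, burn=500):
    """Data-augmentation Gibbs sampler for (prevalence, sensitivity,
    specificity). Each sweep (1) imputes every subject's latent disease
    status given the current parameters, then (2) redraws the parameters
    from their Beta full conditionals given the imputed statuses."""
    prev, se, sp = 0.5, 0.8, 0.9  # arbitrary starting values
    draws = []
    for it in range(iters):
        n_dis = tp = tn = 0
        for t in results:
            if t:    # positive test: P(diseased | +)
                p = prev * se / (prev * se + (1 - prev) * (1 - sp))
            else:    # negative test: P(diseased | -)
                p = prev * (1 - se) / (prev * (1 - se) + (1 - prev) * sp)
            if random.random() < p:
                n_dis += 1
                tp += t           # diseased and test-positive
            else:
                tn += not t       # non-diseased and test-negative
        n_non = len(results) - n_dis
        # Flat Beta(1, 1) prior on prevalence; informative Beta priors on
        # sensitivity and specificity (one test alone cannot identify all
        # three parameters, so the priors carry real weight).
        prev = random.betavariate(1 + n_dis, 1 + n_non)
        se = random.betavariate(85 + tp, 15 + n_dis - tp)
        sp = random.betavariate(95 + tn, 5 + n_non - tn)
        if it >= burn:
            draws.append((prev, se, sp))
    return draws

draws = gibbs(results)
post_prev = sum(d[0] for d in draws) / len(draws)
```

A histogram of the first component of `draws` is exactly the kind of posterior summary described above, and its mean lands close to the 30% prevalence used to simulate the data.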

JAGS is a commonly used Gibbs sampler; its name stands for “Just Another Gibbs Sampler.” It’s not the only one, but it has a convenient R interface and a lot of literature to support its use. I recently wrote a tutorial on its R interface that recreates the Joseph, Gyorkos, and Coupal paper. You don’t need any datasets to run it, as you can easily simulate the inputs of the two Strongyloides tests.

The first part deals with gathering estimates from the two parasite tests independently. This means modeling each test’s results as Bernoulli samples whose probability depends on the test’s sensitivity and specificity, as well as the disease prevalence.
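
That Bernoulli probability, marginalized over each subject’s unknown disease status, can be written in a few lines. This is a plain-Python sketch of the relationship, not code from the tutorial, and the example numbers are assumptions:

```python
def apparent_positive_prob(prev, se, sp):
    """P(test positive) for a test with sensitivity `se` and specificity
    `sp` in a population with disease prevalence `prev`: true positives
    from the diseased plus false positives from the non-diseased."""
    return prev * se + (1 - prev) * (1 - sp)

# e.g., a test with 85% sensitivity and 95% specificity applied where
# 30% of subjects carry the parasite
p = apparent_positive_prob(0.30, 0.85, 0.95)  # ≈ 0.29
```

Each simulated test result is then a Bernoulli draw with this probability, which is the quantity the first-half models are built around.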

The second half of the tutorial deals with using the data from the two tests together. This is significantly more complex, as we need to model the joint probability of each possible combination of results from the two tests. To do this, we’ll need to read in the results of the tests on each patient. However, since we’re reading in the results directly, we can’t assign a distribution to them. Instead, we’ll learn to construct a likelihood directly from the observed data and to ensure that our new likelihood actually informs the model.
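
Under the usual assumption that the two tests are conditionally independent given true disease status, the four joint cell probabilities look like this. This is an illustrative Python sketch with made-up example values; the names are mine, not the tutorial’s:

```python
def joint_cell_probs(prev, se1, sp1, se2, sp2):
    """Probability of each (test1, test2) result pair, mixing over the
    unknown disease status: diseased with probability `prev`,
    non-diseased otherwise."""
    probs = {}
    for t1 in (0, 1):
        for t2 in (0, 1):
            # P(results | diseased): governed by the sensitivities
            p_dis = (se1 if t1 else 1 - se1) * (se2 if t2 else 1 - se2)
            # P(results | not diseased): governed by the specificities
            p_non = ((1 - sp1) if t1 else sp1) * ((1 - sp2) if t2 else sp2)
            probs[(t1, t2)] = prev * p_dis + (1 - prev) * p_non
    return probs

cells = joint_cell_probs(0.3, 0.85, 0.95, 0.7, 0.9)
```

In the latent-class setup of Joseph, Gyorkos, and Coupal, the observed counts in these four cells follow a multinomial distribution with these probabilities, which is the likelihood the second half of the tutorial constructs by hand.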

To learn more and see the full details, go check out the tutorial on my GitHub page and feel free to ask me any questions that come to mind!