Dr. Mark Bounthavong’s Talk on Formulating Good Research Questions

by Enrique M. Saldarriaga and Jacinda Tran

On April 16, 2020, the ISPOR Student Chapter at the University of Washington hosted a webinar on how to formulate good research questions featuring Mark Bounthavong, PharmD, PhD, MPH. He discussed the elements of a compelling research question, shared his process for formulating one, presented best practices, and provided recommendations for students at all stages of their careers.

Dr. Bounthavong is a graduate of the UW CHOICE Institute and a prolific researcher with several years of experience in HEOR. He currently serves as a health economist at the VA Health Economics Resource Center and a Research Affiliate at Stanford University, and his research interests include pharmacoeconomics, outcomes research, health economics, process and program evaluations, econometric methods, and evidence synthesis using Bayesian methods.

Our UW Student Chapter thanks Dr. Bounthavong for his insightful presentation and hopes our fellow researchers find this recording of his presentation to be a helpful resource.

Note: Dr. Bounthavong has authorized the publication of his talk in this post.

Some challenges of working with claims databases

By Nathaniel Hendrix

Real-world evidence has become increasingly important as a data source for comparative effectiveness research, drug safety research, and adherence studies, among other types of research. In addition to sources such as electronic medical records, mobile data, and disease registries, much of the real-world evidence we use comes from large claims databases like Truven Health MarketScan or IQVIA, which record patients’ insurance claims for services and drugs. The enormous size of these databases means that researchers can detect subtle safety signals or study rare conditions where they may not have been able to previously.

Using these databases is not without its challenges, though. In this article, I’ll be discussing a few challenges that I’ve encountered as I’ve worked with faculty on a claims database project in the past year. It’s important for researchers to be aware of these limitations, as they necessarily inform our understanding of how claims-based studies should be designed and interpreted.

Challenge #1: Treatment selection bias

Treatment selection bias occurs when patients are assigned to treatment based on some characteristic that also affects the outcome of interest. If patients with more severe disease are assigned to Drug A rather than Drug B, patients using Drug A may have worse outcomes and we might conclude that Drug B is more effective. Alternatively, if patients with a certain comorbidity are preferentially prescribed a different drug than those patients without the comorbidity – an example of channeling bias – we may conclude that this drug is associated with this comorbidity.

These conclusions would be too hasty, though. What we'd like to do is simulate a randomized trial, in which patients are assigned to treatment without regard for their personal characteristics. Methods such as propensity scores give us this option, but they are often unavailable to researchers working with claims data, because many disease characteristics are simply not recorded there.

An example might clarify this: imagine that you're trying to assess the effect of HAART (highly active anti-retroviral therapy) on mortality in HIV patients. Disease characteristics such as CD4 count are associated with both use of HAART and mortality, but are not recorded in claims data. We could adjust our analysis for observed factors such as age and time since diagnosis, but our result would still be biased. It's important, therefore, to understand whether any unrecorded covariates affect both treatment assignment and the outcome of interest, and to consider other data sources (such as disease registries) if they do.
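To make the problem concrete, here is a toy simulation with entirely made-up numbers: an unmeasured "severity" variable drives both treatment choice and mortality, so the naive comparison makes Drug A look harmful even though its true effect is protective. Stratifying on the confounder recovers the true effect, which is exactly what claims data often cannot support.

```python
import random

random.seed(42)

# Simulate patients: severity (unmeasured in claims) drives both
# treatment choice and mortality risk. All parameters are illustrative.
patients = []
for _ in range(100_000):
    severe = random.random() < 0.5
    # Channeling: severe patients are more likely to receive Drug A.
    drug_a = random.random() < (0.8 if severe else 0.2)
    # True effect: Drug A lowers risk by 5 points; severity adds 30.
    risk = 0.10 + 0.30 * severe - 0.05 * drug_a
    died = random.random() < risk
    patients.append((severe, drug_a, died))

def death_rate(group):
    return sum(died for _, _, died in group) / len(group)

a = [p for p in patients if p[1]]
b = [p for p in patients if not p[1]]

# Naive comparison: Drug A looks harmful, because sicker patients
# were channeled onto it.
naive_diff = death_rate(a) - death_rate(b)

# Stratifying on the confounder recovers the protective effect --
# impossible if, as in claims data, severity was never recorded.
strata_diffs = []
for sev in (True, False):
    a_s = [p for p in a if p[0] == sev]
    b_s = [p for p in b if p[0] == sev]
    strata_diffs.append(death_rate(a_s) - death_rate(b_s))
adjusted_diff = sum(strata_diffs) / 2

print(f"naive: {naive_diff:+.3f}, adjusted: {adjusted_diff:+.3f}")
```

The naive difference comes out positive (Drug A appears to increase mortality) while the stratified difference is negative (about the true -0.05), illustrating why unmeasured confounders, not sample size, are the binding constraint in claims studies.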

Challenge #2: Claims data don’t include how the prescription was written

The nature of pharmacy claims data is to record when patients pick up their medications. This creates excellent opportunities for studying resource use and adherence, but these data, unfortunately, lack information about when and how the prescription for these medications was written.

One effect of this is that we don't know how much time passes between when a drug is prescribed and when it is first dispensed. Clearly, if several months pass between the initial prescription and the patient finally picking up the drug from the pharmacy, that is time spent in non-adherence, yet we cannot capture it. In the case of primary non-adherence, where a prescribed drug is never picked up at all, the behavior cannot be detected, potentially interfering with our ability to understand the causes of adverse outcomes and to assess the need for adherence-improving interventions.
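A common fill-based adherence measure is the proportion of days covered (PDC), computed from exactly the fields claims data do contain: fill dates and days' supply. A minimal sketch (the dates and window are hypothetical) shows both how the measure works and its blind spot: it starts counting only once a fill exists, so primary non-adherence and any delay before the first fill are invisible.

```python
from datetime import date

def proportion_of_days_covered(fills, start, end):
    """PDC over an observation window, from (fill_date, days_supply) pairs.

    Claims only show fills, so any gap between the prescription being
    written and the first fill -- and primary non-adherence entirely --
    is invisible to this measure.
    """
    covered = set()
    window_days = (end - start).days + 1
    for fill_date, days_supply in fills:
        for offset in range(days_supply):
            day = fill_date.toordinal() + offset
            if start.toordinal() <= day <= end.toordinal():
                covered.add(day)
    return len(covered) / window_days

# Two 30-day fills over a 91-day window (Jan 1 - Mar 31, 2020):
fills = [(date(2020, 1, 1), 30), (date(2020, 2, 15), 30)]
pdc = proportion_of_days_covered(fills, date(2020, 1, 1), date(2020, 3, 31))
print(round(pdc, 3))  # 0.659 -- 60 covered days / 91-day window
```

If the prescription had actually been written in November 2019, the months of non-adherence before January 1 would never enter the calculation.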

Challenge #3: Errors in days’ supply

Days’ supply is essential for calculating adherence and resource use, but errors sometimes appear that can be difficult to work with. Some are clear entry errors, such as a technician keying in 310 days instead of 30. Payers usually reject claims with implausible days’ supply, but some such claims remain in the database.

Another issue is that certain errors in the days’ supply of drugs can be impossible to interpret. For example, if a drug is usually dispensed with an 84-day supply (i.e., 12 weeks) and a claim appears that has a 48-day supply, it’s impossible to know whether the prescriber had escalated the dose or the pharmacy staff had accidentally entered the days’ supply incorrectly. This is one of several reasons why it’s important to carefully consider imposing restrictions on the days’ supply for claims if this parameter is relevant to your research.

Errors such as these can significantly impact analyses that work with days’ supply of prescriptions, so it’s essential to be proactive about looking for cases where the days’ supply is not realistic or interpretable. Consider setting a realistic range to truncate days’ supply before you undertake your analysis.
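One way to be proactive is to separate claims with plausible days' supply from those needing review before any adherence calculation. The sketch below is a minimal illustration; the [1, 90] bounds and the claim records are hypothetical, and appropriate bounds depend on the drugs and dispensing patterns in your own study.

```python
def screen_days_supply(claims, lo=1, hi=90):
    """Split claims into plausible and flagged-for-review lists based on
    whether days' supply falls inside an analyst-chosen [lo, hi] range.
    """
    keep, flagged = [], []
    for claim in claims:
        target = keep if lo <= claim["days_supply"] <= hi else flagged
        target.append(claim)
    return keep, flagged

# Hypothetical pharmacy claims:
claims = [
    {"id": 1, "days_supply": 30},
    {"id": 2, "days_supply": 310},  # likely a mis-keyed 30
    {"id": 3, "days_supply": 84},   # standard 12-week supply
]
keep, flagged = screen_days_supply(claims)
print([c["id"] for c in keep], [c["id"] for c in flagged])  # [1, 3] [2]
```

Flagging rather than silently dropping or truncating keeps the ambiguous cases (like the 48-day example above) visible, so you can decide deliberately how to handle them.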

Challenge #4: Generalizing results from claims studies can be difficult

Claims databases are usually segmented by insurance type. For example, a commercial claims database contains only encounters by commercially insured patients and their dependents, excluding patients insured by Medicare and/or Medicaid, and it may include Medicare patients only if they carry supplementary insurance. Separating these populations into different databases can make it difficult, and sometimes unaffordable, for researchers to produce generalizable results, and it introduces complexity whenever databases must be merged.

These populations are all quite different from each other: commercially insured enrollees are generally healthier than Medicaid enrollees of the same age, and the "dual-eligibles" (enrollees in both Medicare and Medicaid) differ from individuals enrolled in just one of these programs. Since it's costly and sometimes infeasible to capture all of these patients in a single analysis, you may need to hone your research question so that it can be answered by a single database instead of trying to access them all. Fortunately, sampling weights are now common; although somewhat cumbersome to work with, they help generalize results within your age and insurance grouping.
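The mechanics of sampling weights are simple: each enrollee's contribution is rescaled so the sample better reflects the target population. A toy example with invented numbers (not drawn from any real database) shows how the weighted and unweighted estimates diverge when one group is under-sampled:

```python
def weighted_mean(values, weights):
    """Sampling-weighted mean: each observation counts in proportion
    to the number of target-population members it represents."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical annual spending from a commercial database, weighted
# toward a target population with more older enrollees.
spending = [2000, 2500, 6000, 7000]  # two younger, two older enrollees
weights = [1.0, 1.0, 2.0, 2.0]       # older enrollees under-sampled 2:1

print(weighted_mean(spending, weights))   # 5083.33...
print(sum(spending) / len(spending))      # unweighted: 4375.0
```

The weighted estimate is higher because the costlier, under-sampled group is restored to its true population share, which is exactly the within-grouping generalization the weights provide.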

In summary, claims databases have added immeasurable value to several fields of research by collecting information on the real-world behavior of clinicians and patients. Still, there are some significant challenges that need to be taken into account when considering using claims data. Finding a good scientific question that suits these data means understanding their limitations. These are a few of the most important ones, but anyone who works with these data long enough will be sure to discover challenges unique to their own research program.

Updated estimates of cost-effectiveness for plaque psoriasis treatments

Along with co-authors from ICER and The CHOICE Institute, I recently published a paper in JMCP titled, “Cost-effectiveness of targeted pharmacotherapy for moderate-to-severe plaque psoriasis.” In this publication, we sought to update estimates of cost-effectiveness for systemic therapies useful in the population of patients with psoriasis for whom methotrexate and phototherapy are not enough.

Starting in 1998, a class of drugs acting on Tumor Necrosis Factor alpha (TNFɑ) has been the mainstay of psoriasis treatment in this population. The drugs in this class, including adalimumab, etanercept, and infliximab, are still widely used due to their long history of safety and lower cost than some competitors. They are less effective than many new treatments, however, particularly drugs inhibiting interleukin-17 such as brodalumab, ixekizumab, and secukinumab.

This presents a significant challenge to decision-makers: is it better to initiate targeted treatment with a less effective, less costly option, or a more effective, costlier one? We found that the answer to this question is complicated by several current gaps in knowledge. First, there is some evidence that prior exposure to biologic drugs is associated with lower effectiveness in subsequent biologics. This means that the selection of a first targeted treatment must balance cost considerations with the possibility of losing effectiveness in subsequent targeted treatments if the first is not effective.

A related issue is that the duration of effectiveness (or “drug survival”) for each of these drugs is currently poorly characterized in the US context. Drug discontinuation and switching are significantly affected by policy considerations such as requirements for step therapy and restrictions on dose escalation. Therefore, while there is a reasonable amount of research on drug survival in Europe, it is not clear how well this information translates to the US.

Another challenge of performing cost-effectiveness research in this disease area is mapping utility weights onto trial outcomes. Every trial considered in our analysis reported efficacy as percentage change in the Psoriasis Area and Severity Index (PASI) from baseline. Because this is a relative rather than an absolute measure, we had to assume that baseline PASI scores were comparable across studies; in other words, that a given percent improvement in PASI corresponded to the same increase in health-related quality of life. This means that if one study’s population had less severe psoriasis at baseline, we probably overstated the utility benefit of that drug.

In light of these gaps in knowledge, our analytic strategy was to model a simulated cohort of patients with incident use of targeted drugs. After taking a first targeted drug, they could be switched to a second targeted drug or cease targeted therapy. We made the decision to limit patients to two lines of targeted treatment in order to keep the paper focused on the issue of initial treatment.
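The two-line structure can be sketched in a few lines of code. This is not the published model, and the response probabilities below are placeholders, but it shows the branching each simulated patient faces: respond to the first targeted drug, switch to a second, or cease targeted therapy.

```python
import random

random.seed(0)

def simulate_patient(p_respond_first=0.6, p_respond_second=0.5):
    """Toy two-line pathway with hypothetical response probabilities:
    stay on the first targeted drug if it works, otherwise try one
    second-line targeted drug, otherwise leave targeted therapy."""
    if random.random() < p_respond_first:
        return "stay on first-line"
    if random.random() < p_respond_second:
        return "switch to second-line"
    return "cease targeted therapy"

counts = {}
for _ in range(10_000):
    outcome = simulate_patient()
    counts[outcome] = counts.get(outcome, 0) + 1
print(counts)
```

In the full model, each branch would also accumulate drug costs and utility over time; capping the pathway at two lines keeps the analysis focused on the initial treatment choice.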

[Figure: cost-effectiveness frontier for plaque psoriasis treatments]

What we found is a nuanced picture of cost-effectiveness in this disease area. In agreement with older cost-effectiveness studies, we found that infliximab is the most cost-effective TNFɑ drug and, along with the PDE-4 inhibitor apremilast, is likely to be the most cost-effective treatment at lower willingness-to-pay (WTP) thresholds. However, at higher WTP thresholds of $150,000 per quality-adjusted life year and above, we found that the IL-17 inhibitors brodalumab and secukinumab become more likely to be the most cost-effective.
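The threshold-dependence of these results follows directly from how "most cost-effective" is determined: at a given willingness-to-pay, the preferred strategy is the one maximizing net monetary benefit, NMB = WTP x QALYs - cost. A sketch with hypothetical costs and QALYs (not our published estimates) reproduces the qualitative pattern of a switch as WTP rises:

```python
def most_cost_effective(strategies, wtp):
    """Return the strategy with the highest net monetary benefit,
    NMB = wtp * QALYs - cost, at the given willingness-to-pay."""
    return max(strategies, key=lambda s: wtp * s["qalys"] - s["cost"])

# Hypothetical cost/QALY pairs -- NOT the estimates from the paper.
strategies = [
    {"name": "cheaper, less effective", "cost": 150_000, "qalys": 9.0},
    {"name": "costlier, more effective", "cost": 360_000, "qalys": 10.5},
]

for wtp in (100_000, 150_000, 200_000):
    best = most_cost_effective(strategies, wtp)
    print(f"WTP ${wtp:,}/QALY -> {best['name']}")
```

With these illustrative numbers, the cheaper option wins at $100,000 per QALY and the costlier, more effective option wins at $150,000 and above, mirroring how infliximab and apremilast give way to the IL-17 inhibitors as the threshold rises.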

The ambiguity of these results suggests both the importance of closing the gaps in knowledge mentioned above and of considering factors beyond cost-effectiveness in coverage decisions. For example, apremilast is the only oral drug we considered and patients may be willing to trade lower effectiveness to avoid injections. Another consideration is that IL-17 inhibitors are contraindicated for patients with inflammatory bowel disease, suggesting that payers should make a variety of drug classes accessible in order to provide for all patients.

In summary, these results should be seen as provisional, not only because many important parameters are still uncertain, but also because several new drugs and biosimilars for plaque psoriasis are nearing release. Decision-makers will need to keep an eye on emerging evidence in order to make rational decisions about this costly and impactful class of drugs.

Open source value assessment needs open source economics

Three members of the Innovation and Value Initiative (IVI) recently published a paper entitled “Open-Source Tools for Value Assessment: A Promising Approach” in the Journal of Clinical Pathways. This paper lays out, in brief, some of the ways that open-source models can contribute to the challenging environment in which value assessment operates in the US.

Unlike many nations where cost-effectiveness analysis is widely used and accepted, the US has a highly decentralized healthcare system. Even when up-to-date, US-based models are available, they are unlikely to apply to every patient population. This matters because not only does treatment response vary between populations, but so does the very conception of value.

Meanwhile, healthcare decision makers must assess what evidence on value exists while simultaneously trying to assess its applicability to their patients, all without robust guidance on how to adapt the conclusions of modeling studies.

IVI has tried to change this by releasing an open-source microsimulation model for rheumatoid arthritis, a common disease whose treatment with biologics has become a significant driver of drug costs for many payers. This model is extremely flexible and speaks to the needs of healthcare decision makers by allowing modification of treatment sequences, of the elements considered in the definition of value, and even of whether results are presented as a cost-effectiveness analysis or a multi-criteria decision analysis. Better still, the software is released as both a convenient web app and an R package with fully open code.

This is a tremendous step forward for value assessment in the US and sets a new standard for openness in modeling. Still, I can’t help but wonder how this transition from proprietary, closed models to open models will be funded. After all, IVI is in a unique position, with funding from many large pharmaceutical companies and industry organizations. If every consulting company had to organize a consortium to fund its open-source modeling initiatives, this would quickly become very burdensome.

As the “Open-Source Tools” paper points out, IVI took its inspiration for its rheumatoid arthritis model from open-source software, and we can do the same in thinking about how open-source modeling efforts could be supported. Some companies who develop open-source software support themselves by offering paid support plans for their products. A typical example here would be Canonical, which develops the Ubuntu Linux distribution. While it offers its operating system for free to anyone who wants it, it also offers paid plans that include help with deployment and maintenance.

It’s hard to know whether the scale of a typical model’s distribution would allow for this source of income, though. While Linux users number in the millions, a typical value model may have just dozens of users. Competition is likely to be important to motivate the timely development and updating of models, but the question of funding needs to be solved before more developers can take part.

The real value of an open source model depends too on the data it uses. To truly customize a model to a patient population, more granular data on patient response needs to be made available from clinical trials and disease registries. Until this happens, the conclusions of models may be based on estimated shifts in response from small samples.

The shift toward open-source modeling is an important means of responding to the challenges presented by the US healthcare market. However, many problems remain unsolved that for now still prevent more models from being developed in an open and flexible way.

Health economics in five words

There was recently a Twitter trend of people trying to describe programming in five words. The responses ranged from funny to puzzling to inspiring.

At our celebration of the end of the academic year, students, post-docs, and faculty of the CHOICE Institute held a similar contest at our weekly seminar, instructing attendees to “describe health economics and outcomes research (HEOR) in five words.” Here are a few of my favorite entries:

  • “How to hurt less, cheap.” (Samantha Clark)
  • “Math predicting value of medicine.” (Blythe Adamson)
  • “Examining well-being trade-offs and technology.” (Doug Barthold)
  • “Yo, treat sick people cost-effectively.” (Shuxian Chen)
  • “To each his own evaluation.” (Nobody claimed this one, unfortunately!)
  • “What is beyond opportunity cost?” (Enrique Saldarriaga)

As researchers in HEOR, the challenge of trying to explain what we do to outsiders is familiar to us all. That’s why it was really fun to try to encapsulate our field into just a few words.