Dr. Mark Bounthavong’s Talk on Formulating Good Research Questions

by Enrique M. Saldarriaga and Jacinda Tran

On April 16, 2020, the ISPOR Student Chapter at the University of Washington hosted a webinar on how to formulate good research questions featuring Dr. Mark Bounthavong, PhD, PharmD, MPH. He discussed the features of compelling research questions, shared his own process for formulating them, presented best practices, and offered recommendations for students at all stages of their careers.

Dr. Bounthavong is a graduate of the UW CHOICE Institute and a prolific researcher with several years of experience in health economics and outcomes research (HEOR). He currently serves as a health economist at the VA Health Economics Resource Center and a Research Affiliate at Stanford University. His research interests include pharmacoeconomics, outcomes research, health economics, process and program evaluations, econometric methods, and evidence synthesis using Bayesian methods.

Our UW Student Chapter thanks Dr. Bounthavong for his insightful presentation and hopes our fellow researchers find this recording of his presentation to be a helpful resource.

Note: Dr. Bounthavong has authorized the publication of his talk in this post.

The Utility Function, Indifference Curves, and Healthcare

By Brennan T. Beal

Impetus For The Post

When I first learned about utility functions and their associated indifference curves, I was shown an intimidating figure that looked a bit like the image below. If you were lucky, you were shown a computer-generated image. The less fortunate had a professor furiously scribbling them onto a board.

Image source: https://opentextbc.ca/principlesofeconomics/back-matter/appendix-b-indifference-curves/

A few things were immediately of concern: why are there multiple indifference curves for one function if it only represents one consumer? Why are the curves moving? And… who is Natasha? So, while answering my own questions, I thought sharing the knowledge would be helpful. This post will hopefully provide a clearer description than most of us were given the first time around, and by the end you will understand:

  1. What indifference curves are and what they represent
  2. How a budget constraint relates to these indifference curves and the overall utility function
  3. How to optimize utility within these constraints (if you’re brave)

For the scope of this post, I’ll assume you have some fundamental understanding of utility theory.
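As a quick illustration of what these objects look like, here is a minimal sketch in R (not from the original post) that draws a few indifference curves for a Cobb-Douglas utility function alongside a budget line. The preference parameter, prices, and income are all hypothetical.

```r
# A minimal sketch: Cobb-Douglas indifference curves plus a budget line.
# All parameters below are hypothetical, chosen only for illustration.
alpha  <- 0.5                 # preference weight on good x
income <- 100                 # budget
px <- 2; py <- 4              # prices of goods x and y

x <- seq(0.5, 60, length.out = 200)

# For U(x, y) = x^alpha * y^(1 - alpha), the indifference curve at utility
# level U solves y = (U / x^alpha)^(1 / (1 - alpha))
indiff_y <- function(x, U) (U / x^alpha)^(1 / (1 - alpha))

plot(NA, xlim = c(0, 60), ylim = c(0, 40),
     xlab = "Good x", ylab = "Good y",
     main = "Indifference curves and a budget constraint")
for (U in c(10, 15, 20)) lines(x, indiff_y(x, U), col = "grey40")

# Budget line: px * x + py * y = income
abline(a = income / py, b = -px / py, col = "red", lwd = 2)
```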

Click here to continue reading my original post.

Expected Loss Curves

By Brennan T. Beal, PharmD

The Second Panel on Cost-Effectiveness in Health and Medicine recommends that model uncertainty be reflected by displaying the cost-effectiveness acceptability curve (CEAC) with the cost-effectiveness acceptability frontier (CEAF) overlaid (more on this can be seen here). However, on top of being relatively difficult to interpret, these graphical representations may miss a crucial part of decision-making: risk.

A risk-neutral approach to decision-making would mean choosing the strategy that is most likely to be cost-effective, regardless of what one stands to lose economically when that strategy is not cost-effective. We know, however, that decision-makers are often not risk-neutral. With this in mind, selecting a strategy based solely on its probability of being cost-effective could expose a decision-maker to unnecessary risk. The strategy most likely to be cost-effective is not always the optimal choice; rather, the optimal choice should be thought of as the strategy with the lowest expected loss.

Consider the following example:

Let us suppose that you want to compare two strategies (Strategy A and Strategy B) to see which will be optimal for your company. Your head statistician informs you that Strategy A will be cost-effective 70% of the time, and in the 70 times out of 100 that it is cost-effective you stand to gain $5 each time (i.e., you lose $0 in each of those 70 cases). She then tells you that every time you are wrong (30% of the time) you stand to lose $100. Your expected loss for Strategy A is therefore $30 (0.30 × $100). With that in mind, you also calculate the expected loss for Strategy B, and it turns out to be only $7 (an arbitrary figure for the sake of the example).
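To make the arithmetic concrete, here is a minimal sketch in R of this kind of expected-loss calculation from simulated outcomes. The numbers for Strategy B are hypothetical, chosen only so that its expected loss comes out near $7 while it remains less likely than Strategy A to be cost-effective.

```r
# A minimal sketch of the expected-loss arithmetic above, using simulated
# draws. The probabilities and loss amounts for Strategy B are made up.
set.seed(42)
n <- 10000

# Loss when a strategy is NOT cost-effective; $0 when it is
loss_A <- ifelse(runif(n) < 0.30, 100, 0)   # wrong 30% of the time, lose $100
loss_B <- ifelse(runif(n) < 0.50, 14, 0)    # wrong more often, but smaller losses

mean(loss_A)   # expected loss for Strategy A, roughly $30
mean(loss_B)   # expected loss for Strategy B, roughly $7
```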

In this example, Strategy B would be favored on the CEAF because it has the lowest expected loss, even though the CEAC would have shown it to be less likely to be cost-effective. So the CEAF at least tells us which strategy is optimal, but we are still left with a relatively confusing picture of cost-effectiveness.

Below are three hypothetical distributions of the incremental net benefit (INB) of Strategy B when compared to Strategy A. Simply stated, an INB distribution shows the probabilities of the possible monetary outcomes of choosing one strategy over the other.

Figure 1. Three hypothetical distributions of the incremental net benefit (INB) of Strategy B compared with Strategy A.

This example is described in greater detail in my recent blog entry on the topic of expected loss. For each distribution above, Strategy B is considered optimal because it has the lowest expected loss in each scenario. However, when the mean and median have opposite signs (as in the case of the right-skewed blue curve above, with a mean INB of $90 versus a median of -$75), considering only which strategy is most likely to be cost-effective will not lead the decision-maker to the optimal choice. For the blue curve, Strategy B has a lower chance of being cost-effective (46%), but an expected loss of $271 versus $361 for Strategy A.

Expected loss curves (ELCs) account for both the probability that a strategy is not cost-effective and how severe the consequences are when it is not. The ELC for each strategy plots its expected loss at each willingness-to-pay threshold, so the optimal strategy (the one with the lowest expected loss) can be identified at every threshold, providing a much clearer picture of risk for more informed decision-making.
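For readers who want to see how such curves are built, below is a minimal sketch in R of constructing expected loss curves from probabilistic sensitivity analysis output. The simulated costs and QALYs are entirely made up and are not the data behind the figures above.

```r
# A minimal sketch: expected loss curves from simulated PSA output.
# Costs and QALYs below are hypothetical.
set.seed(1)
n    <- 5000
cost <- cbind(A = rnorm(n, 10000, 1500), B = rnorm(n, 12000, 2000))
qaly <- cbind(A = rnorm(n, 1.00, 0.10),  B = rnorm(n, 1.05, 0.12))
wtp  <- seq(0, 100000, by = 1000)

expected_loss <- sapply(wtp, function(lambda) {
  nmb  <- lambda * qaly - cost        # net monetary benefit per draw
  best <- apply(nmb, 1, max)          # benefit of the best choice in each draw
  colMeans(best - nmb)                # expected loss for each strategy
})

# One curve per strategy; the lower curve at a given threshold is optimal
matplot(wtp, t(expected_loss), type = "l", lty = 1,
        xlab = "Willingness to pay per QALY", ylab = "Expected loss ($)")
legend("topright", legend = colnames(cost), lty = 1, col = 1:2)
```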

In my full blog entry, I cover:

  1. An in-depth explanation of ELCs;
  2. A working example of the mathematics and the associated R code; and
  3. An interactive example so you can see for yourself.


Are private insurers in the United States using cost-effectiveness analysis evidence to set their formularies?


By Elizabeth Brouwer

As prices rise for many prescription drugs in the United States (US), stakeholders have made efforts to curb the cost of medications with varying degrees of success. One option put forth to contain drug spending is to connect drug coverage and cost-sharing to value, with cost-effectiveness analysis being one of the primary measures of drug value.

In 2010, a payer in the Pacific Northwest implemented a formulary in which cost-sharing for prescription drugs was driven by cost-effectiveness evidence. This value-based formulary (VBF) had 5 tiers based on cost-effectiveness ranges that determined a patient’s copay amount, i.e., their level of cost-sharing (Table 1). Allowances were made for special cases in which a drug had no alternatives or treated a sensitive population, but the majority of drugs were assigned to tiers according to these cost-effectiveness ranges. A later analysis found that this VBF resulted in a net decrease (combining payer and patient spending) in medication expenditures of $8 per member per month, with no change in total medication or health services utilization. A 2018 literature review found slightly different (but still optimistic) results: value-based formulary design programs increased medication adherence without increasing health spending.

Table 1. Formulary design

Given the potential benefits of implementing value-based cost-sharing for prescription drugs, we wanted to know whether other private payers in the US were using cost-effectiveness evidence to set their drug formularies. If private payers were “moving toward value,” we would expect cost-sharing for high-value drugs to be falling relative to cost-sharing for low-value drugs (Figure 1).

Figure 1. Our hypothesis

To test this theory, we used claims data from a large portion of Americans with private, employer-sponsored health insurance to find the average out-of-pocket cost for each prescription drug in each year from 2010 to 2013. The collapsed claims data were then linked to the value designation (or “tier”) for each drug. We used a random effects model to see how out-of-pocket costs changed each year in each cost-effectiveness category. (For more details on our methods, please check out our paper, recently published in PharmacoEconomics.)
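For readers curious what such a specification might look like, below is a minimal sketch in R of a random-effects model of this general form, using the lme4 package and hypothetical variable names; see the published paper for the actual model and data.

```r
# A minimal sketch of a random-effects specification of this general kind.
# Variable names are hypothetical; this is not the published model.
library(lme4)

# drug_year: one row per drug per year, with the drug's average out-of-pocket
# cost, the calendar year, and its cost-effectiveness tier
re_model <- lmer(avg_oop_cost ~ factor(year) * ce_tier + (1 | drug_id),
                 data = drug_year)
summary(re_model)
```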

The results revealed a few interesting trends.

Cost-sharing for prescription drugs was trending toward value in those years, but in a very specific way. Average cost-sharing across all “tiers” decreased over the time frame, and cost-sharing for drugs with cost-effectiveness ratios below $10,000 per quality-adjusted life-year (QALY) fell at a faster rate than cost-sharing for drugs with ratios above that threshold. Within those two broad groups, however, there was no distinction in cost-sharing, even after accounting for generic status.

Additionally, the movement toward value that we saw was largely the result of increased use of generic drugs rather than increased use of more cost-effective drugs. Splitting the data by generic status showed that, within both the generic and brand-name categories, use did not shift toward higher-value drugs (Figure 2).

Figure 2. Average ICERs over time

Our results indicate that there is probably room in private drug formularies to further encourage the use of higher-value drug options and, conversely, to further discourage the use of lower-value drug options. This is particularly true for drugs with incremental cost-effectiveness ratios (ICERs) of roughly $10,000–$150,000 per QALY and above, where payers are largely ignoring differences in value.

One limitation of the analysis was that it was restricted to years 2010-2013. Whether private payers in the US have increased their use of value information since the implementation of the Affordable Care Act in 2014, or in response to continually rising drug prices, is an important question for further research.

In conclusion, there is evidence indicating payers have an opportunity to implement new (or expand existing) VBF programs. These programs have the potential to protect patient access to effective medical treatments while addressing issues with affordability in the US health care system.


Some challenges of working with claims databases

By Nathaniel Hendrix

Real-world evidence has become increasingly important as a data source for comparative effectiveness research, drug safety research, and adherence studies, among other types of research. In addition to sources such as electronic medical records, mobile data, and disease registries, much of the real-world evidence we use comes from large claims databases like Truven Health MarketScan or IQVIA, which record patients’ insurance claims for services and drugs. The enormous size of these databases means that researchers can detect subtle safety signals or study rare conditions that previously would have been out of reach.

Using these databases is not without its challenges, though. In this article, I’ll be discussing a few challenges that I’ve encountered as I’ve worked with faculty on a claims database project in the past year. It’s important for researchers to be aware of these limitations, as they necessarily inform our understanding of how claims-based studies should be designed and interpreted.

Challenge #1: Treatment selection bias

Treatment selection bias occurs when patients are assigned to treatment based on some characteristic that also affects the outcome of interest. If patients with more severe disease are preferentially given Drug A rather than Drug B, patients using Drug A may have worse outcomes, and we might wrongly conclude that Drug B is more effective. Alternatively, if patients with a certain comorbidity are preferentially prescribed a particular drug – an example of channeling bias – we may mistakenly attribute outcomes related to that comorbidity to the drug itself.

These conclusions would be too hasty, though. What we’d like to do is simulate a randomized trial, where patients are assigned to treatment without regard to their personal characteristics. Methods such as propensity scores offer this option, but they are often of limited use with claims data, because many disease characteristics are simply not recorded there.

An example might clarify this: imagine that you’re trying to assess the effect of HAART (highly active anti-retroviral therapy) on mortality in HIV patients. Disease characteristics such as CD4 count would be associated with both the use of HAART and mortality, but they are not recorded in claims data. We could adjust our analysis for other factors such as age and time since diagnosis, but our result would still be biased. It is important, therefore, to understand whether any covariates affect both treatment assignment and the outcome of interest, and to consider other data sources (such as disease registries) if they do.
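As an illustration, here is a minimal sketch in R of inverse-probability-of-treatment weighting built from a propensity score, using hypothetical variable names. The point of the example is that weights can only balance covariates that are actually recorded; an unmeasured confounder like CD4 count remains a source of bias.

```r
# A minimal sketch of propensity-score (IPTW) adjustment with measured
# covariates. All variable names are hypothetical.
library(dplyr)

# ps_data: one row per patient, with a 0/1 treatment indicator and covariates
ps_model <- glm(treated ~ age + sex + comorbidity_count + years_since_dx,
                data = ps_data, family = binomial())

ps_data <- ps_data %>%
  mutate(ps   = predict(ps_model, type = "response"),
         iptw = if_else(treated == 1, 1 / ps, 1 / (1 - ps)))

# Weighted outcome model; an unmeasured confounder such as CD4 count is
# NOT balanced by these weights, so residual bias can remain.
outcome_model <- glm(died ~ treated, data = ps_data,
                     family = quasibinomial(), weights = iptw)
summary(outcome_model)
```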

Challenge #2: Claims data don’t include how the prescription was written

The nature of pharmacy claims data is to record when patients pick up their medications. This creates excellent opportunities for studying resource use and adherence, but these data, unfortunately, lack information about when and how the prescription for these medications was written.

One effect of this is that we don’t know how much time passes between a drug’s being prescribed and when it’s first used. Clearly, if several months pass between the initial prescription and a patient finally picking up that drug from the pharmacy, that would be time spent in non-adherence. We’re not able to capture that time, though. In the case of primary non-adherence, where a prescription is written for a drug that is never picked up at all, this behavior cannot be detected, potentially interfering with our ability to understand the causes of adverse outcomes and to assess the need for interventions that can improve adherence.

Challenge #3: Errors in days’ supply

Days’ supply is essential for calculating adherence and resource use, but errors sometimes appear that can be difficult to work with. Sometimes these are clear entry errors, as when a technician enters 310 days instead of 30. The payer usually rejects claims with an unusual days’ supply, but some such claims remain in the database.

Another issue is that certain errors in the days’ supply of drugs can be impossible to interpret. For example, if a drug is usually dispensed with an 84-day supply (i.e., 12 weeks) and a claim appears that has a 48-day supply, it’s impossible to know whether the prescriber had escalated the dose or the pharmacy staff had accidentally entered the days’ supply incorrectly. This is one of several reasons why it’s important to carefully consider imposing restrictions on the days’ supply for claims if this parameter is relevant to your research.

Errors such as these can significantly impact analyses that work with days’ supply of prescriptions, so it’s essential to be proactive about looking for cases where the days’ supply is not realistic or interpretable. Consider setting a realistic range to truncate days’ supply before you undertake your analysis.
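One way to operationalize this is a simple plausibility screen before analysis. Below is a minimal sketch in R with hypothetical column names and limits; the appropriate range depends on the drugs you study.

```r
# A minimal sketch of screening days' supply before analysis.
# Column names and plausibility limits are hypothetical.
library(dplyr)

plausible_min <- 1
plausible_max <- 90   # e.g., nothing beyond a 90-day fill expected

rx_claims <- rx_claims %>%
  mutate(days_supply_flag = days_supply < plausible_min |
                            days_supply > plausible_max)

# Inspect flagged claims rather than silently dropping them
rx_claims %>% filter(days_supply_flag) %>% count(days_supply, sort = TRUE)

# One option is to exclude implausible values; truncating to the plausible
# range is another
analytic_claims <- rx_claims %>% filter(!days_supply_flag)
```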

Challenge #4: Generalizing results from claims studies can be difficult

Claims databases are usually grouped by insurance type. For example, a commercial claims database contains only encounters by commercially insured patients and their dependents, excluding patients insured by Medicare and/or Medicaid, and it may include only Medicare patients who carry supplementary insurance. Separating these populations into different databases can make it difficult, and sometimes unaffordable, for researchers to produce generalizable results, and it introduces complexity when databases must be merged.

These populations are quite different from one another: commercially insured enrollees are generally healthier than Medicaid enrollees of the same age, and the “dual-eligibles” – enrollees in both Medicare and Medicaid – differ from individuals enrolled in just one of these programs. Since it is costly and sometimes infeasible to capture all of these patients in a single analysis, you may need to hone your research question so it can be answered with a single database rather than trying to access them all. Fortunately, sampling weights are now commonly provided, which helps you generalize within your age and insurance grouping, even if they are somewhat cumbersome to work with.
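As an illustration, here is a minimal sketch in R of applying such weights with the survey package; the variable names are hypothetical, and the weight itself comes from, and is documented by, the specific claims database you use.

```r
# A minimal sketch of applying database-provided sampling weights with the
# survey package. Variable names are hypothetical.
library(survey)

design <- svydesign(ids = ~1, weights = ~sample_weight, data = claims_persons)

# Weighted prevalence of any statin use, overall and by age group
svymean(~any_statin_use, design)
svyby(~any_statin_use, ~age_group, design, svymean)
```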

In summary, claims databases have added immeasurable value to several fields of research by capturing the real-world behavior of clinicians and patients. Still, there are significant challenges to take into account when considering the use of claims data. Finding a good scientific question that suits these data means understanding their limitations. These are a few of the most important ones, but anyone who works with these data long enough is sure to discover challenges unique to their own research program.