Updated estimates of cost-effectiveness for plaque psoriasis treatments

Along with co-authors from ICER and The CHOICE Institute, I recently published a paper in JMCP titled, “Cost-effectiveness of targeted pharmacotherapy for moderate-to-severe plaque psoriasis.” In this publication, we sought to update estimates of cost-effectiveness for systemic therapies useful in the population of patients with psoriasis for whom methotrexate and phototherapy are not enough.

Starting in 1998, a class of drugs acting on tumor necrosis factor alpha (TNFα) has been the mainstay of psoriasis treatment in this population. The drugs in this class, including adalimumab, etanercept, and infliximab, are still widely used due to their long history of safety and their lower cost than some competitors. They are less effective than many newer treatments, however, particularly drugs inhibiting interleukin-17 such as brodalumab, ixekizumab, and secukinumab.

This presents a significant challenge to decision-makers: is it better to initiate targeted treatment with a less effective, less costly option, or a more effective, costlier one? We found that the answer to this question is complicated by several current gaps in knowledge. First, there is some evidence that prior exposure to biologic drugs is associated with lower effectiveness in subsequent biologics. This means that the selection of a first targeted treatment must balance cost considerations with the possibility of losing effectiveness in subsequent targeted treatments if the first is not effective.

A related issue is that the duration of effectiveness (or “drug survival”) for each of these drugs is currently poorly characterized in the US context. Drug discontinuation and switching are significantly affected by policy considerations such as requirements for step therapy and restrictions on dose escalation. Therefore, while there is a reasonable amount of research on drug survival in Europe, it is not clear how well this information translates to the US.

Another challenge in performing cost-effectiveness research in this disease area is the difficulty of mapping utility weights onto trial outcomes. Every drug considered in our analysis reported efficacy as the percentage change from baseline in the Psoriasis Area and Severity Index (PASI). Because this is not an absolute measure, it required us to assume that patients had comparable baseline PASI scores across studies. In other words, we had to assume that a given percent improvement in PASI was equivalent to a given increase in health-related quality of life. This means that if one study’s population had less severe psoriasis at baseline, we probably overstated the utility benefit of that drug.

In light of these gaps in knowledge, our analytic strategy was to model a simulated cohort of patients with incident use of targeted drugs. After taking a first targeted drug, they could be switched to a second targeted drug or cease targeted therapy. We made the decision to limit patients to two lines of targeted treatment in order to keep the paper focused on the issue of initial treatment.

Figure: Cost-effectiveness frontier for plaque psoriasis treatments

What we found is a nuanced picture of cost-effectiveness in this disease area. In agreement with older cost-effectiveness studies, we found that infliximab is the most cost-effective TNFα drug and, along with the PDE-4 inhibitor apremilast, is likely to be the most cost-effective treatment at lower willingness-to-pay (WTP) thresholds. However, at higher WTP thresholds of $150,000 per quality-adjusted life year and above, we found that the IL-17 inhibitors brodalumab and secukinumab become more likely to be the most cost-effective.

The ambiguity of these results suggests both the importance of closing the gaps in knowledge mentioned above and of considering factors beyond cost-effectiveness in coverage decisions. For example, apremilast is the only oral drug we considered and patients may be willing to trade lower effectiveness to avoid injections. Another consideration is that IL-17 inhibitors are contraindicated for patients with inflammatory bowel disease, suggesting that payers should make a variety of drug classes accessible in order to provide for all patients.

In summary, these results should be seen as provisional, not only because many important parameters are still uncertain, but also because several new drugs and biosimilars for plaque psoriasis are nearing release. Decision-makers will need to keep an eye on emerging evidence in order to make rational decisions about this costly and impactful class of drugs.

Economic Evaluation Methods Part I: Interpreting Cost-Effectiveness Acceptability Curves and Estimating Costs

By Erik Landaas, Elizabeth Brouwer, and Lotte Steuten

One of the main training activities at the CHOICE Institute at the University of Washington is instructing graduate students in how to perform economic evaluations of medical technologies. In each post of this two-part series, we give a brief overview of two important economic evaluation concepts. The concepts are independent of one another, and each discussion is meant to stand alone. The first post describes how to interpret a cost-effectiveness acceptability curve (CEAC) and then delves into ways of costing a health intervention. The second part of the series will describe two additional concepts: how to develop and interpret cost-effectiveness frontiers and how multi-criteria decision analysis (MCDA) can be used in Health Technology Assessment (HTA).


Cost-Effectiveness Acceptability Curve (CEAC)

The CEAC is a way to graphically present decision uncertainty around the expected incremental cost-effectiveness of healthcare technologies. A CEAC is created using the results of a probabilistic analysis (PA).[1] PA involves simultaneously drawing a set of input parameter values by randomly sampling from each parameter distribution, and then storing the model results. This is repeated many times (typically 1,000 to 10,000), resulting in a distribution of outputs that can be graphed on the cost-effectiveness plane. The CEAC reflects the proportion of results that are considered ‘favorable’ (i.e., cost-effective) in relation to a given cost-effectiveness threshold.

The primary goal of a CEAC graph is to inform coverage decisions among payers considering a new technology relative to one or more established technologies, which may include the standard of care. The CEAC enables a payer to determine, over a range of willingness-to-pay (WTP) thresholds, the probability that a medical technology is considered cost-effective in comparison to its appropriate comparator (e.g., usual care), given the information available at the time of the analysis. A WTP threshold is generally expressed in terms of societal willingness to pay for an additional life year or quality-adjusted life year (QALY) gained. In the US, WTP thresholds typically range between $50,000 and $150,000 per QALY.

The X-axis of a CEAC represents the range of WTP thresholds. The Y-axis represents the probability of each comparator being cost-effective at a given WTP threshold, and ranges between 0% and 100%. Thus, it simply reflects the proportion of simulated ICERs from the PA that fall below the corresponding thresholds on the X-axis.
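To make the construction concrete, here is a minimal Python sketch of how PA output becomes a CEAC. The model here is purely hypothetical: incremental costs and QALYs for a notional technology versus usual care are drawn from made-up normal distributions rather than from a real decision model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 5000  # number of PA draws

# Hypothetical PA output: incremental cost and incremental QALYs of the
# new technology vs. usual care (distributions are illustrative only).
inc_cost = rng.normal(loc=20_000, scale=5_000, size=n_sims)
inc_qaly = rng.normal(loc=0.25, scale=0.10, size=n_sims)

# For each WTP threshold, the CEAC value is the share of simulations in
# which incremental net monetary benefit is positive: WTP * dQALY - dCost > 0.
wtp_grid = np.arange(0, 200_001, 5_000)
ceac = [(wtp * inc_qaly - inc_cost > 0).mean() for wtp in wtp_grid]

for wtp, p in zip(wtp_grid[::8], ceac[::8]):
    print(f"WTP ${wtp:>9,}: P(cost-effective) = {p:.2f}")
```

Framing the decision rule as positive net monetary benefit is equivalent to the ICER falling below the threshold when QALYs are gained, and it sidesteps the sign ambiguities that make raw ICERs awkward to tally across simulations.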

Figure 1. The Cost-Effectiveness Acceptability Curve


Source: Coyle, Doug, et al. “Cost-effectiveness of new oral anticoagulants compared with warfarin in preventing stroke and other cardiovascular events in patients with atrial fibrillation.” Value in Health 16.4 (2013): 498-506.

Figure 1 shows CEACs for five different drugs, making it easy for the reader to see that at the lower end of the WTP threshold range (i.e., $0 – $20,000 per QALY), warfarin has the highest probability of being cost-effective (or, in this case, “optimal”). At WTP values above $20,000 per QALY, dabigatran has the highest probability of being cost-effective. All the other drugs have a lower probability of being cost-effective than warfarin and dabigatran at every WTP threshold. The cost-effectiveness acceptability frontier in Figure 1 follows along the top of all the curves and shows directly which of the five technologies has the highest probability of being cost-effective at each level of the WTP threshold.

To the extent that the unit price of the technology influences the decision uncertainty, a CEAC can offer insights to payers as well as manufacturers as they consider a value-based price. For example, a lower unit price for the drug may lower the ICER and, all else equal, this increases the probability that the new technology is considered cost-effective at a given WTP threshold. Note that when new technologies are priced such that the ICER falls just below the WTP for a QALY (e.g., an ICER of $99,999 when the WTP is $100,000), the decision uncertainty tends to be substantial, often around 50%. If decision uncertainty is perceived to be ‘unacceptably high’, collecting further information to reduce it may be recommended. Depending on the drivers of decision uncertainty, for example stochastic uncertainty in the efficacy parameters, performance-based risk agreements (PBRAs) or managed entry schemes may be appropriate tools to manage the risk.

Cost estimates

The numerator of most economic evaluations for health is the cost of a technology or intervention. There are several ways to arrive at that cost, and choice of method depends on the context of the intervention and the available data.

Two broad categories of costing methods are the bottom-up method and the top-down method. These methods, described below, are not mutually exclusive and may complement each other, although they often do not produce the same results.

Table: Comparison of bottom-up and top-down approaches to costing health care services

Source of Table: Mogyorosy Z, Smith P. The main methodological issues in costing health care services: a literature review. 2005.

The bottom-up method is also known as the ingredients approach or micro-costing. In this method, the analyst identifies all the items necessary to complete an intervention, such as medical supplies and clinician time, and adds them up to estimate the total cost. The main categories to consider when calculating costs via the bottom-up method are medical costs and non-medical costs. Medical costs can be direct, such as the supplies used to perform a surgery, or indirect, such as the food and bed used for inpatient care. Non-medical costs often include costs to the patient, such as transportation to the clinic or caregiver costs. The categories used when estimating the total cost of an intervention will depend on the perspective the analyst takes (e.g., the patient, health system, or societal perspective).

The bottom-up approach can be completed prospectively or retrospectively, and can be helpful for planning and budgeting. Because the method identifies and values each input, it allows for a clear breakdown as to where dollars are being spent. To be accurate, however, one must be able to identify all the necessary inputs for an intervention and know how to value capital inputs like MRI machines or hospital buildings. The calculations may also become unwieldy on a very large scale. The bottom-up approach is often used in global health research, where medical programs or governmental agencies supply specific items to implement an intervention, or in simple interventions where there are only a few necessary ingredients.

The top-down estimation approach takes the total cost of a project and divides it by the number of service units generated. In some cases, this is done by simply looking at the budget for a program or intervention and dividing that total by the number of patients. The top-down approach is useful because it is a simple, intuitive measurement that captures the actual amount of money spent on a project and the number of units produced, particularly for large projects or organizations. Compared to the bottom-up approach, the top-down approach can be much faster and cheaper. The top-down approach can only be used retrospectively, however, and may not allow for a breakdown of how the money was spent or identify variation between patients.
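The arithmetic behind both approaches is simple. As a concrete illustration, here is a minimal Python sketch contrasting them for a hypothetical clinic-based intervention; every unit cost, quantity, and budget figure below is invented for illustration:

```python
# Hypothetical ingredients for a simple clinic-based intervention
# (all unit costs and quantities are illustrative, not real estimates).
ingredients = {
    "nurse time (hours)":     {"unit_cost": 45.0,  "quantity": 0.5},
    "physician time (hours)": {"unit_cost": 120.0, "quantity": 0.25},
    "test kit":               {"unit_cost": 12.0,  "quantity": 1},
    "patient transport":      {"unit_cost": 8.0,   "quantity": 1},
}

# Bottom-up: value each input and sum.
bottom_up_cost = sum(i["unit_cost"] * i["quantity"] for i in ingredients.values())

# Top-down: divide total program spending by the number of patients served.
program_budget = 150_000.0
patients_served = 2_400
top_down_cost = program_budget / patients_served

print(f"Bottom-up cost per patient: ${bottom_up_cost:.2f}")
print(f"Top-down cost per patient:  ${top_down_cost:.2f}")
```

The two estimates rarely match exactly: the top-down figure absorbs overhead and inefficiencies that a bottom-up ingredient list can miss.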

While the final choice will depend on several factors, it makes sense to think through (or model) which of the cost inputs are likely to be most impactful on the model results. For example, the costs of lab tests may be estimated most accurately by a bottom-up costing approach. However, if these lab costs are likely to be a small fraction of the cost of treatment, say a million-dollar cure for cancer, then going through the motions of a bottom-up approach may not be the most efficient way to get your PhD project done in time. In other cases, however, a bottom-up approach may provide crucial insights that move the needle on the estimated cost-effectiveness of medical technologies, particularly in settings where a lack of existing datasets limits the potential of cost-effectiveness studies to inform decisions on the allocation of scarce healthcare resources.

[1] Fenwick, Elisabeth, Bernie J. O’Brien, and Andrew Briggs. “Cost-effectiveness acceptability curves – facts, fallacies and frequently asked questions.” Health Economics 13.5 (2004): 405-415.

Estimating elasticities from linear regressions

By Enrique Saldarriaga

This post aims to show how elasticities can be estimated directly from linear regressions.
Elasticity measures the association between two numeric variables in percentage terms; it is interpreted as the percentage change in one variable associated with a 1% change in another. Elasticities have served economics for a long time, basically because they allow comparison between very different settings where changes in levels (e.g., dollars) are difficult to interpret. Take for instance a company that produces both cars and chocolate bars. If the company wants to know how changes in the price of each product would affect its demand, in order to decide which price to increase and which to maintain, a comparison in levels would be a mess. A $100 change would mean nothing for the demand for cars but would destroy demand for chocolates. Similarly, a decrease of 100,000 units in chocolate bar demand is probably just a dent, but the same decrease could represent a significant portion of the car market. Changes in percentages are a way to standardize change and its consequences. In health economics, elasticities are becoming more common because they allow fair comparisons between variables whose behavior is heterogeneous across contexts.

The most common elasticities are price and income elasticities. The income elasticity of any variable is the percentage change in that variable associated with a 1% change in income; the price elasticity is the percentage change in a variable associated with a 1% change in its own price. In economics, that variable is usually the demand for a given good. In health economics, it could be any number of things.

For example, let’s say we want to estimate the income elasticity of BMI. The elasticity would be expressed as:

\displaystyle \epsilon_I = \dfrac{\Delta \% BMI}{\Delta \% Income} = \dfrac{\Delta BMI}{\Delta Income}* \dfrac{Income}{BMI}

where ∆ stands for variation and shows the change in BMI or Income from one point of their joint distribution to another. To make this change infinitesimal, and therefore estimate the elasticity over the whole distribution, we can express those changes with differentials:

\displaystyle \epsilon_I = \dfrac{dBMI}{dIncome} * \dfrac{Income}{BMI}

Now, to obtain elasticities directly from linear regressions we should use the logarithmic forms of the variables:

\displaystyle ln(BMI) = \beta_0 + \beta_1 * ln(Income) = f(Income)

The BMI is a function of income. The function can be expressed as:

\displaystyle f(Income) = BMI = e^{ \beta_0 + \beta_1 * ln(Income)}

To find the change in BMI associated with a change in income, we differentiate this function with respect to Income. Given the form of the function, we use the chain rule:

\displaystyle f(x)=g(h(x)) ; f'(x)=g'(h(x))*h'(x)

Where: \displaystyle g(h) = e^h , h(Income) = \beta_0 + \beta_1 *ln(Income)

\displaystyle g'(h)= e^h ,h'(Income) = \beta_1 * \dfrac{1}{Income}

Then:

\displaystyle f'(Income) = e^{ \beta_0 + \beta_1 * ln(Income)} * \beta_1 * \dfrac{1}{Income}

\displaystyle \dfrac{\partial BMI}{\partial Income} = e^{ln(BMI)} * \beta_1 * \dfrac{1}{Income} = BMI * \beta_1 * \dfrac{1}{Income}

\displaystyle \beta_1 = \dfrac{ \partial BMI}{ \partial Income} * \dfrac{Income}{BMI}

Thus, the β_1 coefficient is an estimate of the income elasticity. The same procedure can be applied to find the price elasticity. It is worth noting that if only the independent variable were in logarithmic form, the derivation would be simpler, and the coefficient would be:

\displaystyle \beta_1 = \dfrac{\partial BMI}{\partial Income} * Income

Then, to estimate the elasticity, it would be necessary to divide the coefficient by a measure of BMI, probably the mean in order to account for the whole distribution, as Sisira et al. did in their paper. In this case, however, it would be necessary to carefully select a good average measure. This method is useful if we prefer to interpret the other covariates’ coefficients in terms of BMI rather than ln(BMI).
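As a worked example, the sketch below simulates income and BMI data with a known elasticity and recovers it both ways: from the log-log coefficient directly, and from the lin-log coefficient divided by mean BMI. It assumes numpy and statsmodels are available; all data are simulated, not drawn from Sisira et al.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1_000

# Simulate data with a true income elasticity of BMI of 0.05.
income = rng.lognormal(mean=10.5, sigma=0.5, size=n)
bmi = np.exp(2.8 + 0.05 * np.log(income) + rng.normal(0, 0.1, size=n))

X = sm.add_constant(np.log(income))

# Log-log: the coefficient on ln(income) is the elasticity itself.
loglog = sm.OLS(np.log(bmi), X).fit()
print(f"log-log beta_1 (elasticity):  {loglog.params[1]:.3f}")

# Lin-log: beta_1 = dBMI/dIncome * Income, so dividing by an average BMI
# gives an approximate elasticity at the mean.
linlog = sm.OLS(bmi, X).fit()
print(f"lin-log beta_1 / mean(BMI):   {linlog.params[1] / bmi.mean():.3f}")
```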

Commonly Misunderstood Concepts in Pharmacoepidemiology

By Erik J. Landaas, MPH, PhD Student and Naomi Schwartz, MPH, PhD Student


Epidemiologic methods are central to the academic and research endeavors at the CHOICE Institute. The field of epidemiology fosters the critical thinking required for high-quality medical research. Pharmacoepidemiology is a sub-field of epidemiology and has been around since the 1970s. One of the driving forces behind the establishment of pharmacoepidemiology was the thalidomide disaster. In response to this tragedy, laws were enacted that gave the FDA authority to evaluate the efficacy of drugs. In addition, drug manufacturers were required to conduct clinical trials to provide evidence of a drug’s efficacy. This spawned a new and important body of work surrounding drug safety, efficacy, and post-marketing surveillance.[i]

In this article, we break down three of the more complex and often misunderstood concepts in pharmacoepidemiology: immortal time bias, protopathic bias, and drug exposure definition and measurement.


Immortal Time Bias

In pharmacoepidemiology studies, immortal time bias typically arises when the determination of an individual’s treatment status involves a delay or waiting period during which follow-up time is accrued. Immortal time is a period of follow-up during which, by design, the outcome of interest cannot occur. For example, the finding that Oscar winners live longer than non-winners is a result of immortal time bias: in order for an individual to win an Oscar, he/she must live long enough to receive the award. A pharmacoepidemiology example of this is depicted in Figure 1. A patient who receives a prescription may appear to survive longer because he/she must live long enough to receive that prescription, while a patient who does not receive a prescription has no such survival requirement. The most common way to avoid immortal time bias is to use a time-varying exposure variable. This allows subjects to contribute both unexposed (during the waiting period) and exposed person-time.
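A minimal sketch of that person-time splitting, using pandas and a hypothetical three-patient cohort (all follow-up times invented), might look like this:

```python
import pandas as pd

# Hypothetical cohort: follow-up starts at day 0; some patients receive
# a first dispensing only after a waiting period (None = never treated).
patients = pd.DataFrame({
    "id": [1, 2, 3],
    "rx_day": [90, None, 30],    # day of first dispensing
    "end_day": [400, 250, 365],  # death or end of follow-up
})

# Classifying patient 1's entire follow-up as "exposed" would wrongly
# credit the 90 event-free days before treatment to the exposed group.
# Splitting at the dispensing date lets the waiting period contribute
# unexposed person-time instead.
rows = []
for p in patients.itertuples():
    if pd.isna(p.rx_day):
        rows.append((p.id, "unexposed", 0, p.end_day))
    else:
        rows.append((p.id, "unexposed", 0, p.rx_day))        # immortal period
        rows.append((p.id, "exposed", p.rx_day, p.end_day))  # after initiation

person_time = pd.DataFrame(rows, columns=["id", "exposure", "start", "stop"])
print(person_time)
```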


Figure 1. Immortal Time Bias


Source: Lévesque, Linda E., et al. “Problem of immortal time bias in cohort studies: example using statins for preventing progression of diabetes.” BMJ 340 (2010): b5087.

Protopathic Bias or Reverse Causation

Protopathic bias occurs when a drug of interest is initiated to treat symptoms of the disease under study before it is diagnosed. For example, early symptoms of inflammatory bowel disease (IBD) are often consistent with the indications for prescribing proton pump inhibitors (PPIs). Thus, many individuals who develop IBD have a history of PPI use. A study to investigate the association between PPIs and subsequent IBD would likely conclude that taking PPIs causes IBD when, in fact, the IBD was present (but undiagnosed) before the PPIs were prescribed.  This scenario is illustrated by the following steps:

  • Patient has early symptoms of an underlying disease (e.g. acid reflux)
  • Patient goes to his/her doctor and gets a drug to address symptoms (e.g. PPI)
  • Patient goes on to receive a diagnosis of IBD (months or even years later)

It is easy to conclude from the above scenario that PPIs cause IBD; however, the acid reflux was actually a manifestation of underlying IBD that had not yet been diagnosed. Protopathic bias occurs in this case because of the lag time between first symptoms and diagnosis. One effective way to address protopathic bias is to exclude exposures during the prodromal period of the disease of interest.
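One way to operationalize that exclusion is a lag window: exposures initiated within an assumed prodromal period before diagnosis are not counted. A minimal pandas sketch with hypothetical dates and an assumed 12-month window:

```python
import pandas as pd

# Hypothetical first PPI dispensing and IBD diagnosis dates.
df = pd.DataFrame({
    "id": [1, 2, 3],
    "first_ppi": pd.to_datetime(["2015-03-01", "2016-07-15", "2014-01-10"]),
    "ibd_dx":    pd.to_datetime(["2015-09-01", "2019-02-01", "2014-03-01"]),
})

# Assumed prodromal period; the window should reflect what is known
# about the natural history of the disease under study.
LAG = pd.DateOffset(months=12)

# Exposures starting inside the lag window are excluded from "exposed".
df["in_lag_window"] = df["first_ppi"] >= (df["ibd_dx"] - LAG)
df["counted_as_exposed"] = ~df["in_lag_window"]
print(df[["id", "in_lag_window", "counted_as_exposed"]])
```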


Drug Exposure Definition and Measurement 

Defining and classifying exposure to a drug is critical to the validity of pharmacoepidemiology studies. Most pharmacoepidemiology studies use proxies for drug exposure, because it is often impractical or impossible to measure directly (e.g. observing a patient take a drug, monitoring blood levels). In lieu of actual exposure data, exposure ascertainment is typically based on medication dispensing records. These records can be ascertained from electronic health records, pharmacies, pharmacy benefit managers (PBMs), and other available healthcare data repositories. Some of the most comprehensive drug exposure data are available among Northern European countries and large integrated health systems such as Kaiser Permanente in the United States. Some strengths of using dispensing records to gather exposure data are:

  • Easy to ascertain and relatively inexpensive
  • No primary data collection
  • Often available for large sample sizes
  • Can be population based
  • No recall or interviewer bias
  • Linkable to other types of data such as diagnostic codes and labs

Limitations of dispensing records as a data source include:

  • Completeness can be an issue
  • Usually does not capture over-the-counter (OTC) drugs
  • Dispensing does not guarantee ingestion
  • Often lacks indication for use
  • Must make some assumptions to calculate dose and duration of use (see the sketch below)
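As a sketch of those assumptions, the following Python snippet builds continuous exposure episodes from fill dates and days supply, with an assumed 15-day grace period between fills; the records and parameter choices are hypothetical, and real studies must justify their own values:

```python
import pandas as pd

# Hypothetical dispensing records for one patient.
fills = pd.DataFrame({
    "fill_date": pd.to_datetime(["2020-01-01", "2020-02-05", "2020-05-01"]),
    "days_supply": [30, 30, 30],
})

GRACE = 15  # assumed allowable gap (days) between the end of one fill's
            # supply and the next fill before exposure is considered stopped

episodes = []
start = end = None
for f in fills.itertuples():
    runout = f.fill_date + pd.Timedelta(days=f.days_supply)
    if start is None:
        start, end = f.fill_date, runout
    elif f.fill_date <= end + pd.Timedelta(days=GRACE):
        end = max(end, runout)         # contiguous fill extends the episode
    else:
        episodes.append((start, end))  # gap too long: close the episode
        start, end = f.fill_date, runout
episodes.append((start, end))

print(episodes)  # two episodes: Jan 1 - Mar 6 and May 1 - May 31
```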

Some studies collect drug exposure data using self-report methods (e.g., interviews or surveys). These methods are useful when the drug of interest is OTC and thus not captured by dispensing records. However, self-reported data are subject to recall bias and require additional considerations when interpreting results. Alternatively, some large epidemiologic studies ask patients to bring in all their medications to their study interviews (a “brown bag” of medications). This can be a more reliable method of collecting medication information than self-report.

It is also important to consider the risk of misclassification of exposure. When interpreting results, remember that differential misclassification (different for those with and without disease) can result in either an inflated measure of association or a measure of association that is closer to the null. In contrast, non-differential misclassification (unrelated to the occurrence or presence of disease) shifts the measure of association toward the null, as the sketch below illustrates. For further guidance on defining drug exposure, see Figure 2.
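A small numeric sketch (hypothetical counts) makes the attenuation visible:

```python
# Hypothetical true exposure-by-disease counts.
a, b = 300, 700  # truly exposed: cases, non-cases
c, d = 100, 900  # truly unexposed: cases, non-cases
print(f"True OR: {(a * d) / (b * c):.2f}")  # ~3.86

# Non-differential misclassification: the exposure measure misses 20% of
# the truly exposed, in cases and non-cases alike (perfect specificity).
sens = 0.8
a2, b2 = a * sens, b * sens                      # recorded as exposed
c2, d2 = c + a * (1 - sens), d + b * (1 - sens)  # missed exposed land here
print(f"Observed OR: {(a2 * d2) / (b2 * c2):.2f}")  # ~2.79, pulled toward null
```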


Figure 2. Checklist: Key considerations for defining drug exposure

Velentgas, Priscilla, et al., eds. Developing a protocol for observational comparative effectiveness research: a user’s guide. Government Printing Office, 2013.

As alluded to above, pharmacoepidemiology is a field with complex research methods. We hope this article clarifies these three challenging concepts.


[i] Balcik, Pinar, and Gulcan Kahraman. “Pharmacoepidemiology.” IOSR Journal of Pharmacy 6.2 (February 2016): 57-62.

Vaccine education in Africa: unmet needs and current programs

Written by Karen Guo.

Disease burden:

Childhood immunization – inducing immunity by administering a vaccine – almost guarantees protection from many major diseases. Childhood vaccination prevents 2 million deaths per year worldwide and is widely considered to be overwhelmingly good by the scientific community. However, 2.5 million deaths a year continue to be caused by vaccine-preventable diseases, mainly in Africa and Asia among children less than 5 years old.

Demand-related factors, such as parents’ knowledge about vaccination and immunization and their attitudes towards them, are also likely to influence uptake. What remains unclear, however, is whether people’s attitudes are more strongly influenced by the perceived benefits of vaccination or by the perceived risks of not being vaccinated.


Unmet education needs:

Parents’ knowledge about vaccinations is poor, and the knowledge they do have is often wrong. There appears to be no association between parents’ knowledge and vaccination coverage rates, and the public seems to accept vaccination despite limited knowledge about it. One thing is clear, however: when parents resist vaccination, it is because they want to protect their children from harm. In 2003, political and religious leaders in three Nigerian states boycotted a WHO polio vaccination campaign, claiming that the vaccine caused sterility and AIDS. Similarly, certain Hindu and Muslim groups in India have long held the belief that vaccination is a covert method of family planning, primarily targeting Muslims. The greater acceptance of vaccination found among Javanese migrants as opposed to Acehnese villagers in the same area has been attributed to the former’s more positive cultural attitudes towards health; both groups were found to have an equally poor understanding of vaccination and health in general. Similarly, followers of the Aga Khan in Pakistan were found to be receptive to ‘biomedical’ or ‘western’ medicine and reasoning despite the fact that as a group they were largely illiterate and understood little about vaccination. Cultural receptivity to perceived modernity and education, as well as trust in health workers, were considered the most important factors influencing attitudes. In short, knowing little about vaccination does not necessarily translate into negative attitudes towards it; factors such as trust (e.g., in health-care providers or ‘western’ medicine) and culture may be more influential. The impact of high levels of knowledge on subsequent attitudes towards vaccination is unknown.

The fundamental question is whether or not resources should be invested in improving parents’ knowledge of and attitudes towards vaccination. Although the evidence is unclear, it is commonly believed that strengthening advocacy, communication, and social mobilization will enhance informed and willing participation in vaccination programs, and that vaccination strategies are likely to be more successful if they are based on an understanding of sociocultural behavior. Yet these approaches are not routinely incorporated into vaccination policy. Since factors influencing demand vary greatly by region and context, findings from one population cannot always be extrapolated to another. Thus, simple operational research into local knowledge and attitudes should become an essential part of every vaccination campaign. Current research into parents’ knowledge of and attitudes towards childhood vaccination is disproportionately low considering the enormous scale and relevance of this issue. For such efforts to be successful, parents must be empowered to freely and clearly express their attitudes towards childhood vaccination.

Current vaccine education program in Africa:

In line with the principles and areas of work outlined in the Global Vaccine Action Plan and under the advice of both the Strategic Advisory Group of Experts (SAGE) on immunization and the Task Force on Immunization (TFI), WHO/AFRO is taking steps to address vaccine preventable diseases by implementing strategies for reaching all eligible persons with effective vaccines. One of the strategies is the implementation of the African Vaccination Week (AVW), which provides a platform for Member States to speak through one collective voice to advocate for immunization as a public health priority in the Region, and to achieve high immunization coverage. The overarching objective of the initiative is to target people with limited access to regular health services, thereby working to close the gaps in immunization.

The overarching slogan of AVW is “Vaccinated communities, Healthy communities”. Each year, a suitable theme is chosen to reflect current regional priorities and public health realities. The first AVW was celebrated in April 2011 under the theme “Put mothers and children first – Vaccinate and stop polio now”. That year, 40 countries in the Region organized activities to celebrate the event. In subsequent years, countries have continued to conduct vaccination campaigns and catch-up vaccination activities, carry out health promotion activities, and provide other child survival interventions.

To explore the shape of one AVW in particular, the 2016 edition was organized under the theme “Close the immunization gap. Stay polio free!” The kick-off occurred on April 24, 2016, the same day as the kick-off of World Immunization Week (WIW) and of the vaccination weeks in the other five WHO regions. That year’s AVW also coincided with the globally synchronized switch from trivalent oral polio vaccine (tOPV) to bivalent OPV (bOPV) during the period from April 17 to May 1, 2016. It also followed two important events that had taken place during the previous six months: Nigeria’s removal from the list of polio-endemic countries in September 2015 (highlighting the need for countries to stay vigilant in the fight against polio) and the first ever Ministerial Conference on Immunization in Africa (MCIA), held in February 2016 in Addis Ababa, Ethiopia. The Regional launch was held in Ganta, Nimba County, Liberia on April 25, 2016, during a colorful function chaired by the Deputy Minister of Health Services in the presence of high-level officials and community leaders. The event was combined with the celebration of World Malaria Day and the introduction of two new vaccines into the national immunization schedule (rotavirus vaccine in the entire country and human papillomavirus (HPV) vaccine as a demonstration project). The Ministry of Health, with support from the Ministry of Education, UNICEF, and WHO, conducted a national quiz on immunization; information on immunization was sent to schools all over the country to equip students with knowledge prior to the quizzes held at district, regional, and national levels. The media were engaged to disseminate messages and information, especially on polio, hepatitis B vaccination, and the HPV vaccine. Advocacy for mothers to attend antenatal care and to deliver at health facilities was heightened, and mothers were also tested for HIV to prevent mother-to-child transmission.

Integration of other interventions with immunization during AVW in the African Region is common and has shown potential for improving immunization coverage, as this dedicated period is used both for catch-up campaigns and periodic intensified routine immunization. While its impact may call for further examination, it is a potential platform for integrated delivery of health interventions to people with limited access to regular health services.

Open source value assessment needs open source economics

Three members of the Innovation and Value Initiative (IVI) recently published a paper entitled “Open-Source Tools for Value Assessment: A Promising Approach” in the Journal of Clinical Pathways. This paper lays out, in brief, some of the ways that open-source models can contribute to the challenging environment in which value assessment operates in the US.

Unlike many nations where cost-effectiveness analysis is widely used and accepted, the US has a highly decentralized healthcare system. Even when up-to-date US-based models are available, they are likely not applicable to every patient population. This matters because not only does treatment response vary between populations, but so does the conception of value.

Meanwhile, healthcare decision makers must assess what evidence on value exists while simultaneously trying to assess its applicability to their patients, all without robust guidance on how to adapt the conclusions of modeling studies.

IVI has tried to change this by releasing an open-source microsimulation model for rheumatoid arthritis – a common disease whose treatment with biologics has become a significant driver of drug costs for many payers. This model is extremely flexible and speaks to the needs of healthcare decision makers by allowing for modification of treatment sequences, elements considered in the definition of value, and even whether results are formatted as a cost-effectiveness analysis or a multi-criteria decision analysis. Better still, this software is released as both a convenient web-app and as an R package with fully open code.

This is a tremendous step forward for value assessment in the US and sets a new standard for openness in modeling. Still, I can’t help but wonder how this transition from proprietary, closed models to open models will be funded. After all, IVI is in a unique position, with funding from many large pharmaceutical companies and industry organizations. If every consulting company had to organize a consortium to fund its open-source modeling initiatives, this would quickly become very burdensome.

As the “Open-Source Tools” paper points out, IVI took its inspiration for its rheumatoid arthritis model from open-source software, and we can do the same in thinking about how open-source modeling efforts could be supported. Some companies who develop open-source software support themselves by offering paid support plans for their products. A typical example here would be Canonical, which develops the Ubuntu Linux distribution. While it offers its operating system for free to anyone who wants it, it also offers paid plans that include help with deployment and maintenance.

It’s hard to know whether the scale of a typical model’s distribution would allow for this source of income, though. While Linux users number in the millions, a typical value model may have just dozens of users. Competition is likely to be important to motivate the timely development and updating of models, but the question of funding needs to be solved before more developers can take part.

The real value of an open-source model depends, too, on the data it uses. To truly customize a model to a patient population, more granular data on patient response need to be made available from clinical trials and disease registries. Until this happens, the conclusions of models may be based on estimated shifts in response from small samples.

The shift toward open-source modeling is an important means of responding to the challenges presented by the US healthcare market. However, many problems remain unsolved that for now still prevent more models from being developed in an open and flexible way.

Alumna interview: Meng Li

Meng Li is a recent graduate of The CHOICE Institute who defended her dissertation, “The Real Option Value of Life and Innovation,” on May 8, 2018. In addition to her dissertation research, she has published on a wide range of topics including the cost-effectiveness of liver transplantation in organic acidemias, risk-sharing agreements involving indication-specific pricing, and the acceptability of cervical cancer screening in rural Uganda. Her website is mlinternational.org.

What was your dissertation about?

My dissertation was about the real option value of medical technologies. The real option value of a technology refers to the value of enabling patients to live longer so that they can take advantage of future breakthroughs. In my dissertation, I answered two closely related questions: (1) do patients consider real option value when they make their treatment decisions; and (2) how should real option value from technology advancement be incorporated in economic evaluations of medical technologies, and what is the potential impact? To answer the first question, I studied metastatic melanoma patients from 2008 to 2011 and examined whether they changed their treatment decisions after the announcements of the results of the breakthrough drug ipilimumab’s phase II and phase III clinical testing. The idea behind this was that, if patients considered real option value – the value of taking advantage of future innovations – they might be more likely to undergo treatments that can extend their lives. In my analysis, I found that in anticipation of ipilimumab’s arrival, metastatic melanoma patients were more likely to undergo surgical resection of metastasis, which was consistent with my hypothesis. To answer the second question, I developed methods to project likely future approvals in metastatic melanoma and potential future improvements in mortality in this patient population, and incorporated them into a cost-effectiveness analysis. I found that the incremental QALYs increased by about 5-7% after accounting for future innovations, and the ICERs decreased by about 0-2%.

How did you arrive at that topic? When did you know that this is what you wanted to study?

I have long been interested in pricing and value assessment of medical technologies. In the short proposal phase, I brainstormed with Lou Garrison (my dissertation committee chair) and explored several topics in that area. Around that time, ISPOR also started to form a special task force to study US value assessment frameworks and Lou was the chair of the task force. As a result, a lot of our discussions back then were around the economic foundations of cost-effectiveness analysis, and benefits of a technology that are usually not accounted for in a conventional cost-effectiveness model. Real option value is one of those benefits, and it is intuitive conceptually, and relatively easy to operationalize.

What was your daily schedule like when you were working on your dissertation?

I did not have a fixed schedule when I was working on my dissertation. I usually had several research projects going on at the same time, and I tried to balance doing side projects with working on my dissertation. However, instead of spending a couple of hours a day on it, I liked to have a few uninterrupted days when I could immerse myself in my dissertation research.

In what quarter did you submit your final short proposal and in what quarter did you graduate/defend? What were some factors that determined your dissertation timeline?

I started brainstorming ideas for my dissertation in the summer of 2015, after my second year; submitted my short proposal in March 2016; defended my dissertation proposal in February 2017; and defended my dissertation in May 2018. I would say my dissertation had a relatively slow start, as there was little existing research on the particular topic I was studying. A lot of time was spent thinking through the theory and developing the research plan. I probably could have finished sooner, but I decided to follow the school cycle and graduate in the spring. I also wanted to be well prepared for my general and final exams and to submit my dissertation papers to journals before I graduated.

How did you fund your dissertation?

My dissertation was not funded by any particular grant. However, I was one of the TAs for the Certificate Program in Health Economics & Outcomes Research in the Department of Pharmacy, which supported me financially during my last year of graduate school. Before that, I was a research assistant on various projects, which provided me with financial support during my first four years of graduate school.

What will come next for you (or has come next for you)? What have you learned about finding work after school?

I will be joining the Schaeffer Center for Health Economics & Policy at the University of Southern California as a postdoctoral scholar. My search for jobs in the past few months was mostly focused on postdoc positions. I searched job boards (Indeed and LinkedIn), career pages of professional societies (ISPOR and AcademyHealth), and websites of the groups that I am interested in working with. My advisors and colleagues also connected me with groups where there might potentially be opportunities.