A Brief Introduction to Infectious Disease Modelling: Simulating a Hypothetical COVID-19 Outbreak

By Enrique M. Saldarriaga

Introduction

The COVID-19 pandemic has boosted interest in mathematical models of infectious diseases. In this entry, I will briefly introduce some of these models and provide R code to simulate an outbreak of COVID-19.

These models synthesize multiple sources of information into equations that aim to describe the evolution of a disease and make predictions. When used correctly, they can be incredibly powerful tools to explain a chaotic and complex reality, to evaluate policy options and inform decision-making, and to uncover the hidden mechanisms that drive an epidemic, among other uses.

Infectious diseases do not arise in isolation within each person; they are transmitted through contact with a pathogen. We therefore need to understand the mechanisms by which a susceptible person establishes effective contact (i.e., contact that results in transmission; sexually transmitted diseases are a good example) with someone who is infected with that pathogen. At the population level, disease prevalence is considered a risk factor for incidence: the higher the proportion of people living with a disease, the higher the likelihood that an infectious person comes into contact with a susceptible person. This relationship between incidence and prevalence can be characterized using dynamic models. Here, the probability of getting infected is determined by the probability of contact with an infectious person (or an animal, in the case of vector-borne diseases such as malaria), which is given by the prevalence. A contact that results in an infection is called an effective susceptible-infected contact.

Infectious and non-communicable disease models have substantial similarities: both can be compartmental or agent-based (microsimulation), and either deterministic (static transition probabilities) or stochastic (transition probabilities are random draws from a specified distribution). In any case, the choice of model is determined by the scope and purpose of the analysis and, often, by the target audience for results dissemination.

In the following section I will describe compartmental, deterministic, closed-cohort models. In a closed-cohort model we assume no births or deaths, so the population remains constant over time.

Model Types

The Susceptible-Infectious (SI) Model. This is the most basic infectious disease model. It is characterized by two state variables or compartments: Susceptible (S) and Infectious (I). Here we model a single transition, and once all susceptibles are infected, the epidemic is over (there are no deaths in this model). The transition is driven by the transmission coefficient. This is a very important concept because, regardless of model type, this parameter determines the rate at which people get infected. It is usually denoted by lambda, λ, and it is the product of the infectivity, or probability of transmission per contact (ρ), the contact rate in a given period (c), and the prevalence of infection (I/N, where N is the total population): λ = c * ρ * I/N. At any point in time, and in every model described here, the number of susceptibles decreases by λ * S.
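As a quick illustration, here is a minimal R sketch of one discrete time step in an SI model. The parameter values (contact rate, infectivity, population size, initial counts) are illustrative assumptions, not values from the post.

    # One discrete time step of an SI model (illustrative parameter values)
    c_rate <- 10                     # contacts per person per time step
    rho    <- 0.05                   # probability of transmission per contact
    N      <- 1000                   # total population (closed cohort)
    S      <- 999                    # susceptible persons
    I      <- 1                      # infectious persons

    lambda <- c_rate * rho * I / N   # transmission coefficient: c * rho * I/N
    new_infections <- lambda * S     # susceptibles infected in this step
    S <- S - new_infections
    I <- I + new_infections
    c(S = S, I = I, lambda = lambda)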

The Susceptible-Infectious-Recovered (SIR) Model. In addition to susceptible and infectious, the SIR model includes a recovered (R) compartment. R contains people who were infected and have overcome the disease. The rate of transition from I to R is given by the inverse of the disease duration, also known as the recovery rate (γ). Some diseases (e.g., measles) confer immunity after infection, but others do not. To capture the latter, a SIRS (susceptible-infected-recovered-susceptible) model is more appropriate, as it allows those who do not develop immunity to transition back to susceptible.

The Susceptible-Exposed-Infectious-Recovered (SEIR) Model. This model adds an exposed (E) compartment. The exposed are all persons who have been infected but are not yet symptomatic and, more importantly, not yet infectious. Infectious persons are the only ones capable of spreading the disease; hence, an accurate count of them is very important. In a SEIR model, the transition between S and E is given by lambda (λ), and the transition between E and I is given by the inverse of the latency or incubation period (σ).

[Figure: SEIR model compartment diagram]

COVID-19 Outbreak Example

I am going to simulate a COVID-19 outbreak using the SEIR model described above. All parameters have been obtained from the MIDAS Network repository – an excellent and publicly available compilation of COVID-19 parameters.

Let’s model the transitions between compartments, considering a one-time-step increment:

S(t+1) = S(t) - λ * S(t)
E(t+1) = E(t) + λ * S(t) - σ * E(t)
I(t+1) = I(t) + σ * E(t) - γ * I(t)
R(t+1) = R(t) + γ * I(t)

where λ = c * ρ * I(t)/N.
Letting the time step shrink to zero, i.e., taking the derivative of each compartment with respect to t, we obtain the change in every compartment at any point in time:

dS/dt = -λ * S
dE/dt = λ * S - σ * E
dI/dt = σ * E - γ * I
dR/dt = γ * I

With this in mind, let’s go to the R code to see how to implement the simulation.
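As an illustration, a minimal sketch of such a simulation might look like the following. It integrates the equations above with the deSolve package, using the parameters reported in the next section; this is an assumed implementation, not necessarily the original code.

    # Minimal SEIR sketch (illustrative; the original code may differ)
    library(deSolve)

    seir <- function(t, y, parms) {
      with(as.list(c(y, parms)), {
        lambda <- beta * I / N        # force of infection: c * rho * I/N
        dS <- -lambda * S
        dE <-  lambda * S - sigma * E
        dI <-  sigma * E - gamma * I
        dR <-  gamma * I
        list(c(dS, dE, dI, dR))
      })
    }

    parms <- c(beta  = 1.5,      # c * rho: effective contacts per day
               sigma = 1 / 4.2,  # 1 / latency period (days)
               gamma = 1 / 20,   # 1 / infectious period (days)
               N     = 1e6)      # total population
    init <- c(S = 1e6 - 1, E = 0, I = 1, R = 0)  # one person infected at day 0

    out <- as.data.frame(ode(y = init, times = 0:365, func = seir, parms = parms))
    out[which.max(out$I), c("time", "I")]        # day and size of the epidemic peak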

COVID-19 Example Results

We model an outbreak for one year, using the following parameters: c * ρ = 1.5, σ = 1/4.2, and γ = 1/20, for a population of 1 million in which one person was already infected. The following figure describes the outbreak.

[Figure: simulated SEIR outbreak over one year, c * ρ = 1.5]

We can see a very steep increase in the number of infected, which peaks at 625,095 infections on the 37th day of the outbreak. As is often pointed out, this rapid increase in cases can overload health systems, limiting many people’s access to care.

How can we flatten the curve? One intervention to contain the COVID-19 pandemic was to increase the physical distance between people. The objective was to reduce the probability of an effective susceptible-infected contact. In modelling terms, this directly reduces c * ρ and therefore λ.
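In the sketch above, this amounts simply to lowering the assumed beta parameter (c * ρ) and re-running the model:

    parms["beta"] <- 0.6   # stronger distancing: fewer effective contacts per day
    out2 <- as.data.frame(ode(y = init, times = 0:365, func = seir, parms = parms))
    out2[which.max(out2$I), c("time", "I")]      # later and lower epidemic peak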

The following figure shows the results of reducing c * ρ to 0.6 instead of 1.5.

[Figure: simulated outbreak with c * ρ = 0.6]

The peak of infection occurs later, on day 65, and at a lower count: 550,446. This is an example of how effective behavioral changes can be in reducing the severity of an outbreak.

In this example we changed only one parameter. But one thing that amazes me about infectious disease modelling is that (almost) every parameter driving the outbreak is susceptible to change given the right intervention. You can now use the R code to see how variations in other parameters affect the outbreak, and think about what kinds of interventions might produce such changes.

Suggested Readings

Vynnycky, E. & White, R. G. An introduction to infectious disease modelling. (Oxford University Press, 2010).

Garnett, G. An introduction to mathematical models in sexually transmitted disease epidemiology. Sex Transm Infect 78, 7–12 (2002).

Kretzschmar, M. Disease modeling for public health: added value, challenges, and institutional constraints. J Public Health Pol 41, 39–51 (2020).

Dr. Mark Bounthavong’s Talk on Formulating Good Research Questions

By Enrique M. Saldarriaga and Jacinda Tran

On April 16, 2020, the ISPOR Student Chapter at the University of Washington hosted a webinar on how to formulate good research questions featuring Dr. Mark Bounthavong, PhD, PharmD, MPH. He discussed the elements of compelling research questions, shared his process for formulating them, presented best practices, and provided recommendations for students at all stages of their careers.

Dr. Bounthavong is a graduate of the UW CHOICE Institute and a prolific researcher with several years of experience in HEOR. He currently serves as a health economist at the VA Health Economics Resource Center and a Research Affiliate at Stanford University, and his research interests include pharmacoeconomics, outcomes research, health economics, process and program evaluations, econometric methods, and evidence synthesis using Bayesian methods.

Our UW Student Chapter thanks Dr. Bounthavong for his insightful presentation and hopes our fellow researchers find this recording of his presentation to be a helpful resource.

Note: Dr. Bounthavong has authorized the publication of his talk in this post.

The Utility Function, Indifference Curves, and Healthcare

By Brennan T. Beal

Impetus For The Post

When I first learned about utility functions and their associated indifference curves, I was shown an intimidating figure that looked a bit like the image below. If you were lucky, you were shown a computer generated image. The less fortunate had a professor furiously scribbling them onto a board.

[Figure: example indifference curves. Source: https://opentextbc.ca/principlesofeconomics/back-matter/appendix-b-indifference-curves/]

A few things immediately concerned me: why are there multiple indifference curves for one function if it only represents one consumer? Why are the curves moving? And… who is Natasha? So, while answering my own questions, I thought sharing the knowledge would be helpful. This post will hopefully provide a better description than most of us were given, and by the end you will understand:

  1. What indifference curves are and what they represent
  2. How a budget constraint relates to these indifference curves and the overall utility function
  3. How to optimize utility within these constraints (if you’re brave)

For the scope of this post, I’ll assume you have some fundamental understanding of utility theory.

Click here to link to my original post to continue reading.

Expected Loss Curves

By Brennan T. Beal, PharmD

The Second Panel on Cost-Effectiveness in Health and Medicine recommends that model uncertainty be reflected by displaying the cost-effectiveness acceptability curve (CEAC) with the cost-effectiveness acceptability frontier (CEAF) overlaid (more on this can be seen here). However, on top of being relatively difficult to interpret, these graphical representations may miss a crucial part of decision-making: risk.

A risk-neutral approach to decision-making would mean choosing the strategy that is most likely to be cost-effective, regardless of what one stands to lose economically when that strategy is not cost-effective. However, we know that decision-makers are often not risk-neutral. With this in mind, selecting a strategy based solely on its probability of being cost-effective could expose a decision-maker to unnecessary risk. The strategy most likely to be cost-effective is not always the truly optimal decision; notably, the optimal decision should be thought of as the strategy with the lowest expected loss.

Consider the following example:

Let us suppose that you want to compare two strategies (Strategy A and Strategy B) to see which will be optimal for your company. Your head statistician informs you that Strategy A will be cost-effective 70% of the time, and in the 70 times out of 100 that it is cost-effective, you stand to gain $5 each time (i.e., you lose $0 in each of those 70 instances). She then tells you that every time you are wrong (30% of the time), you stand to lose $100. Your expected loss would be $30 (0.70 × $0 + 0.30 × $100). With that in mind, you also calculate the expected loss for Strategy B: it turns out to be only $7! ($7 is arbitrary for the sake of example.)
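The same logic extends to simulation output. Below is a small R sketch that computes expected loss from simulated net monetary benefit draws; the distributions are hypothetical, chosen only to illustrate the calculation, and are not values from this example.

    # Expected loss from simulated net monetary benefit (NMB) draws
    # (hypothetical distributions, for illustration only)
    set.seed(42)
    nmb_A <- rnorm(10000, mean = 100, sd = 150)   # NMB draws for Strategy A
    nmb_B <- rnorm(10000, mean = 130, sd = 300)   # NMB draws for Strategy B
    best  <- pmax(nmb_A, nmb_B)                   # best attainable NMB per draw
    c(expected_loss_A = mean(best - nmb_A),
      expected_loss_B = mean(best - nmb_B))       # optimal = lowest expected loss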

In this example, Strategy B would be favored on the CEAF because it has the lower expected loss, even though the CEAC would have shown it to be less likely to be cost-effective. So the CEAF at least tells us which strategy is optimal, but we are still left with a relatively confusing picture of cost-effectiveness.

Below are three hypothetical distributions of the incremental net benefit (INB) of Strategy B when compared to Strategy A. Simply stated, the INB curves can be thought of as the probabilities of monetary outcomes when comparing strategies.

[Figure: three hypothetical INB distributions for Strategy B vs. Strategy A]

This example is described in greater detail in my recent blog entry on the topic of expected loss. For each distribution above, Drug B is considered optimal, as it has the lowest expected loss in each scenario. However, in situations where the mean and median have opposite signs (as with the right-skewed blue curve above, with a mean INB of $90 vs. a median of -$75), considering only which strategy is most likely to be cost-effective will not give a decision-maker the optimal decision. For the blue curve above, Drug B has the lesser chance of being cost-effective (46%) but an expected loss of $271 vs. $361 for Drug A.

Expected loss curves (ELCs) account for both the probability that a strategy is not cost-effective and how drastic the consequences are in those scenarios. The ELC identifies the optimal strategy at each willingness-to-pay threshold and provides a much clearer picture of risk for more informed decision-making.
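As a rough illustration of how an ELC is computed, the following R sketch builds expected loss curves across willingness-to-pay thresholds from hypothetical probabilistic sensitivity analysis output; the cost and QALY draws below are made-up placeholders, not results from the post.

    # Expected loss curves from hypothetical PSA draws (illustrative values)
    set.seed(1)
    n    <- 5000
    cost <- cbind(A = rnorm(n, 1000, 200), B = rnorm(n, 1500, 300))
    qaly <- cbind(A = rnorm(n, 0.70, 0.10), B = rnorm(n, 0.78, 0.12))

    wtp <- seq(0, 100000, by = 1000)   # willingness-to-pay thresholds
    elc <- t(sapply(wtp, function(k) {
      nmb  <- k * qaly - cost                 # net monetary benefit per draw
      best <- pmax(nmb[, "A"], nmb[, "B"])    # best attainable NMB per draw
      c(A = mean(best - nmb[, "A"]),          # expected loss of choosing A
        B = mean(best - nmb[, "B"]))          # expected loss of choosing B
    }))
    matplot(wtp, elc, type = "l", lty = 1, col = c("red", "blue"),
            xlab = "Willingness to pay per QALY", ylab = "Expected loss")
    legend("topright", legend = c("Strategy A", "Strategy B"),
           col = c("red", "blue"), lty = 1)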

In my full blog entry, I cover:

  1. An in-depth explanation of ELCs;
  2. A working example of the mathematics and associated R code; and
  3. An interactive example at the end so you can see for yourself.


Are private insurers in the United States using cost-effectiveness analysis evidence to set their formularies?


By Elizabeth Brouwer

As prices rise for many prescription drugs in the United States (US), stakeholders have made efforts to curb the cost of medications, with varying degrees of success. One option put forth to contain drug spending is to connect drug coverage and cost-sharing to value, with cost-effectiveness analysis serving as one of the primary measures of drug value.

In 2010, a payer in the Pacific Northwest implemented a formulary in which cost-sharing for prescription drugs was driven by cost-effectiveness evidence. This value-based formulary (VBF) had 5 tiers, with cost-effectiveness ranges determining a patient’s copay amount, i.e., their level of cost-sharing (Table 1). There was an allotment for special cases in which a drug had no alternatives or treated a sensitive population, but the majority of drugs fell within these cost-effectiveness categories. A later analysis found that this VBF resulted in a net decrease in medication expenditures (including both payer and patient spending) of $8 per member per month, with no change in total medication or health services utilization. A 2018 literature review found slightly different (but still optimistic) results: value-based formulary design programs increased medication adherence without increasing health spending.

TABLE 1. FORMULARY DESIGN

Given the potential benefits of implementing value-based cost-sharing for prescription drugs, we wanted to know whether other private payers in the US were using cost-effectiveness evidence to set their drug formularies. If private payers were “moving toward value,” we would expect cost-sharing for high-value drugs to become cheaper relative to cost-sharing for low-value drugs (Figure 1).

FIGURE 1. OUR HYPOTHESIS

To test this hypothesis, we used claims data from a large portion of Americans with private, employer-sponsored health insurance to find the average out-of-pocket cost for each prescription drug in each year from 2010 to 2013. The collapsed claims data were then linked to the value designation (or “tier”) of each drug. We used a random-effects model to estimate how out-of-pocket costs changed each year in each cost-effectiveness category. (For more details on our methods, please check out our paper, which was recently published in PharmacoEconomics.)

The results revealed a few interesting trends.

Cost-sharing for prescription drugs was trending toward value in those years, but in a very specific way. Average cost-sharing across all tiers decreased over the time frame, and drugs with cost-effectiveness ratios below $10,000 per quality-adjusted life-year (QALY) were getting cheaper at a faster rate than those with cost-effectiveness ratios above that threshold. But there was no distinction in cost-sharing among drugs within those two groups, even after accounting for generic status.

Additionally, the movement toward value that we saw was largely the result of increased use of generic drugs rather than increased use of more cost-effective drugs. Splitting the data by generic status showed no shift toward higher-value drugs within either the generic or the brand-name category (Figure 2).

FIGURE 2. AVERAGE ICERs OVER TIME

Our results indicate that there is likely space in private drug formularies to further encourage the use of higher-value drug options and, conversely, to further discourage the use of lower-value drug options. This is particularly true for drugs with ICERs in the range of $10,000-$150,000 per QALY and above, where payers are largely ignoring differences in value.

One limitation of the analysis is that it was restricted to the years 2010-2013. Whether private payers in the US have increased their use of value information since the implementation of the Affordable Care Act in 2014, or in response to continually rising drug prices, is an important question for further research.

In conclusion, there is evidence indicating payers have an opportunity to implement new (or expand existing) VBF programs. These programs have the potential to protect patient access to effective medical treatments while addressing issues with affordability in the US health care system.