A target for harm reduction in HIV: reduced illicit drug use is associated with increased viral suppression.

By Lauren Strand

In the midst of a fatal drug epidemic and shifting drug policy in the United States, there is continued interest in the relationship between illicit drug use and negative health outcomes. Because substance use is difficult to characterize in individuals, studies often target sub-populations with more substantial and better-documented substance use profiles. One example is people living with HIV, in whom substance use has been associated with poor engagement in the HIV care continuum, lower likelihood of receiving antiretroviral therapy, reduced adherence to therapy, and increased disease-related mortality. Recently I collaborated on a study finding that reduction in the frequency of illicit opioid and methamphetamine use is associated with viral suppression among people living with HIV. Since viral suppression matters both for an individual’s health and for reducing disease transmission, this finding has important policy implications for harm reduction targeting substance use frequency.

This work was spearheaded by Robin Nance and Dr. Maria Esther Perez Trejo and advised by Dr. Heidi Crane and Dr. Chris Delaney, colleagues and mentors of mine during my time at the Collaborative Health Studies Coordinating Center (University of Washington). The publication in Clinical Infectious Diseases focuses on the longitudinal relationship between reducing illicit drug use frequency and a key biomarker in HIV, viral load (VL), among people living with HIV. This study used longitudinal data from the Centers for AIDS Research Network of Integrated Clinical Sites (CNICS) cohort. CNICS is an ongoing observational study consisting of more than 35,000 people living with HIV receiving primary care at one of eight sites (Seattle, San Francisco, San Diego, Cleveland, Chapel Hill, Birmingham, Baltimore, and Boston). Importantly, CNICS provides peer-reviewed open access to patient data including clinical outcomes, biological data, and patient-reported outcomes. This study also used individual-level data from four studies in the Criminal Justice Seek, Test, Treat, and Retain (STTR) collaboration. STTR is an effort that combines data from participating observational studies and trials to improve outcomes along the HIV care continuum for people involved in some part of the criminal justice system, for example, individuals recently released from jail who have struggled in the past with substance use disorders.

Within CNICS, substance use data were collected at clinical assessments via tablets approximately every six months, using instruments including the modified Alcohol, Smoking, and Substance Involvement Screening Test and the Alcohol Use Disorders Identification Test. Drug use was defined as frequency of use in the last 30 days and was further categorized according to longitudinal trends from baseline: abstinence (no use at baseline or follow-up), reduction in use without abstinence (use at baseline that declined at follow-up), and non-decreasing (similar or increased use). Drug categories were marijuana, cocaine/crack, methamphetamine, and heroin/other illicit opioids. Viral suppression was defined as an undetectable VL (≤400 copies/mL). Analytic models for each individual drug were joint longitudinal and survival models with time-varying substance use and adjustment for demographics, follow-up time, cohort entry year, and other concomitant substances, including alcohol use and binge drinking. These longitudinal models account for repeated measures and differential loss to follow-up (unbalanced panels).

Analyses (mean follow-up of 3.9 years) included approximately 12,000 people living with HIV with a mean age of 44, of whom 47% were white. Marijuana was widely used at baseline, though methamphetamine was also common. Relative to non-decreasing use, abstinence was associated with an increase in the odds of viral suppression ranging from 42% for marijuana to 118% for opioids (statistically significant for all four substance groups). Reduction in use without abstinence was associated with an increase of 65% for methamphetamine and 172% for opioids. The directionality and statistical significance of these results were maintained in sensitivity analyses with pooled fixed-effects meta-analysis using both the CNICS and STTR studies.

Ultimately, findings from this large longitudinal analysis suggest that abstinence from all drug groups increases the likelihood of viral suppression and, more interestingly, that reducing frequency of use without abstinence may also increase the likelihood of viral suppression for illicit opioids and methamphetamine. This finding may support the use of medication-assisted treatment (MAT) to reduce substance use, which could have the potential to improve disease-related outcomes for people living with HIV. However, this study did not evaluate why individuals may have increased or decreased their use of illicit substances (e.g. MAT or other treatment programs). In any case, reduction in use of illicit substances like opioids and methamphetamine (even when abstinence is not achieved) seems like a logical target for harm reduction interventions to improve overall health outcomes in people living with HIV and, likely, in the broader population.

One extension of this work would be to evaluate the relative value of programs targeting abstinence and substance use reduction among individuals with HIV compared with other programs. This, of course, requires a true causal relationship between substance use and viral load, which is likely mediated through ART adherence. A simple Markov model could include states for suppressed and not suppressed; however, because suppression reduces the risk of transmission, we might also incorporate shifting dynamics of the population of people living with HIV. Both transmission and individual outcomes were considered in a recent cost-effectiveness analysis of financial incentives for viral suppression authored by CHOICE alumna Dr. Blythe Adamson and Professors Josh Carlson and Lou Garrison. The main study finding was that paying individuals to take HIV medications was associated with health improvement, reduced transmission, and reduced healthcare costs. While this finding is fascinating, substance use may be an important contextual consideration. One previous study found that financial incentives did not improve viral suppression among substance users, and it is unclear how financial incentives may impact drug use and addiction. This is an active area of research and debate. Our study did not look at increases in substance use and viral suppression because we wanted to address the question around reduction and abstinence. Regardless, additional research on strategies to improve viral suppression is needed, as is a better understanding of the interplay between substance use behavior, other risk behaviors, adherence, and viral suppression among people living with HIV.
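The two-state Markov model suggested above can be sketched in a few lines. The transition probabilities below are arbitrary placeholders chosen for illustration, not estimates from any of the studies discussed here.

```python
import numpy as np

# Minimal two-state Markov cohort model: "suppressed" vs. "not suppressed".
# Transition probabilities are illustrative placeholders only.
P = np.array([
    [0.90, 0.10],  # from suppressed: stay suppressed / lose suppression
    [0.30, 0.70],  # from not suppressed: achieve suppression / stay unsuppressed
])

state = np.array([0.5, 0.5])  # initial cohort split between the two states
for _ in range(120):          # e.g. 120 monthly cycles (10 years)
    state = state @ P         # propagate the cohort one cycle forward

print(state)  # converges to the stationary distribution [0.75, 0.25]
```

An intervention that raises the probability of achieving suppression (here 0.30) would shift the long-run share suppressed upward; attaching costs and utilities to each state turns this into a simple cost-effectiveness model.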

Economic evaluation of New Rural Cooperative Medical Scheme in China

By Boshen Jiao

In China, while private health insurance is growing rapidly, government-funded basic health insurance still dominates the health care landscape. The Chinese government defines three types of beneficiaries: urban employees, urban residents, and rural residents. Accordingly, three main types of healthcare coverage plans were implemented in China: the Urban Employee Basic Medical Insurance, the Urban Resident Basic Medical Insurance, and the New Rural Cooperative Medical Scheme (NCMS).

The NCMS, which was initiated in 2003 and financed by both governments and individuals, was specifically designed for rural residents in China. In some sense, the Chinese government can feel proud: 98% of rural residents are covered, and this, undoubtedly, has been viewed as a great success. In particular, many of the newly covered individuals are considered poor and underserved, with a long history of struggling for access to basic health care.

However, the health and economic consequences of the NCMS might not be so pleasing. While its effectiveness in reducing mortality remains controversial based on current scientific evidence, the NCMS resulted in a 61% increase in out-of-pocket spending. Given that the NCMS has finite resources and impacts a large number of lives, it was critical to do a “thought experiment” and assess the cost-effectiveness of the NCMS. This is the subject of a paper I recently published with Dr. Jinjing Wu from the Asian Demographic Research Institute at Shanghai University and several coauthors from the Columbia Mailman School of Public Health. This paper, titled “The cost-effectiveness analysis of the New Rural Cooperative Medical Scheme in China,” was recently published in PLOS ONE.

Initial estimates of the NCMS’s effect on mortality were based on quasi-experimental studies that produced conflicting results. Some argued that the NCMS significantly decreased the death rate among the elderly in the eastern region, while another study using a nationally representative sample found no statistically significant effect. Although it was tempting to embrace the favorable results, our investigators decided to rely on the less-favorable study. We made this call mainly because its nationally representative sample was derived from the Disease Surveillance Point system, which is widely accepted as a reliable data source. We also hoped to draw evidence from the whole country, rather than focusing only on East China, where more economic resources and better healthcare are available. In addition to the effect on mortality, the NCMS had been shown to lower the risk of hypertension, which we also included as an effectiveness parameter in our model.

Because of uncertainty around its effect on rural residents’ survival, it is likely that the NCMS is not cost-effective. Based on our analysis, the NCMS buys one additional QALY for rural residents at a societal cost of 71,480 international (Int) dollars (costs and economic benefits were converted into 2013 Int dollars using the purchasing power parity exchange rates reported by the World Bank). This is not optimal for China. If we take three times per capita GDP as a fair willingness-to-pay threshold (Int$845,659), the NCMS had only a 33% chance of being cost-effective. The results were not surprising but were nonetheless disappointing. One possibility that we did not explore is that the elderly benefit the most from the NCMS. Using a nationally representative sample, however, the NCMS plausibly costs society a great deal while failing to produce sufficient health benefits.

We discussed the reasons why the NCMS appears to be inefficient. The current literature describes the NCMS as providing catastrophic coverage that mostly covers inpatient services. People may barely use preventive care or other necessary outpatient services, which plausibly leads to severe illness and costly complications in the future. Moreover, the NCMS is associated with high copayments, which restrict low-income rural residents’ access to health care and fail to reduce out-of-pocket expenses. We concluded that, while the Chinese government has indeed achieved great success in coverage expansion, the program’s efficiency should be a consideration for future improvements. To that end, cost-effectiveness analysis could be a useful tool when designing the plan. Our study presented an overall picture of the cost-effectiveness of the NCMS, in which the effect was estimated from an aggregate of data collected from different regions.
However, heterogeneity across regions, particularly at the county level, would need to be taken into account in future studies. County governments play a critical role in financing the NCMS, and their budget constraints have a fundamental effect on its design and implementation. As a consequence, the health outcomes of the NCMS may vary dramatically across counties. Our analysis would have been enriched, and would have provided more informative policy implications, if county-level data could have been obtained.

Updated estimates of cost-effectiveness for plaque psoriasis treatments

Along with co-authors from ICER and The CHOICE Institute, I recently published a paper in JMCP titled, “Cost-effectiveness of targeted pharmacotherapy for moderate-to-severe plaque psoriasis.” In this publication, we sought to update estimates of cost-effectiveness for systemic therapies useful in the population of patients with psoriasis for whom methotrexate and phototherapy are not enough. Starting in 1998, a class of drugs acting on Tumor Necrosis Factor alpha (TNFɑ) has been the mainstay of psoriasis treatment in this population. The drugs in this class, including adalimumab, etanercept, and infliximab, are still widely used due to their long history of safety and lower cost than some competitors. They are less effective than many newer treatments, however, particularly drugs inhibiting interleukin-17 such as brodalumab, ixekizumab, and secukinumab. This presents a significant challenge to decision-makers: is it better to initiate targeted treatment with a less effective, less costly option, or a more effective, costlier one? We found that the answer to this question is complicated by several current gaps in knowledge. First, there is some evidence that prior exposure to biologic drugs is associated with lower effectiveness of subsequent biologics.
This means that the selection of a first targeted treatment must balance cost considerations against the possibility of losing effectiveness in subsequent targeted treatments if the first is not effective. A related issue is that the duration of effectiveness (or “drug survival”) for each of these drugs is currently poorly characterized in the US context. Drug discontinuation and switching are significantly impacted by policy considerations such as requirements for step therapy and restrictions on dose escalation. Therefore, while there is a reasonable amount of research about drug survival in Europe, it is not clear how well this information translates to the US.

Another difficulty of performing cost-effectiveness research in this disease area is mapping utility weights onto trial outcomes. Every drug considered in our analysis reported percentage change in the Psoriasis Area Severity Index (PASI) over baseline. Because this is not an absolute measure, we had to assume that patients had comparable baseline PASI scores across studies; in other words, that a given percent improvement in PASI corresponded to a given increase in health-related quality of life. This means that if one study’s population had less severe psoriasis at baseline, we probably overstated the utility benefit of that drug.

In light of these gaps in knowledge, our analytic strategy was to model a simulated cohort of patients with incident use of targeted drugs. After taking a first targeted drug, they could be switched to a second targeted drug or cease targeted therapy. We limited patients to two lines of targeted treatment in order to keep the paper focused on the issue of initial treatment. What we found is a nuanced picture of cost-effectiveness in this disease area.
In agreement with older cost-effectiveness studies, we found that infliximab is the most cost-effective TNFɑ drug and, along with the PDE-4 inhibitor apremilast, is likely to be the most cost-effective treatment at lower willingness-to-pay (WTP) thresholds. However, at WTP thresholds of $150,000 per quality-adjusted life year and above, we found that the IL-17 inhibitors brodalumab and secukinumab become more likely to be the most cost-effective.

The ambiguity of these results suggests both the importance of closing the gaps in knowledge mentioned above and of considering factors beyond cost-effectiveness in coverage decisions. For example, apremilast is the only oral drug we considered and patients may be willing to trade lower effectiveness to avoid injections. Another consideration is that IL-17 inhibitors are contraindicated for patients with inflammatory bowel disease, suggesting that payers should make a variety of drug classes accessible in order to provide for all patients.

In summary, these results should be seen as provisional, not only because many important parameters are still uncertain, but also because several new drugs and biosimilars for plaque psoriasis are nearing release. Decision-makers will need to keep an eye on emerging evidence in order to make rational decisions about this costly and impactful class of drugs.

Economic Evaluation Methods Part I: Interpreting Cost-Effectiveness Acceptability Curves and Estimating Costs

One of the main training activities at the CHOICE Institute at the University of Washington is instructing graduate students in how to perform economic evaluations of medical technologies. In this blog post series, we give a brief overview of important economic evaluation concepts. Each concept is meant to stand alone. The first post in this two-part series describes how to interpret a cost-effectiveness acceptability curve (CEAC) and then delves into ways of costing a health intervention. The second part of the series will describe two additional concepts: how to develop and interpret cost-effectiveness frontiers and how multi-criteria decision analysis (MCDA) can be used in Health Technology Assessment (HTA).

Cost-Effectiveness Acceptability Curve (CEAC)

The CEAC is a way to graphically present decision uncertainty around the expected incremental cost-effectiveness of healthcare technologies. A CEAC is created using the results of a probabilistic analysis (PA).[1] PA involves simultaneously drawing a set of input parameter values by randomly sampling from each parameter’s distribution and then storing the model results. This is repeated many times (typically 1,000 to 10,000), resulting in a distribution of outputs that can be graphed on the cost-effectiveness plane. The CEAC reflects the proportion of results that are considered ‘favorable’ (i.e. cost-effective) in relation to a given cost-effectiveness threshold.

The primary goal of a CEAC graph is to inform coverage decisions among payers considering a new technology compared with one or more established technologies, which may include the standard of care. The CEAC enables a payer to determine, over a range of willingness-to-pay (WTP) thresholds, the probability that a medical technology is considered cost-effective in comparison to its appropriate comparator (e.g. usual care), given the information available at the time of the analysis. A WTP threshold is generally expressed in terms of societal willingness to pay for an additional life year or quality-adjusted life year (QALY) gained. In the US, WTP thresholds typically range between $50,000 and $150,000 per QALY.

The X-axis of a CEAC represents the range of WTP thresholds. The Y-axis represents the probability of each comparator being cost-effective at a given WTP threshold, and ranges between 0% and 100%. Thus, it simply reflects the proportion of simulated ICERs from the PA that fall below the corresponding thresholds on the X-axis.
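As an illustrative sketch of how the curve is computed from PA output (the simulated incremental costs and QALYs below are synthetic stand-ins, not results from any study cited here), one can use incremental net monetary benefit rather than ICER comparisons, so that simulations with negative incremental QALYs are handled correctly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in PA output: 5,000 simulated (incremental cost, incremental QALY)
# pairs for a hypothetical new technology vs. usual care.
n_sims = 5000
inc_cost = rng.normal(50_000, 15_000, n_sims)   # dollars
inc_qaly = rng.normal(0.6, 0.25, n_sims)        # QALYs

# At each WTP threshold, the CEAC point is the proportion of simulations
# with positive incremental net monetary benefit: NMB = WTP*dQALY - dCost.
wtp_grid = np.arange(0, 200_001, 10_000)
ceac = [(wtp * inc_qaly - inc_cost > 0).mean() for wtp in wtp_grid]

for wtp, p in zip(wtp_grid, ceac):
    print(f"WTP ${wtp:>7,}: P(cost-effective) = {p:.2f}")
```

Plotting `ceac` against `wtp_grid` yields the CEAC; repeating the calculation for each comparator and taking the upper envelope across comparators gives the acceptability frontier.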

Figure 1. The Cost-Effectiveness Acceptability Curve

Coyle, Doug, et al. “Cost-effectiveness of new oral anticoagulants compared with warfarin in preventing stroke and other cardiovascular events in patients with atrial fibrillation.” Value in Health 16.4 (2013): 498-506.

Figure 1 shows CEACs for five different drugs, making it easy for the reader to see that at the lower end of the WTP threshold range (i.e. $0–$20,000 per QALY), warfarin has the highest probability of being cost-effective (or in this case “optimal”). At WTP values above $20,000 per QALY, dabigatran has the highest probability of being cost-effective. All the other drugs have a lower probability of being cost-effective than warfarin and dabigatran at every WTP threshold. The cost-effectiveness acceptability frontier in Figure 1 follows along the top of all the curves and shows directly which of the five technologies has the highest probability of being cost-effective at various levels of the WTP threshold.

To the extent that the unit price of the technology influences the decision uncertainty, a CEAC can offer insights to payers as well as manufacturers as they consider a value-based price. For example, a lower unit price for the drug may lower the ICER and, all else equal, this increases the probability that the new technology is considered cost-effective at a given WTP threshold. Note that when new technologies are priced such that the ICER falls just below the WTP for a QALY (e.g. an ICER of $99,999 when the WTP is $100,000), the decision uncertainty tends to be substantial, often around 50%. If decision uncertainty is perceived to be ‘unacceptably high’, collecting further information to reduce it may be recommended. Depending on the drivers of decision uncertainty, for example in the case of stochastic uncertainty in the efficacy parameters, performance-based risk agreements (PBRAs) or managed entry schemes may be appropriate tools to manage the risk.

Cost estimates

The numerator of most economic evaluations for health is the cost of a technology or intervention. There are several ways to arrive at that cost, and the choice of method depends on the context of the intervention and the available data.
Two broadly categorized methods for costing are the bottom-up method and the top-down method. These methods, described below, are not mutually exclusive and may complement each other, although they often do not produce the same results.

The bottom-up method is also known as the ingredients approach or micro-costing. In this method, the analyst identifies all the items necessary to complete an intervention, such as medical supplies and clinician time, and adds them up to estimate the total cost. The main categories to consider when calculating costs via the bottom-up method are medical costs and non-medical costs. Medical costs can be direct, such as the supplies used to perform a surgery, or indirect, such as the food and bed used for inpatient care. Non-medical costs often include costs to the patient, such as transportation to the clinic or caregiver costs. The categories used when estimating the total cost of an intervention will depend on the perspective the analyst takes (patient, health system, or societal). The bottom-up approach can be completed prospectively or retrospectively and can be helpful for planning and budgeting. Because the method identifies and values each input, it allows for a clear breakdown of where dollars are being spent. To be accurate, however, one must be able to identify all the necessary inputs for an intervention and know how to value capital inputs like MRI machines or hospital buildings. The calculations may also become unwieldy on a very large scale. The bottom-up approach is often used in global health research, where medical programs or governmental agencies supply specific items to implement an intervention, or in simple interventions where there are only a few necessary ingredients.

The top-down estimation approach takes the total cost of a project and divides it by the number of service units generated.
In some cases, this is completed simply by looking at the budget for a program or intervention and dividing that total by the number of patients. The top-down approach is useful because it is a simple, intuitive measurement that captures the actual amount of money spent on a project and the number of units produced, particularly for large projects or organizations. Compared to the bottom-up approach, the top-down approach can be much faster and cheaper. The top-down approach can only be used retrospectively, however, and may not allow for a breakdown of how the money was spent or identify variation between patients.

While the final choice will depend on several factors, it makes sense to think through (or model) which of the cost inputs are likely to be most impactful on the model results. For example, the costs of lab tests may be estimated most accurately by a bottom-up costing approach. However, if these lab costs are likely to be a fraction of the cost of treatment, say a million-dollar cure for cancer, then going through the motions of a bottom-up approach may not be the most efficient way to get your PhD project done in time. In other cases, a bottom-up approach may provide crucial insights that move the needle on the estimated cost-effectiveness of medical technologies, particularly in settings where a lack of existing datasets limits the potential of cost-effectiveness studies to inform decisions on the allocation of scarce healthcare resources.

[1] Fenwick, Elisabeth, Bernie J. O’Brien, and Andrew Briggs. “Cost-effectiveness acceptability curves – facts, fallacies and frequently asked questions.” Health Economics 13.5 (2004): 405-415.

Estimating elasticities from linear regressions

By Enrique Saldarriaga

This post aims to show how elasticities can be estimated directly from linear regressions.
Elasticity measures the association between two numeric variables in percentage terms: it is interpreted as the percentage change in one variable associated with a 1% change in another. Elasticities have served economics for a long time, chiefly because they allow comparisons between very different settings where changes in levels (e.g. dollars) are difficult to interpret. Take for instance a company that produces both cars and chocolate bars. If the company wants to know how changes in the prices of both products would impact their demand, in order to decide which price to increase and which to maintain, a comparison in levels would be a mess. A $100 change would mean nothing for the demand for cars but would destroy demand for chocolates. Similarly, a decrease of 100K buyers in the demand for chocolate bars is probably just a dent, but the same amount could represent a significant portion of the car market. Changes in percentages are a way to standardize change and its consequences. In health economics, elasticities are becoming more common because they allow fair comparisons between variables whose behavior is heterogeneous across contexts.

The most common elasticities are price and income elasticities. The income elasticity of any variable is the percentage change in that variable associated with a 1% change in income; the price elasticity is the percentage change in a variable associated with a 1% change in its own price. In economics, that variable would usually be the demand for a given good. In health economics, that variable could be any number of things.

For example, let’s say we want to estimate the income elasticity of BMI. The elasticity would be expressed as:

$\displaystyle \epsilon_I = \dfrac{\Delta \% BMI}{\Delta \% Income} = \dfrac{\Delta BMI}{\Delta Income}* \dfrac{Income}{BMI}$

where ∆ stands for variation and shows the change in BMI or Income from one point to another in their joint distribution. To make this change infinitesimal, and therefore estimate the elasticity over the whole distribution, we can express those changes with differentials:

$\displaystyle \epsilon_I = \dfrac{dBMI}{dIncome} * \dfrac{Income}{BMI}$

Now, to obtain elasticities directly from linear regressions we should use the logarithmic forms of the variables:

$\displaystyle ln(BMI) = \beta_0 + \beta_1 * ln(Income) = f(Income)$

BMI is thus a function of Income, which can be expressed as:

$\displaystyle f(Income) = BMI = e^{ \beta_0 + \beta_1 * ln(Income)}$

To find the change in BMI associated with a change in Income, we differentiate this function with respect to Income. Given the form of the function, we use the chain rule:

$\displaystyle f(x)=g(h(x)) ; f'(x)=g'(h(x))*h'(x)$

Where: $\displaystyle g(h) = e^h , h(Income) = \beta_0 + \beta_1 *ln(Income)$

$\displaystyle g'(h)= e^h ,h'(Income) = \beta_1 * \dfrac{1}{Income}$

Then:

$\displaystyle f'(Income) = e^{ \beta_0 + \beta_1 * ln(Income)} * \beta_1 * \dfrac{1}{Income}$

$\displaystyle \dfrac{\partial BMI}{\partial Income} = e^{ln(BMI)} * \beta_1 * \dfrac{1}{Income} = BMI * \beta_1 * \dfrac{1}{Income}$

$\displaystyle \beta_1 = \dfrac{ \partial BMI}{ \partial Income} * \dfrac{Income}{BMI}$

Thus, the β_1 coefficient is the estimate of the income elasticity. The same procedure can be applied to find the price elasticity. It is interesting to note that if only the independent variable were in logarithmic form, the derivation would be simpler, and the coefficient would be:

$\beta_1 = \dfrac{ \partial BMI}{ \partial Income} * \dfrac{Income}{1}$

Then, to estimate the elasticity it would be necessary to divide the coefficient by a measure of BMI, probably the mean to account for the whole distribution, as Sisira et al. did in their paper. In this case, however, it would be necessary to carefully select a good average measure. This method would be useful if we prefer to interpret the other covariates’ coefficients in terms of BMI rather than ln(BMI).
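As a quick numerical check of the log-log result (with synthetic data and an arbitrarily chosen true elasticity of 0.2, since no particular dataset is implied here), regressing ln(BMI) on ln(Income) recovers the elasticity as the slope:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data with a known income elasticity of BMI of 0.2
# (an arbitrary value chosen for illustration only).
n = 10_000
income = rng.lognormal(mean=10, sigma=0.5, size=n)
bmi = np.exp(1.5 + 0.2 * np.log(income) + rng.normal(0, 0.05, n))

# OLS of ln(BMI) on ln(Income): the slope is the elasticity estimate.
slope, intercept = np.polyfit(np.log(income), np.log(bmi), deg=1)
print(round(slope, 2))  # ~0.2
```

The same fit in a full regression package (with additional covariates) would leave the interpretation of the ln(Income) coefficient unchanged.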

Commonly Misunderstood Concepts in Pharmacoepidemiology

By Erik J. Landaas, MPH, PhD Student and Naomi Schwartz, MPH, PhD Student

Epidemiologic methods are central to the academic and research endeavors at the CHOICE Institute. The field of epidemiology fosters the critical thinking required for high-quality medical research. Pharmacoepidemiology is a sub-field of epidemiology and has been around since the 1970s. One of the driving forces behind the establishment of pharmacoepidemiology was the thalidomide disaster. In response to this tragedy, laws were enacted that gave the FDA authority to evaluate the efficacy of drugs. In addition, drug manufacturers were required to conduct clinical trials to provide evidence of a drug’s efficacy. This spawned a new and important body of work surrounding drug safety, efficacy, and post-marketing surveillance.[i]

In this article, we break down three of the more complex and often misunderstood concepts in pharmacoepidemiology: immortal time bias, protopathic bias, and drug exposure definition and measurement.

Immortal Time Bias

In pharmacoepidemiology studies, immortal time bias typically arises when the determination of an individual’s treatment status involves a delay or waiting period during which follow-up time is accrued. Immortal time is a period of follow-up during which, by design, the outcome of interest cannot occur. For example, the finding that Oscar winners live longer than non-winners is a result of immortal time bias: in order for an individual to win an Oscar, he/she must live long enough to receive the award. A pharmacoepidemiology example of this is depicted in Figure 1. A patient who receives a prescription may appear to survive longer because he/she must live long enough to receive the prescription, while a patient who does not receive a prescription has no survival requirement. The most common way to avoid immortal time bias is to use a time-varying exposure variable. This allows subjects to contribute both unexposed person-time (during the waiting period) and exposed person-time.
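The time-varying exposure fix can be sketched as follows; the patient dates and the `split_person_time` helper are hypothetical constructs for illustration, not part of any study cited here:

```python
from datetime import date

# Hypothetical cohort: follow-up starts at cohort entry; exposure begins
# at the first dispensing. Classifying the pre-dispensing wait as
# unexposed person-time avoids immortal time bias.
def split_person_time(entry, first_rx, end):
    """Return (unexposed_days, exposed_days) for one patient."""
    if first_rx is None or first_rx > end:
        return (end - entry).days, 0          # never exposed during follow-up
    return (first_rx - entry).days, (end - first_rx).days

# Patient dispensed a statin 90 days after cohort entry:
unexposed, exposed = split_person_time(
    date(2020, 1, 1), date(2020, 3, 31), date(2021, 1, 1)
)
print(unexposed, exposed)  # 90 276
```

The 90-day wait contributes to the unexposed denominator rather than being misclassified as exposed (and outcome-free) time.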

Figure 1. Immortal Time Bias

Lévesque, Linda E., et al. “Problem of immortal time bias in cohort studies: example using statins for preventing progression of diabetes.” BMJ 340 (2010): b5087.

Protopathic Bias or Reverse Causation

Protopathic bias occurs when a drug of interest is initiated to treat symptoms of the disease under study before that disease is diagnosed. For example, early symptoms of inflammatory bowel disease (IBD) are often consistent with the indications for prescribing proton pump inhibitors (PPIs). Thus, many individuals who develop IBD have a history of PPI use. A study investigating the association between PPIs and subsequent IBD would likely conclude that taking PPIs causes IBD when, in fact, the IBD was present (but undiagnosed) before the PPIs were prescribed. This scenario is illustrated by the following steps:

• Patient has early symptoms of an underlying disease (e.g. acid reflux)
• Patient goes to his/her doctor and gets a drug to address symptoms (e.g. PPI)
• Patient is diagnosed with IBD (months or even years later)

It is easy to conclude from the above scenario that PPIs cause IBD; however, the acid reflux was actually a manifestation of underlying IBD that had not yet been diagnosed. Protopathic bias arises in this case because of the lag time between first symptoms and diagnosis. One effective way to address protopathic bias is to exclude exposures during the prodromal period of the disease of interest.
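The lag-time exclusion just described can be sketched as a small helper. This is a minimal illustration with hypothetical dates; the length of the prodromal window is an assumption and must be chosen per disease.

```python
from datetime import date, timedelta

LAG = timedelta(days=365)  # assumed prodromal window; disease-specific choice

def lagged_exposures(rx_dates, dx_date, lag=LAG):
    """Drop dispensings falling in the prodromal window before diagnosis."""
    return [d for d in rx_dates if d <= dx_date - lag]

rx = [date(2018, 3, 1), date(2019, 11, 15), date(2020, 4, 2)]
dx = date(2020, 6, 30)
print(lagged_exposures(rx, dx))  # only the 2018 dispensing survives the lag
```

The two dispensings within a year of diagnosis are discarded, so symptomatic (prodromal) prescribing no longer masquerades as a cause of the disease.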

Drug Exposure Definition and Measurement

Defining and classifying exposure to a drug is critical to the validity of pharmacoepidemiology studies. Most pharmacoepidemiology studies use proxies for drug exposure, because it is often impractical or impossible to measure directly (e.g. observing a patient take a drug, monitoring blood levels). In lieu of actual exposure data, exposure ascertainment is typically based on medication dispensing records. These records can be ascertained from electronic health records, pharmacies, pharmacy benefit managers (PBMs), and other available healthcare data repositories. Some of the most comprehensive drug exposure data are available among Northern European countries and large integrated health systems such as Kaiser Permanente in the United States. Some strengths of using dispensing records to gather exposure data are:

• Easy to ascertain and relatively inexpensive
• No primary data collection
• Often available for large sample sizes
• Can be population based
• No recall or interviewer bias
• Linkable to other types of data such as diagnostic codes and labs

Limitations of dispensing records as a data source include:

• Completeness can be an issue
• Usually does not capture over-the-counter (OTC) drugs
• Dispensing does not guarantee ingestion
• Often lacks indication for use
• Must make some assumptions to calculate dose and duration of use

Some studies collect drug exposure data using self-report methods (e.g., interviews or surveys). These methods are useful when the drug of interest is sold OTC and thus not captured by dispensing records. However, self-reported data are subject to recall bias and require additional considerations when interpreting results. Alternatively, some large epidemiologic studies ask patients to bring all of their medications to their study interviews (the “brown bag” method), which can be a more reliable way of collecting medication information than self-report alone.
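The assumption-laden step of deriving duration of use from dispensing records can be made concrete. The sketch below, using hypothetical fills and an assumed 30-day grace period between refills, collapses (fill date, days supply) records into continuous-use episodes; the grace period itself is exactly the kind of assumption the bullet list above warns about.

```python
from datetime import date, timedelta

GRACE = timedelta(days=30)  # assumed allowable refill gap; an analytic choice

def exposure_episodes(dispensings, grace=GRACE):
    """Collapse (fill_date, days_supply) records into continuous-use episodes.

    Consecutive fills whose gap is within `grace` are merged into one episode;
    a larger gap starts a new episode.
    """
    episodes = []
    for fill, days in sorted(dispensings):
        start, end = fill, fill + timedelta(days=days)
        if episodes and start <= episodes[-1][1] + grace:
            episodes[-1][1] = max(episodes[-1][1], end)  # extend current episode
        else:
            episodes.append([start, end])
    return [(s, e) for s, e in episodes]

fills = [(date(2020, 1, 1), 30), (date(2020, 2, 5), 30), (date(2020, 6, 1), 30)]
print(exposure_episodes(fills))
# first two fills merge (gap within grace); the June fill starts a new episode
```

Widening or narrowing the grace period changes who counts as continuously exposed, which is why such parameters are usually varied in sensitivity analyses.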

It is also important to consider the risk of misclassification of exposure. When interpreting results, remember that differential misclassification (misclassification that differs between those with and without the disease) can bias the measure of association in either direction, away from or toward the null. In contrast, non-differential misclassification (unrelated to the occurrence or presence of disease) shifts the measure of association toward the null. For further guidance on defining drug exposure, see Figure 2.

Figure 2. Checklist: Key considerations for defining drug exposure

Velentgas, Priscilla, et al., eds. Developing a protocol for observational comparative effectiveness research: a user’s guide. Government Printing Office, 2013.
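The attenuating effect of non-differential misclassification described above can be shown numerically. With hypothetical 2x2 counts and a true odds ratio of 4, recording only 80% of truly exposed subjects as exposed, equally in cases and controls, pulls the observed odds ratio toward 1.

```python
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a=exposed cases, b=exposed controls,
    c=unexposed cases, d=unexposed controls."""
    return (a * d) / (b * c)

# True (hypothetical) table: OR = 4.0
a, b, c, d = 200, 100, 100, 200
true_or = odds_ratio(a, b, c, d)

# Non-differential misclassification: 80% exposure sensitivity in BOTH
# disease groups, so 20% of truly exposed are recorded as unexposed.
sens = 0.8
a2, c2 = a * sens, c + a * (1 - sens)
b2, d2 = b * sens, d + b * (1 - sens)
obs_or = odds_ratio(a2, b2, c2, d2)

print(true_or, round(obs_or, 2))  # observed OR shrinks toward 1
```

Making the sensitivity differ between cases and controls (differential misclassification) would instead move the observed odds ratio unpredictably, in either direction.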

As alluded to above, pharmacoepidemiology is a field with complex research methods. We hope this article clarifies these three challenging concepts.


Vaccine education in Africa: unmet needs and current programs

Written by Karen Guo.

Disease burden:

Childhood immunization – inducing immunity by administering a vaccine – almost guarantees protection from many major diseases. Childhood vaccination prevents 2 million deaths per year worldwide and is widely considered to be overwhelmingly beneficial by the scientific community. However, 2.5 million deaths a year continue to be caused by vaccine-preventable diseases, mainly in Africa and Asia among children less than 5 years old.

Demand-related factors, such as parents’ knowledge about vaccination and immunization and their attitudes towards them, are also likely to influence uptake. What remains unclear, however, is whether people’s attitudes are more strongly influenced by the perceived benefits of vaccination or by the perceived risks of not being vaccinated.

Education unmet needs:

Parents’ knowledge about vaccinations is poor, and the knowledge they do have is often wrong. It appears that there is no association between parents’ knowledge and vaccination coverage rates, and the public seems to accept vaccination despite limited knowledge about it. One thing is clear, however: when parents resist vaccination, it is because they want to protect their children from harm. In 2003, political and religious leaders in three Nigerian states boycotted a WHO polio vaccination campaign, claiming that the vaccine caused sterility and AIDS. Similarly, certain Hindu and Muslim groups in India have long held the belief that vaccination is a covert method of family planning, primarily targeting Muslims. The greater acceptance of vaccination found among Javanese migrants as opposed to Acehnese villagers in the same area has been attributed to the former’s more positive cultural attitudes towards health; both groups were found to have an equally poor understanding of vaccination and health in general. Similarly, followers of the Aga Khan in Pakistan were found to be receptive to ‘biomedical’ or ‘western’ medicine and reasoning despite the fact that as a group they were largely illiterate and understood little about vaccination. Cultural receptivity to perceived modernity and education, as well as trust in health workers, were considered to be the most important factors influencing attitudes. In short, knowing little about vaccination does not necessarily translate into negative attitudes towards it; factors such as trust (e.g. in health-care providers or ‘western’ medicine) and culture may be more influential. The impact of high levels of knowledge on subsequent attitudes towards vaccination is unknown.

The fundamental question is whether or not resources should be invested in improving parents’ knowledge of and attitudes towards vaccination. Although the evidence is unclear, it is commonly believed that strengthening advocacy, communication and social mobilization will enhance informed and willing participation in vaccination programs, and that vaccination strategies are likely to be more successful if they are based on an understanding of sociocultural behavior. Yet these approaches are not routinely incorporated into vaccination policy. Since factors influencing demand vary greatly by region and context, findings from one population cannot always be extrapolated to another. Thus, simple operational research into local knowledge and attitudes should become an essential part of every vaccination campaign. Current research into parents’ knowledge and attitudes towards childhood vaccination is disproportionately low considering the enormous scale and relevance of this issue. In order for such efforts to be successful, parents must be empowered to freely and clearly express their attitudes towards childhood vaccination.

Current vaccine education program in Africa:

In line with the principles and areas of work outlined in the Global Vaccine Action Plan, and under the advice of both the Strategic Advisory Group of Experts (SAGE) on immunization and the Task Force on Immunization (TFI), WHO/AFRO is taking steps to address vaccine-preventable diseases by implementing strategies for reaching all eligible persons with effective vaccines. One of these strategies is the African Vaccination Week (AVW), which provides a platform for Member States to speak with one collective voice, advocating for immunization as a public health priority in the Region and working to achieve high immunization coverage. The overarching objective of the initiative is to reach people with limited access to regular health services, thereby closing the gaps in immunization.

The over-arching slogan of AVW is “Vaccinated communities, Healthy communities”. Each year, a suitable theme is chosen to reflect current regional priorities and the public health realities. The first AVW was celebrated in April 2011 under the theme “Put mothers and children first – Vaccinate and stop polio now”. That year, 40 countries in the Region organized activities to celebrate the event. In subsequent years, countries have continued to conduct vaccination campaigns, catch up vaccination activities, conduct health promotion activities, and provide other child survival interventions.

As an example of one AVW in particular, the 2016 edition was held under the theme “Close the immunization gap. Stay polio free!” The kick-off occurred on April 24, 2016, the same day as the kick-off of World Immunization Week (WIW) and of vaccination weeks in the other five WHO regions. That year’s AVW also coincided with the globally synchronized switch from trivalent oral polio vaccine (tOPV) to bivalent OPV (bOPV), which took place from April 17 to May 1, 2016. It also followed two important events of the previous six months: Nigeria’s removal from the list of polio-endemic countries in September 2015 (highlighting the need for countries to stay vigilant in the fight against polio) and the first ever Ministerial Conference on Immunization in Africa (MCIA), held in February 2016 in Addis Ababa, Ethiopia. The Regional launch was held in Ganta, Nimba County, Liberia on April 25, 2016 during a colorful function chaired by the Deputy Minister of Health Services in the presence of high-level officials and community leaders. The event was combined with the celebration of World Malaria Day and the introduction of two new vaccines into the national immunization schedule: rotavirus vaccine in the entire country and human papillomavirus (HPV) vaccine as a demonstration project. The Ministry of Health, with support from the Ministry of Education, UNICEF, and WHO, conducted a national quiz on immunization; information on immunization was sent to schools all over the country to equip students with knowledge prior to the quizzes held at district, regional, and national levels. The media were engaged to disseminate messages and information, especially on polio, hepatitis B vaccination, and the HPV vaccine. Advocacy for mothers to attend antenatal care and to deliver at health facilities was heightened, and mothers were also tested for HIV to prevent mother-to-child transmission.

Integration of other interventions with immunization during AVW in the African Region is common and has shown potential for improving immunization coverage, as this dedicated period is used both for catch-up campaigns and for periodic intensified routine immunization. While its impact calls for further examination, AVW is a promising platform for integrated delivery of health interventions to people with limited access to regular health services.