Trends for Performance-based Risk-sharing Arrangements

Author: Shuxian Chen


When considering the approval of new drugs, devices, and diagnostic products, there is always a tension between making a product's benefits available to more people and collecting more information about it in trials. The restrictive design of randomized controlled trials (RCTs) means that their estimates of effectiveness don't always hold in the real world, and they are unlikely to detect long-term adverse events. This uncertainty and risk make it hard for payers to make coverage decisions for new interventions.

Performance-based risk-sharing arrangements (PBRSAs), also known as patient access schemes (PAS), managed entry arrangements, and coverage with evidence development (CED), help to reduce such risk. These are arrangements between a payer and a pharmaceutical, device, or diagnostic manufacturer where the price level and/or nature of reimbursement is related to the actual future performance of the product in either the research or ‘real world’ environment rather than the expected future performance [1].

I recently wrote a review paper with CHOICE faculty Josh Carlson and Lou Garrison that provides an update on trends in PBRSAs both in the US and globally. Using the University of Washington Performance-Based Risk-Sharing Database, which contains information obtained by searching Google, PubMed, and government websites, we identified 437 eligible cases between 1993 and 2016. Eighteen cases have been added to the database in 2017 and 2018. Seventy-two cases are from the US.

Figure 1. Eligible cases between 1993 and 2016, by country


Australia, Italy, the US, Sweden, and the UK are the five countries with the largest numbers of PBRSAs (the distribution of cases by country is shown in Figure 1). Except for the US, cases from these countries were identified through their government programs: the Pharmaceutical Benefits Scheme (PBS) in Australia, the Italian Medicines Agency (AIFA) in Italy, the Swedish Dental and Pharmaceutical Benefits Agency (TLV) in Sweden, and the National Institute for Health and Care Excellence (NICE) in the UK. These single-payer systems have more power to negotiate drug prices with manufacturers than payers do in the US.

Cases in the US are more heterogeneous, with both public (federal/state-level) and private payers involved. The US Centers for Medicare and Medicaid Services (CMS) accounts for 25 (37%) of the 72 US cases. Among these, most arrangements involve medical devices and diagnostic products and originate in the CED program at CMS [2]. This program is used to generate additional data to support national coverage decisions for potentially innovative medical technologies and procedures, as coverage for patients is provided only in the context of approved clinical studies [3]. For pharmaceuticals, there have been few PBRSAs between CMS and manufacturers, with no cases established between 2006 and 2016. In August 2017, however, Novartis announced a first-of-its-kind collaboration with CMS: a PBRSA for Kymriah™ (tisagenlecleucel), its novel cancer treatment for B-cell acute lymphoblastic leukemia that uses the body's own T cells to fight cancer [4]. The arrangement allows payment only when patients respond to Kymriah™ by the end of the first month. It can be categorized as performance-linked reimbursement (PLR), because the manufacturer is reimbursed only if the patient meets a pre-specified clinical outcome measure. This recent collaboration may lead to a larger number and greater variety of PBRSAs between pharmaceutical manufacturers and CMS.

Please refer to our article for more detailed analyses regarding the trends in PBRSAs.

References:

[1] Carlson JJ, Sullivan SD, Garrison LP, Neumann PJ, Veenstra DL. Linking payment to health outcomes: a taxonomy and examination of performance-based reimbursement schemes between healthcare payers and manufacturers. Health Policy. 2010;96(3): 179–90. doi:10.1016/j.healthpol.2010.02.005.

[2] CMS. Coverage with Evidence Development. Available at: https://www.cms.gov/Medicare/Coverage/Coverage-with-Evidence-Development/

[3] Neumann PJ, Chambers J. Medicare's reset on 'coverage with evidence development'. Health Affairs Blog. 2013 Apr 1. Available at: http://healthaffairs.org/blog/2013/04/01/medicares-reset-on-coverage-with-evidence-development/

[4] Novartis. Novartis receives first ever FDA approval for a CAR-T cell therapy, Kymriah(TM) (CTL019), for children and young adults with B-cell ALL that is refractory or has relapsed at least twice. 2017. Available at: https://www.novartis.com/news/media-releases/novartis-receives-first-ever-fda-approval-car-t-cell-therapy-kymriahtm-ctl019

A visual primer to instrumental variables

By Kangho Suh

When assessing the possible efficacy or effectiveness of an intervention, the main objective is to attribute changes you see in the outcome to that intervention alone. That is why clinical trials have strict inclusion and exclusion criteria, and frequently use randomization to create “clean” populations with comparable disease severity and comorbidities. By randomizing, the treatment and control populations should match not only on observable (e.g., demographic) characteristics, but also on unobservable or unknown confounders. As such, the difference in results between the groups can be interpreted as the effect of the intervention alone and not some other factors. This avoids the problem of selection bias, which occurs when the exposure is related to observable and unobservable confounders, and which is endemic to observational studies.

In an ideal research setting (ethics aside), we could clone individuals, give one clone the new treatment and the other a placebo or standard of care, and assess the change in health outcomes. Or we could give an individual the new treatment, study its effect, go back in time through a DeLorean, and repeat the process with the same individual, this time with a placebo or other control intervention. Obviously, neither of these is a practical option. Currently, the best strategy is the randomized controlled trial (RCT), but financial, ethical, and time constraints limit the number of interventions that can be studied this way. Also, the exclusion criteria necessary to arrive at these "clean" study populations sometimes mean that they do not represent the real-world patients who will use these new interventions.

For these reasons, observational studies that use electronic health records, registries, or administrative claims databases present an attractive alternative to RCTs. Observational studies have their own drawbacks, however, such as the selection bias described above. We try to address some of these issues by controlling for covariates in statistical models or by using propensity scores to create comparable study groups with similar distributions of observable covariates (check out the blog entry on using propensity scores by my colleague Lauren Strand). Another method that has been gaining popularity in health services research is an econometric technique called instrumental variables (IV) estimation. In fact, two of my colleagues and the director of our program (Mark Bounthavong, Blythe Adamson, and Anirban Basu, respectively) wrote a primer on the use of IV here.

In their article, Mark, Blythe, and Anirban explain the endogeneity issue that arises when the treatment variable is associated with the error term in a regression model. For those of you who might still be confused (I certainly was for a long time!), I'll use a simple figure that I found in a textbook1 to explain how IVs work.
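To put the problem in symbols (my notation, not taken from the primer, using the same X, Y, Z, and ϵ labels as the figure below):

```latex
% Outcome model: treatment X, outcome Y, error term \varepsilon.
% Selection bias means X and \varepsilon are correlated, so OLS is biased:
Y = \beta_0 + \beta_1 X + \varepsilon, \qquad \operatorname{Cov}(X, \varepsilon) \neq 0

% A valid instrument Z must satisfy two conditions:
%   relevance:  \operatorname{Cov}(Z, X) \neq 0
%   exclusion:  \operatorname{Cov}(Z, \varepsilon) = 0
```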

Instrumental Variables

1. Kennedy, Peter. A Guide to Econometrics, 6th ed. Oxford: Blackwell Publishing, 2008, p. 147.

The figure uses circles to represent the variation within variables that we are interested in: each circle represents the treatment variable (X), outcome variable (Y), or the instrumental variable (Z). First, focus on the treatment and outcome circles. We know that some amount of the variability in the outcome is explained by the treatment variable (i.e. treatment effect); this is indicated by the overlap between the two circles (red, blue, and purple). The remaining green section of the outcome variable represents the error (ϵ) obtained with a statistical model. However, if treatment and ϵ are not independent due to, for example, selection bias, some of the green spills over to the treatment circle, creating the red section. Our results are now biased, because a portion (red) of the variation in our outcome is attributed to both treatment and ϵ.

Enter the instrumental variable, Z. It must meet two criteria: 1) it must be strongly correlated with treatment (large overlap between instrument and treatment), and 2) it must not be correlated with the error term (no overlap with the red or green areas). In the first stage, we regress treatment on the instrument and obtain the predicted values of treatment (orange and purple). We then regress the outcome on the predicted values of treatment to get the treatment effect (purple). Because we have used only the exogenous part of our treatment X to explain Y, our estimate is unbiased.
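To see the two stages in action, here is a minimal R sketch on simulated data; the simulated variables and the use of the AER package's ivreg() function are my own illustration rather than anything from the primer.

```r
# Minimal two-stage least squares (2SLS) sketch on simulated data.
# Assumes the AER package is installed: install.packages("AER")
library(AER)

set.seed(42)
n <- 5000
u <- rnorm(n)                        # unobserved confounder (part of the error)
z <- rnorm(n)                        # instrument: affects treatment, not outcome directly
x <- 0.8 * z + 0.6 * u + rnorm(n)    # treatment is endogenous (correlated with u)
y <- 1.0 * x + 1.5 * u + rnorm(n)    # true treatment effect = 1.0

# Naive OLS is biased because x shares variation with the error (the "red" area)
coef(lm(y ~ x))["x"]

# Stage 1: regress treatment on the instrument; Stage 2: regress outcome on predicted treatment
x_hat <- fitted(lm(x ~ z))
coef(lm(y ~ x_hat))["x_hat"]

# One-step IV estimate; unlike the manual second stage, this also gives correct standard errors
summary(ivreg(y ~ x | z))
```

The manual second stage reproduces the IV point estimate but not its standard errors, which is why a dedicated routine such as ivreg() is preferable in practice.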

Now that you can see the benefit of IV estimation visually, maybe you can also see some of its drawbacks. The amount of information used to estimate the treatment effect becomes much smaller: it shrinks from the full overlap between treatment and outcome (red, blue, and purple) to just the purple area. As a result, while the IV estimator may be unbiased, it has more variance than a simple OLS estimator. One way to mitigate this limitation is to find an instrument that is highly correlated with treatment, making the purple area as large as possible.

A more concerning limitation with IV estimation is the interpretability of results, especially in the context of treatment effect heterogeneity. I will write another blog post about this issue and how it can be addressed if you have a continuous IV, using a method called person-centered treatment (PeT) effects that Anirban created.  Stay tuned!

ISPOR’s Special Task Force on US Value Assessment Frameworks: A summary of dissenting opinions from four stakeholder groups

By Elizabeth Brouwer



The International Society for Pharmacoeconomics and Outcomes Research (ISPOR) recently published an issue of its Value in Health (VIH) journal featuring reports on value assessment frameworks. This marks the culmination of a Spring 2016 initiative "to inform the shift toward a value-driven health care system by promoting the development and dissemination of high-quality, unbiased value assessment frameworks, by considering key methodological issues in defining and applying value frameworks to health care resource allocation decisions" (VIH Editor's note). The task force summarized and published its findings in a seven-part series touching on the most important facets of value assessment. Several faculty members of the CHOICE Institute at the University of Washington authored portions of the report, including Louis Garrison, Anirban Basu, and Scott Ramsey.

In the spirit of open dialogue, the journal also published commentaries representing the perspectives of four stakeholder groups: payers (in this case, private insurance groups), patient advocates, academia, and the pharmaceutical industry. While supportive of value assessment in theory, each commentary critiqued aspects of the task force’s report, highlighting the contentious nature of value assessment in the US health care sector.

Three common themes emerged, however, among the dissenting opinions:

  1. Commenters saw CEA as a flawed tool, on which the task force placed too much emphasis

All commentaries except the academic perspective bemoaned the task force's reliance on cost-effectiveness analysis. Payers, represented in an interview with two private insurance company CEOs, claimed that they do not have a choice about whether to cover most new drugs. If CEA is useful at all, then, it is in informing how payers distinguish between drugs of the same class. The insurers went on to say that they are more interested in the way CEA can highlight high-value uses for new drugs, as most are expected to be expensive regardless.

Patient advocates also saw CEA as a limited tool and were opposed to any value framework overly dependent on the cost-per-QALY paradigm. Their commentary compared CEAs to clinical trials: while informative, both imperfectly reflect how a drug will fare in the real world. Industry representatives, largely representing the PhRMA Foundation, agreed that the perspective provided by CEA is too narrow and shouldn't be the cornerstone of value assessment, at least in the context of coverage and reimbursement decisions.

  2. Commenters disagreed with how the task force measured benefits (the QALY)

All four commentaries noted the limitations of the quality-adjusted life-year (QALY). The patient advocates and the insurance CEOs both claimed that the QALY did not reflect their definition of health benefits. The insurance representatives reminded us that they don't give weight to societal value because it is not part of their business model. Similarly, the patient advocates said the QALY did not reflect patient preferences, under which value is more broadly defined. The QALY, for example, does not adequately capture the influence of health care on functionality, ability to work, or family life. The patient advocates noted that while the task force identified these flaws and their methodological difficulties, it stopped short of recommending or taking any action to address them.

Industry advocates wrote that what makes the QALY useful (its ability to make comparisons across most health care conditions and settings) is also what makes it ill-suited for use in a complex health care system, where individual parts of the care continuum cannot be considered in isolation. They also argued that the QALY discriminates against vulnerable populations and does not reflect their customers' preferences.

Mark Sculpher, Professor at the University of York representing health economic theory and academia, defended the QALY to an extent, noting that the measure is the most suitable available unit for measuring health. He acknowledged the QALY’s limitations in capturing all the benefits of health care, however, and noted that decision makers and not economists should be the ones defining benefit.

 

  3. Commenters noticed a disconnect between the reports and social/political realities

Commenters seemed disappointed that the task force did not go further in directing the practical application of value assessment frameworks within the US health care sector. The academic representative wrote that, while economic underpinnings are important, value frameworks ultimately need to be useful to, and reflect the values of, the decision makers. He argued that decision makers' buy-in is invaluable, as they hold the power to implement and execute resource allocation. Economics can provide a foundation for this, but it should not be the source of judgment about value if the US is going to take up value assessment frameworks to inform decisions.

Patient advocates and industry representatives went further in their criticism, saying the task force seemed disconnected from the existing health care climate. The patient advocate author felt the task force ignored the social and political realities in which health care decisions are made. Industry representatives pointed out that current policy, written into the Patient Protection and Affordable Care Act (PPACA), prohibits QALY-based CEA because most decision makers in the US believe it is inappropriate for use in health care decision making. Both groups wondered why the task force continued to rely on CEA methodology when the public sector had prohibited it.

 

The United States will continue to grapple with value assessment as it seeks to balance innovation with budgetary constraints. The ISPOR task force ultimately succeeded in its mission, which was never to specify a definitive, consensus value assessment framework, but instead to consider "key methodological issues in defining and applying value frameworks to health care resource allocation decisions."

The commentaries also succeeded in their purpose: highlighting the ongoing tensions in creating value assessment frameworks that stakeholders can use. There is a need to improve tools that value health care to assure broader uptake, along with a need to accept flawed tools until we have better alternatives. The commentaries also underscore a chicken-and-egg phenomenon within health care policy. Value assessment frameworks need to align with the goals of decision-makers, but decision-makers also need value frameworks to help set goals.

Ultimately, Mark Sculpher may have summarized it best in his commentary. Value assessment frameworks ultimately seek to model the value of health care technology and services. But as Box’s adage reminds us: although all models are wrong, some are useful. How to make value assessment frameworks most useful moving forward remains a lively, complex conversation.

Letter from an Editor

The following emails are real correspondence between myself and an Associate Editor at a prestigious scientific journal, which I call "Journal X" here. The exchange began after I published an article in a journal with a very similar title, which I call "Journal Y." I have removed the journal and editor names for privacy.


Dear Dr. Adamson,

Congratulations on your recent excellent paper. May I ask if you specifically targeted your paper for "Journal Y" (a relatively new "look-alike" entry to the field in 2013)? I'm an Associate Editor for the more established "Journal X." I was curious to what extent authors may be submitting to "look-alike" journals intentionally vs. unintentionally.

Thx,

Dr. Editor

Associate Editor

“Journal X”


 

Dear Dr. Editor,

Thank you for contacting me about this recently published paper. I am grateful for the opportunity to explain why I chose to submit to the open access “Journal Y” instead of the well-respected “Journal X” and I would like to hear your thoughts. I am cautious about predatory journals, and I consciously wrestled with the pros and cons in journal selection.

I originally prepared this manuscript for submission to "Journal X," where I thought it would fit nicely in one of the online-only sections. When the manuscript was finalized, I received an invitation to contribute to a special issue of "Journal Y" that included a publication fee waiver.

Having never heard of this look-alike journal before, I considered these elements: credibility, cost, ethics, and time to widespread dissemination of the findings.

First, I checked the credibility of “Journal Y” by reading past issues. The quality was good, and they had published a review paper in 2014 by my research heroes Drs. A and B. I thought, “if this journal is good enough for A and B, it is certainly good enough for me.”

I noticed that "Journal Y" had very low or no publication fees for Open Access, which signaled a difference from predatory journals with low standards and high profits. Since my current trainee funding does not cover publication fees, the paid Open Access option through your journal was not feasible unless I used personal savings.

The University of Washington Biomedical Research Integrity Series lectures on the ethics of responsible publishing changed my opinions about Open Access vs subscription journals. I take seriously my moral responsibility as an HIV researcher to communicate findings in a format accessible to scientists and patients in communities disproportionately affected by this disease. Many of my brilliant economist and mathematician colleagues are based at small institutions and companies with limited funding to purchase articles.

At the end, I am pleased to have chosen “Journal Y” over “Journal X” for several reasons:

  • Less than 8 weeks passed from my submission to publication (including two rounds of revisions)
  • I received rigorous and helpful peer-review comments
  • No cost for Open Access (with invitation to the special issue, or low cost otherwise)
  • Within the first month, the full text has been downloaded more than 200 times in six continents.
  • Last year I submitted a different (and, in my opinion, very good) paper to your journal. After more than two months, I received a rejection with peer-review comments that were so mean and personal as to be borderline unprofessional. While this look-alike journal does not have the prestige or impact factor of X, I see my younger generation progressively placing more value on the quality of content, free accessibility, dialogue, and citations of individual papers rather than on the sum of journal impact factors on a CV.

For these reasons, I am committed to Open Access publishing whenever I have the opportunity and sufficient funds to do so. I admit there is still quite a lot for me to learn about scientific publishing, so I would greatly appreciate your expert feedback on these considerations.

Thank you for taking the time to read my paper and reach out to me personally with questions.

Sincerely,

Blythe Adamson


 

Dear Dr. Adamson,

Thank you very much for your thoughtful reply.  I’m glad to hear that this was an intentional (vs. accidental) decision. Please also accept my apologies for the “borderline unprofessional” peer review comments you received when submitting to “Journal X”; definitely discouraging to authors.

I will pass on your comments to the other Associate Editors of “Journal X” on our quarterly conference call for discussion on how we can improve and hopefully attract your future papers.

Best regards,

Dr. Editor

Associate Editor

“Journal X”

Reminders About Propensity Scores

Propensity score (PS)-based models are everywhere these days. While these methods are useful for controlling for observed confounders in observational data and for reducing dimensionality in big datasets, it is imperative that analysts use good judgment when applying and interpreting PS analyses. This is the topic of my recent methods article in ISPOR's Value and Outcomes Spotlight.

I became interested in PS methods during my Master's thesis work on statin drug use and heart structure and function, which has just been published in Pharmacoepidemiology and Drug Safety. To estimate long-term associations between these two variables, I used the Multi-Ethnic Study of Atherosclerosis (MESA), an observational cohort of approximately 6,000 individuals with rich covariates, subclinical measures of cardiovascular disease, and clinical outcomes over 10+ years of follow-up. We initially used traditional multivariable linear regression to estimate the association between statin initiation and progression of left ventricular mass over time, but found that PS methods allowed for better control of confounding. After generating a PS for the probability of starting a statin, we used matching procedures to match initiators and non-initiators and estimated an average treatment effect in the treated. Estimates from both the traditional regressions and the PS-matching procedures showed a small, dose-dependent protective effect of statins against left ventricular structural dysfunction. This very modest association contrasts with findings from much smaller, short-term studies.

I did my original analyses in Stata, which offers a few routines for PS analysis, including psmatch2 and teffects. My analysis used psmatch2, which is generally considered inferior to teffects because it does not provide proper standard errors. I got around this limitation, however, by bootstrapping confidence intervals, which were all conservative compared with the teffects confidence intervals.

Figure 1: Propensity score overlap among 835 statin initiators and 1559 non-initiators in the Multi-Ethnic Study of Atherosclerosis (MESA)

Recently, I gathered the gumption to redo some of the aforementioned analysis in R. Coding in R is a newly acquired skill of mine, and I wanted to harness some of R's functionality to build nicer figures. I found this R tutorial from Simon Ejdemyr on propensity score methods in R to be particularly useful. After rebuilding my propensity scores with a logistic model that included approximately 30 covariates and 2,389 participant observations, I first wanted to check the region of common support. The region of common support is the overlap between the distributions of the PS for the exposed and unexposed groups, which indicates how comparable the two groups are. Sometimes, despite fitting the model with every variable you can, PS overlap is poor and matching can't be done. Here, however, I was able to get acceptable overlap in PS values for statin initiators and non-initiators (see Figure 1). Using the R package MatchIt to do nearest-neighbor matching with replacement, the matched dataset was reduced to 1,670 observations, with all statin initiators matched. I also checked covariate balance conditional on the PS in the statin initiator and non-initiator groups; examples are in Figure 2, and a minimal code sketch of the workflow follows it. In these plots, the LOWESS smoother is effectively calculating a mean of the covariate level at each value of the propensity score. I expect the means for statin initiators and non-initiators to be similar, so the smooths should be close. At the ends of the age distribution I see some separation, which is likely normal tail behavior. Formal statistical tests can also be used to assess covariate balance in the newly matched groups.

Figure 2: LOWESS smooth of covariate balance for systolic blood pressure (left) and age (right) across statin initiators and non-initiator groups (matched data)
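For readers who want to try a similar workflow, here is a minimal, self-contained R sketch on simulated data. The simulated covariates, sample size, and variable names are my own stand-ins for illustration; the MESA analysis itself used roughly 30 covariates.

```r
# Propensity score estimation, common-support check, and nearest-neighbor
# matching with replacement, mirroring the steps described above.
# Assumes the MatchIt package is installed: install.packages("MatchIt")
library(MatchIt)

set.seed(2018)
n   <- 2000
age <- rnorm(n, 62, 10)
sbp <- rnorm(n, 125, 15)                                     # systolic blood pressure
statin <- rbinom(n, 1, plogis(-8 + 0.08 * age + 0.02 * sbp))
dat <- data.frame(statin, age, sbp)

# 1) Fit the propensity score model and inspect the region of common support
ps_model <- glm(statin ~ age + sbp, family = binomial, data = dat)
dat$ps <- fitted(ps_model)
plot(density(dat$ps[dat$statin == 1]), main = "PS overlap", xlab = "Propensity score")
lines(density(dat$ps[dat$statin == 0]), lty = 2)

# 2) Nearest-neighbor matching with replacement on the propensity score
m_out <- matchit(statin ~ age + sbp, data = dat, method = "nearest", replace = TRUE)
summary(m_out)                  # covariate balance before/after matching
matched <- match.data(m_out)    # matched sample for the outcome analysis

# 3) Visual balance check: LOWESS smooth of a covariate against the PS, by group
plot(lowess(matched$ps[matched$statin == 1], matched$age[matched$statin == 1]),
     type = "l", xlab = "Propensity score", ylab = "Age")
lines(lowess(matched$ps[matched$statin == 0], matched$age[matched$statin == 0]), lty = 2)
```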

Please see my website for additional info about my work.

Exit interview: CHOICE alumna Elisabeth Vodicka

Editor's note: This is the first in an ongoing series of interviews we've planned with students graduating from the CHOICE Institute, in which we'll get their thoughts on their grad school and dissertation experiences.

This first interview is with Elisabeth Vodicka, who defended her dissertation, “Cervical Cancer in Low-Income Settings: Costs and Cost-Effectiveness of Screening and Treatment,” on January 24th, 2018.

  • What’s your dissertation about?

My dissertation focused on the economics of integrating cervical cancer screening and treatment into existing health systems in East Africa (Kenya and Uganda). Although cervical cancer is preventable and treatable if detected early, screening rates are low in the region (3-20%, depending on regional characteristics). One strategy for improving access to potentially life-saving screening is to leverage the fact that women already engage with various health system platforms for other types of care: family planning, taking their children in for vaccinations, tuberculosis and HIV treatment, etc. Since these programs are already funded and staffed, screening could be offered to women in these settings via service integration at potentially low marginal cost.

To understand the economic impact of offering cervical cancer screening and treatment to women when and where they are already engaging with the health system, I conducted costing, cost-effectiveness, and budget impact analyses of integrating screening into two health care settings. First, I collected primary data and conducted a micro-costing analysis to determine the direct medical, non-medical, and indirect costs associated with integrating screening services into an HIV-treatment center in Kenya. For my subsequent aims, I conducted economic evaluations estimating the potential value, in terms of cost per life-year saved and budget impact to the Ministry of Health, of integrating screening and treatment into HIV-treatment centers in Kenya and routine childhood immunization clinics in Uganda.

  • How did you arrive at that topic? When did you know that this is what you wanted to study?

I have long been passionate about women's health issues and improving access to care in low-resource settings. During my second year in the program, I was offered an opportunity through the University of Washington's Treatment, Research, and Expert Education (TREE) program to conduct a micro-costing study to identify the costs associated with providing cervical cancer screening to women receiving HIV treatment at the Coptic Hope Center for Infectious Diseases in Nairobi, Kenya. This opportunity presented a perfect overlap of my interests in women's health, access, and health economics methods. After conducting the primary data collection, I began exploring the possibility of continuing this line of research through my dissertation.

  • What was your daily schedule like when you were working on your dissertation?

To be honest, I can't say that I had a consistent daily schedule over the years of working on my dissertation. While I developed my short and long proposals, I was still in classes, working as an RA, and doing consulting work on the side. These commitments often dictated the time I had to focus on my dissertation proposals, which I often worked on late at night. Once I passed my general exam, my time was much more flexible. I tend to be most productive at night, so I took advantage of that flexibility to make the most of my productive hours. Often this meant exercising and taking care of other tasks during the day and then leveraging my peak brain time to work on my dissertation in the evenings.

Additionally, creating regular social opportunities and accountability for my dissertation progress were key success strategies for me. My cohort and I created a dissertation writing group that met weekly to create accountability. Toward the final months of the dissertation (crunch time!), I joined a co-working space, used an online co-working app, and recruited friends and family to work together virtually and in-person to maximize accountability and meet my goals for each dissertation aim.

  • If you are willing to share, in what quarter did you submit your final short proposal and in what quarter did you graduate/defend? What were some factors that determined your dissertation timeline? 

I submitted my short proposal in Fall Quarter 2015, and I defended in Winter Quarter 2018. Together, my chair and I developed a timeline that mapped out each stage of the dissertation process from the short proposal to final defense.

  • How did you fund your dissertation?

Ongoing funding was received through work as an RA and TA. The TREE program generously supported my in-country work in Kenya. I also received additional financial and travel support through internal funding within CHOICE (e.g., Reducing Barriers for the Ambitious Fund, Rubenstein Endowment, etc.).

  • What comes next for you? What have you learned about finding work after school?

Currently, I am continuing to work as a freelance consultant on projects related to expanding access to care in low- and middle-income settings. This allows me the time and flexibility to target my employment search within groups that are an excellent fit – both organizationally and culturally – for my research interests and professional goals.

In terms of finding employment after school, the most important lesson that I have learned is to start early and network broadly. During my first year in the program, I set a goal to reach out to one new person in the field every month working on topics or in organizations that interested me. Over time and many networking coffees later, I learned about the types of organizations that might be a good fit for my interests, work style and personality, and developed positive relationships with other like-minded individuals.

CHOICE Institute Director Discusses Amazon Health Care Announcement

You may have heard the big news that came out of Seattle recently: Amazon is partnering with Berkshire Hathaway and JPMorgan Chase to address health care costs and quality by creating an independent health care company for their employees. Further details of their plan remain a secret to the general public, and the companies are likely still working out logistics amongst themselves. Given the 1.2 million employees involved in the three companies, however, many in the health care industry are thinking through the likely impact of this new partnership.


Anirban Basu, director of the CHOICE Institute and professor of health economics at the University of Washington, was recently referenced in two regional blogs describing the potential significance of the proposed plan:

According to Anirban Basu, a health care economist at the University of Washington, the trio could do a number of things to reform the health care system just by their sheer size and power alone. While most small and individual health care buyers have little power when it comes to directly negotiating with either health care providers or pharmaceutical companies, this partnership could change that—at least for those who qualify for it. Currently, price negotiating falls on third-party pharmacy benefit managers, at a cost then passed on to consumers.

Besides taking on bargaining power, Basu says Amazon may even open primary care clinics for their employees, but this could expand beyond their base.

It is important to note that while the new health plan may eventually have industry-wide effects, its scope will be limited to the companies’ employees at the beginning. And it is hardly a new phenomenon for employer groups to choose self-insurance as a means to control costs.

Henry Ford was one of the first industry giants to start his own health care insurance and delivery system in 1915, and America’s largest managed care organization, Kaiser Permanente, originally started as a health care program for employees of the Kaiser steel mills and shipyards.

Another important item to note is that America’s health care system has already been undergoing fundamental changes. While the United States Congress remains divided about how to move forward with the Affordable Care Act and improve the nation’s health care system overall, private health care companies are making their own moves. Hospital and insurance markets are becoming increasingly consolidated (with less competition to control prices), and some health care stakeholders are partnering and consolidating in innovative ways to capture market share (for example, the pharmacy company CVS Health just bought insurance company Aetna in January 2018).

Amazon's new health care company could simply be joining these trends: historic trends of companies self-insuring to cut costs, or newer trends of consolidating aspects of American health care for increased market power. However, it is entirely conceivable that the potent combination of Amazon (a technology industry giant), JPMorgan Chase (a banking industry giant), and Berkshire Hathaway (an investment giant) will bring something new to the table. Vox and Stratechery are among many media outlets offering interesting predictions.

After the announcement, stock prices for major health care companies (e.g., Anthem, UnitedHealth, CVS, and Walgreens) experienced a sell-off as investors worried about the implications. However, some experts believe the incumbents will weather the storm, given the massive operational costs the partnership would face in entering the health care market, and argue that the scale of Amazon, Berkshire Hathaway, and JPMorgan Chase will not be enough to compete with health care industry giants that already have purchasing power.

Will this health care partnership be a game changer? Perhaps, perhaps not. But as health care economists and health policy enthusiasts, students at the CHOICE Institute will certainly be watching our neighbors with interest.

[Written with the assistance of Mark Bounthavong and Nathaniel Hendrix.]