On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Thomas Hoe who has a PhD from University College London. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title: Essays on the economics of health care provision
Supervisors: Richard Blundell, Orazio Attanasio
Repository link: http://discovery.ucl.ac.uk/10048627/

What data do you use in your analyses and what are your main analytical methods?

I use data from the English National Health Service (NHS). One of the great features of the NHS is the centralized data it collects, with the Hospital Episode Statistics (HES) containing information on every public hospital visit in England.

In my thesis, I primarily use two empirical approaches. In my work on trauma and orthopaedic departments, I exploit the fact that the number of emergency trauma admissions to hospital each day is random. This randomness allows me to conduct a quasi-experiment to assess how hospitals perform when they are more or less busy.

The second approach I use, in my work on emergency departments with Jonathan Gruber and George Stoye, is based on bunching techniques that originated in the tax literature (Chetty et al, 2013; Kleven and Waseem, 2013; Saez, 2010). These techniques use interpolation to infer how discontinuities in incentive schemes affect outcomes. We apply and extend these techniques to evaluate the impact of the ‘4-hour target’ in English emergency departments.
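To make the idea concrete, here is a minimal sketch in Python of the basic bunching logic: fit a smooth counterfactual distribution to waiting times away from the 4-hour target and compare observed and predicted mass just below it. All numbers, the window width and the polynomial degree are made-up assumptions for illustration, not the specification used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated waiting times (hours): a smooth distribution plus some visits
# squeezed in just under a 4-hour target.
smooth = rng.gamma(shape=2.0, scale=1.5, size=50_000)
bunched = rng.uniform(3.8, 4.0, size=3_000)
waits = np.concatenate([smooth, bunched])

# Bin the data.
bin_width = 0.1
edges = np.arange(0.0, 8.0 + bin_width, bin_width)
counts, edges = np.histogram(waits, bins=edges)
centres = (edges[:-1] + edges[1:]) / 2

# Exclude a window around the threshold and fit a polynomial counterfactual
# to the bin counts outside that window.
threshold, window = 4.0, 0.3
excluded = np.abs(centres - threshold) <= window
coefs = np.polyfit(centres[~excluded], counts[~excluded], deg=5)
counterfactual = np.polyval(coefs, centres)

# Excess mass just below the threshold, relative to the counterfactual.
below = excluded & (centres < threshold)
excess = counts[below].sum() - counterfactual[below].sum()
print(f"Estimated excess mass below the target: {excess:.0f} visits")
```

Real applications treat the choice of window, the polynomial order and the displaced mass far more carefully than this toy example.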

How did you characterise and measure quality in your research?

Measuring the quality of health care outcomes is always a challenge in empirical research. Since my research primarily relies on administrative data from HES, I use the patient outcomes that can be directly constructed from this data: in-hospital mortality, and unplanned readmission.

Mortality is, of course, an outcome that is widely used, and offers an unambiguous interpretation. Readmission, on the other hand, is an outcome that has gained more acceptance as a measure of quality in recent years, particularly following the implementation of readmission penalties in the UK and the US.

What is ‘crowding’, and how can it affect the quality of care?

I use the term crowding to refer, in a fairly general sense, to how busy a hospital is. This could mean that the hospital is physically very crowded, with lots of patients in close proximity to one another, or that the number of patients outstrips the available resources.

In practice, I evaluate how crowding affects quality of care by comparing hospital performance and patient outcomes on days when hospitals deal with different levels of admissions (due to random spikes in the number of trauma admissions). I find that hospitals respond not only by cancelling some planned admissions, such as elective hip and knee replacements, but also by discharging existing patients sooner. For these discharged patients, the shorter-than-otherwise stay in hospital is associated with poorer health outcomes, most notably an increase in subsequent hospital visits (unplanned readmissions).

How might incentives faced by hospitals lead to negative consequences?

One of the strongest incentives faced by public hospitals in England is to meet the government-set waiting time target for elective care. This target has been very successful at reducing wait times. In doing so, however, it may have contributed to hospitals shortening patient stays and increasing patient admissions.

My research shows that shorter hospital stays, in turn, can lead to increases in unplanned readmissions. Setting strong wait time targets, then, in effect trades off shorter waits (from which patients benefit) against crowding effects (which may harm patients).

Your research highlights the importance of time in the hospital production process. How does this play out?

I look at this from three dimensions, each a separate part of a patient’s journey through hospital.

The first two relate to waiting for treatment. For elective patients, this means waiting for an appointment, and previous work has shown that patients attach significant value to reductions in these wait times. I show that trauma and orthopaedic patients would be better off with further wait time reductions, even if that leads to more crowding.

Emergency patients, in contrast, wait for treatment while physically in a hospital emergency department. I show that these waiting times can be very harmful and that by shortening these wait times we can actually save lives.

The third dimension relates to how long a patient spends in hospital recovering from surgery. I show that, at least on the margin of care for trauma and orthopaedic patients, an additional day in hospital has tangible benefits in terms of reducing the likelihood of experiencing an unplanned readmission.

How could your findings be practically employed in the NHS to improve productivity?

I would highlight two areas of my research that speak directly to the policy debate about NHS productivity.

First, while the wait time targets for elective care may have led to some crowding problems and subsequently more readmissions, the net benefit of these targets to trauma and orthopaedic patients is positive. Second, the wait time target for emergency departments also appears to have benefited patients: it saved lives at a reasonably cost-effective rate.

From the perspective of patients, therefore, I would argue these policies have been relatively successful and should be maintained.

From 18 to 23 June, researchers from Australia, Germany, the Netherlands, and the United Kingdom gathered at the annual CINCH summer school, an academic program for early-stage researchers in health economics. The fifth CINCH Academy was held in Essen, Germany, by one of Germany’s leading health economics centres – CINCH. The institute brings together the region’s most notable health economics institutions: RWI – Leibniz Institute for Economic Research, the Faculty of Economics and Business Administration at the University of Duisburg-Essen, and the Institute for Competition Economics (DICE) at the Heinrich-Heine-University in Düsseldorf.

CINCH Academy 2018 Part 1 – Luigi Siciliani (@EconomicsatYork) and the economics of hospitals! pic.twitter.com/bjVRGoxJRJ

— CINCH (@CINCHessen) June 19, 2018

This year the focus of the Academy was hospital economics and mental health. On the first days of the event, Luigi Siciliani (University of York) gave a very informative block of lectures on hospital competition, as well as the much-debated topics of health care quality, waiting times and patient choice. After each topic, participants answered a set of questions and engaged in discussions that helped consolidate the lecture material. After a productive first block of lectures, Richard G. Frank (Harvard University) provided a comprehensive insight into the economics of mental health, emphasizing its distinguishing features: the salient characteristics of mental illness, the role of government, protection for those with mental illness, and mental health policy. Encouraged by the lecturer, each participant took part in the discussion with great interest and shared their knowledge of how these issues are handled in their home countries.

Today CINCH Academy continues with the economics of mental health taught by Richard G. Frank (@harvardmed & @Kennedy_School) and of course student presentations! pic.twitter.com/snZBauMNLK

— CINCH (@CINCHessen) June 21, 2018

In addition to the educational material, each participant had the opportunity to present his or her current research and have it discussed by another participant. The large range of topics – the influence of crime on residents’ mental wellbeing, the influence of unpaid care on formal care utilization, and the impact of increased hospital expenditure on population mortality, among others – created a very interactive atmosphere for discussion. Senior researcher Daniel Howdon (University of Leeds) chaired the paper session and gave additional helpful comments to each presenter.

Beyond the interesting academic program, the summer school also fostered interaction between participants through several social activities organized by the CINCH team. Besides several dinners after intensive days, participants had the chance to join a specially organized city tour of Essen and visit the Zollverein Coal Mine Industrial Complex (Zeche Zollverein), which is inscribed on the UNESCO World Heritage List. The large industrial monument is often described as the cultural heart of the Ruhr Area. After a guided tour through the complex, all participants gathered once again for dinner at a traditional restaurant of the region. The social activities allowed participants not only to discuss the lecture topics further, but also to share experiences of pursuing a doctoral degree in different countries and the everyday concerns of early-stage researchers, such as intensive learning, travelling to conferences, and obtaining datasets.

Yesterday we took the CINCH Academy participations to Zeche Zollverein (@ZOLLVEREIN) the landmark of Essen (@visitessen) to learn about the history of Essen and enjoy the sunshine. Afterwards we had some well-deserved "Currywurst Pommes". pic.twitter.com/4t1lwSWG76

— CINCH (@CINCHessen) June 21, 2018

On the last day of the summer school, the organizers announced the Best Paper Award, which went to Elizabeth Lemmon (University of Stirling) for her research paper “Utilisation of personal care services in Scotland: the influence of unpaid carers”. In addition to the financial reward, her work will be published in the CINCH Working Paper Series.

CINCH Academy was an excellent opportunity to deepen our knowledge of hospital economics and the economics of mental health. Our special thanks go to the lecturers, Luigi Siciliani and Richard G. Frank, to the paper session chair Daniel Howdon, and to the excellent organizational team of Christoph Kronenberg and Annika Jäschke.

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Legal origins and female HIV. American Economic Review [RePEc] Published 13th June 2018

I made this somewhat unusual choice because the author Siwan Anderson draws an important connection between the economic and legal status of women across sub-Saharan Africa and the incidence of HIV. As summarized in the American Economic Review feature Empowering women, improving health, “Over half of all people living with HIV are women. Of all HIV-positive women, 80 percent live in Sub-Saharan Africa.” Anderson hypothesizes that regional differences in female property rights (lower in common law eastern and southern Africa than in civil law central Africa) may explain significantly higher HIV incidence in eastern and southern African women, especially relative to eastern and southern African men. Health economists have long studied how economic status affects access to health care; Anderson presents an important and interesting complementary argument for how economic (and legal) status affects health. In particular, improved legal status and access to legal aid may be a key step in improving women’s health.

Addressing generic-drug market failures — the case for establishing a nonprofit manufacturer. The New England Journal of Medicine [PubMed] Published 17th May 2018

We have recently seen shortages of many generic drugs, including generic injectables used in emergency, trauma and other hospital medicine. In many cases, there is only a single supplier, who can dramatically increase prices. One might expect others to enter the market in this case. However, significant fixed start-up costs frequently pose a barrier to entry, and the incumbent supplier, who has already made (and in many cases paid off) the start-up investment, can drastically reduce prices to make it difficult for a competitor to cover these costs. Thus there is little incentive to enter a potentially low-profit market. The authors propose establishing a nonprofit manufacturer, essentially a pharmaceutical counterpart to a variety of national and nonprofit health systems, as a novel and potentially successful way to address this issue.

An incomplete prescription: President Trump’s plan to address high drug prices. JAMA [PubMed] Published 19th June 2018

The prices of many drugs are significantly higher in the United States than in much of the rest of the developed world. President Trump proposes some market actions, such as granting Medicare negotiating power, but the authors find these insufficient and make two interesting additional proposals. First, since much pharmaceutical development derives from NIH-funded research (including chimeric antigen receptor T-cell immunotherapies, which may cost US$400,000 per dose), the authors argue that the NIH and academic institutions could require that US prices be based upon independent valuations or not exceed those in other industrialized countries. The authors also suggest authorizing imports where there is adequate regulation as a further mechanism for controlling drug prices – in my opinion, a natural free-trade position. The pricing of pharmaceuticals remains complex, and perhaps new economic models are needed to address the risk and cost of pharmaceutical development. Kenneth Arrow’s critiques of the limitations of economics in addressing health issues might provide interesting insights.

Cost-related insulin underuse is common and associated with poor glycemic control. Diabetes Published July 2018

I would like to conclude by citing a recent abstract that provides a human side to the growing cost of pharmaceuticals. Darby Herkert (a Yale undergraduate) reported that a quarter of almost 200 patients responding to a survey at a New Haven, Connecticut diabetes center reported cost-related insulin underuse. Underuse was prevalent among patients with lower income levels, patients without full-time employment, and patients without employer-provided insurance, Medicare or Medicaid. Patients reporting underuse had three times the incidence of HbA1c >9%. These results highlight the human costs of high insulin prices in the US. A Medscape review cites the high cost of typically prescribed insulin analogs, and quotes the lead author calling these prices irrational and describing patients living near the Mexican border crossing it to buy their insulin.

When we think of the causal effect of living in one neighbourhood compared to another, we think of how the social interactions and lifestyle of that area produce better outcomes. Does living in an area with more obese people cause me to become fatter? (Quite possibly.) Or, if a family moves to an area where people earn more, will they earn more? (Read on.)

In a previous post, we discussed such effects in the context of slums, where the synergy of poor water and sanitation, low-quality housing, small incomes, and high population density likely has a negative effect on residents’ health. However, we also discussed how difficult it is to estimate neighbourhood effects empirically, for a number of reasons. On top of this are the different ways neighbourhood effects can manifest. Social interactions may mean behaviours that lead to better health or incomes rub off on one another. But there may also be some underlying cause of the group’s, and hence each individual’s, outcomes. In the slum, low education may mean poor hygiene habits spread, or the shared environment may contain pathogens, for example. Both of these pathways may constitute a neighbourhood effect, but they imply very different explanations and potential policy remedies.

What should we make, then, of not one but two new articles by Raj Chetty and Nathaniel Hendren in the recent issue of the Quarterly Journal of Economics, both of which use observational data to estimate neighbourhood effects?

Paper 1: The Impacts of Neighborhoods on Intergenerational Mobility I: Childhood Exposure Effects.

The authors have an impressive data set. They use federal tax records from the US between 1996 and 2012 and identify all children born between 1980 and 1988 and their parents (or parent). For each of these family units they determine household income and then the income of the children when they are older. To summarise a rather long exegesis of the methods used, I’ll try to describe the principal finding in one sentence:

Among families moving between commuting zones in the US, the average income percentile of children at age 26 is 0.04 percentile points higher per year spent and per additional percentile point increase in the average income percentile of the children of permanent residents at age 26 in the destination the family moves to. (Phew!)

They interpret this as the outcomes of in-migrating children ‘converging’ to the outcomes of permanently resident children at a rate of 4% per year. That should provide an idea of how the outcomes and treatments were defined, and who constituted the sample. The paper assumes the effect is the same regardless of the age of the child. To make the claim a bit clearer, it can be read as saying that the human capital, H, of a child who spends t years in the destination does something like this (ignoring growth over childhood due to schooling etc.):

$$ H_{mover} \approx H_{bad} + 0.04\,t\,(H_{good} - H_{bad}) $$

where ‘good’ and ‘bad’ mean ‘good neighbourhood’ and ‘bad neighbourhood’. This could be called the better neighbourhoods cause you to do better hypothesis.

The analyses also take account of parental income at the time of the move and look at families who moved due to a natural disaster or other ‘exogenous’ shock. The different analyses generally support the original estimate, putting the result in the region of 0.03 to 0.05 percentile points.

But are these neighbourhood effects?

A different way of interpreting these results is that there is an underlying effect driving incomes in each area. Areas where children will have higher incomes in the future are those with a higher market price for labour in the future. So we could imagine that what is going on with human capital is instead

$$ H_{mover} \approx H_{good} \approx H_{bad}, $$

with future income given by the local wage rate times human capital (y = w × H), so that movers earn more simply because w is higher in the destination.

This is the ‘those moving to areas where people will earn more in the future also earn more in the future because of differences in the labour market’ hypothesis. The Bureau of Labor Statistics, for example, cites the wage rate for a registered nurse as $22.61 in Iowa and $36.13 in California. But we can’t say from the data whether the children are sorting into different occupations or are getting paid different amounts for the same occupations.

The reflection problem

Manski (1993) called the issue the ‘reflection problem’, which he described as arising when

a researcher observes the distribution of a behaviour in a population and wishes to infer whether the average behaviour in some group influences the behaviour of the individuals that compose the group.

What we have here is a linear-in-means model estimating the effect of average incomes on individual incomes. But we cannot distinguish between the competing explanations of, in Manski’s terms, endogenous effects, which result from interaction with families with higher incomes, and correlated effects, which lead to similar outcomes due to exposure to the same underlying latent forces, i.e. the market. We could also add contextual effects, which manifest due to shared group characteristics (e.g. levels of schooling or experience). When we think of a ‘neighbourhood effect’, I tend to think of it as being of the endogenous variety, i.e. the direct effect of living in a certain neighbourhood. For example, under different labour market conditions, both my income and the average income of the permanent residents of the neighbourhood I move to might be lower, but not because of the neighbourhood.
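To see why these stories are hard to tell apart, consider a generic linear-in-means specification (a textbook rendering of Manski’s setup, not the exact model estimated in these papers):

$$ y_{ig} = \alpha + \beta\,\bar{y}_{g} + \gamma' x_{ig} + \delta' \bar{x}_{g} + \varepsilon_{ig} $$

Here $\beta$ captures the endogenous effect of the group mean outcome, $\delta$ the contextual effect of group characteristics, and any common component of $\varepsilon_{ig}$ within a group (a shared labour market, say) is a correlated effect. Because $\bar{y}_{g}$ is itself determined by the group’s characteristics and shocks, all three channels generate the same kind of correlation between individual and group outcomes – which is exactly the reflection problem.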

The third hypothesis

There’s also a third hypothesis: families that are better off move to better areas, so that the mover’s human capital is determined by unobserved family characteristics rather than by the destination (i.e. the effects are accounted for by unobserved family differences).

The paper presents lots of modifications to the baseline model, but none of them can provide an exogenous choice of destination. The authors look at an exogenous cause of moving – natural disasters – and also instrument with the expected difference in income percentiles for parents from the same zip code, but I can’t see how this instrument is valid. Selection bias is acknowledged in the paper, but without some exogenous variation in where a family moves to it will be difficult to really claim to have identified a causal effect. For the vast majority of families, the choice to move is based on preferences over welfare and well-being, especially income. Indeed, why would a family move to a worse-off area unless their circumstances demanded it of them? So in reality, I would imagine the truth lies somewhere in between these three explanations.

Robust analysis?

As a slight detour, we might want to consider whether we should trust these estimates even if the underlying identifying assumptions hold. The paper presents a range of analyses to show that the results are robust. But these analyses represent just a handful of those possible. Given that the key finding is relatively small in magnitude, one wonders what would have happened under different scenarios and choices – the so-called garden of forking paths problem. To illustrate, consider some of the choices that were made about the data and models, and all the possible alternative choices. The sample included only those with a mean positive income between 1996 and 2004 and those living in commuting zones with populations of over 250,000 in the 2000 census. Those whose income was missing were assigned a value of zero. Average income over 1996 to 2000 is used as a proxy for lifetime income. If the marital status of the parents changed, the child was assigned to the mother’s location. Non-filers were coded as single. Income is measured in percentile ranks and not dollar terms. The authors justify each of these choices, but an equally valid analysis would have resulted from different choices and possibly produced very different results.

-o-

Paper 2: The Impacts of Neighborhoods on Intergenerational Mobility II: County-Level Estimates

The strategy of this paper is much like the first one, except that rather than trying to estimate the average effect of moving to higher or lower income areas, the authors try to estimate the effect of moving to each of 3,000 counties in the US. To do this they assume that the number of years of exposure to the county is as good as random after taking account of i) origin fixed effects, ii) parental income percentile, and iii) a quadratic function of birth cohort year and parental income percentile to try to control for some differences in labour market conditions. An even stronger assumption than before! The hierarchical model is estimated using a complex two-step method for ‘computational tractability’ (I’d have just used a Bayesian estimator). There are some further strange calculations, such as converting percentile ranks into dollar terms by regressing the dollar amounts on average income ranks and multiplying everything by the coefficient, rather than just estimating the model with dollars as the outcome (I suspect it’s to do with their complicated estimation strategy). Nevertheless, we are presented with some (noisy) county-level estimates of the effect of an additional year spent there in childhood. There is a weak correlation with the income ranks of permanent residents. Again, though, we have the issue of many competing explanations for the observed effects.

The differences in predicted causal effect by county don’t help distinguish between our hypotheses. Consider the authors’ map of county-level estimates:

[Figure: county-level estimates of the causal effect of an additional year of childhood exposure, by US county]

Do children of poorer parents in the Southern states end up with lower human capital and lower-skilled jobs than in the Midwest? Or does the market mean that people get paid less for the same job in the South? Compare that map with maps of the wage rates for two common lower-skilled professions, cashiers and teaching assistants:

[Figure: BLS area wage rates for teaching assistants and for cashiers]

A similar pattern is seen. While this is obviously just a correlation, one suspects that such variation in wages is not being driven by large differences in human capital generated through personal interaction with higher earning individuals. This is also without taking into account any differences in purchasing power between geographic areas.

What can we conclude?

I’ve only discussed a fraction of the contents of these two enormous papers; they could fill many more blog posts to come. But it all hinges on whether we can interpret the results as the average causal effect of a person moving to a given place. Not nearly enough information is given to know whether families moving to areas with lower future incomes are comparable to families moving to areas with higher future incomes. Also, we could easily imagine a world where the same people were all induced to move to different areas – this might produce completely different sets of neighbourhood effects, since the movers themselves contribute to those effects. But I feel that the greatest issue is the reflection problem, and even random assignment won’t get around that. This is not to discount the value or interest these papers generate, but I can’t help but feel too much time is devoted to trying to convince the reader of a ‘causal effect’. A detailed exploration of the relationships in the data between parental incomes, average incomes, spatial variation, later-life outcomes, and so forth might have been more useful for generating understanding and future analyses. Perhaps sometimes in economics we spend too long obsessing over estimating unconvincing ‘causal effects’ in ‘quasi-experimental’ studies that really aren’t, and forget the value of just a good exploration of data with some nice plots.

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Evaluating the 2014 sugar-sweetened beverage tax in Chile: an observational study in urban areas. PLoS Medicine [PubMed] Published 3rd July 2018

Sugar taxes are one of the public health policy options currently in vogue. Countries including Mexico, the UK, South Africa, and Sri Lanka all have sugar taxes. The aim of such levies is to reduce demand for the most sugary drinks or, if the tax is absorbed on the supply side (which is rare), to encourage producers to reduce the sugar content of their drinks. One may also view it as a form of Pigouvian taxation to internalise the public health costs associated with obesity. Chile has long had an ad valorem tax on soft drinks fixed at 13%, but in 2014 decided to pursue a sugar tax approach. Drinks with more than 6.25g of sugar per 100ml saw their tax rate rise to 18%, and the tax on those below this threshold dropped to 10%. To understand what effect this change had, we would want to know three key things along the causal pathway from tax policy to sugar consumption: did people know about the tax change, did prices change, and did consumption behaviour change? On this latter point, we can consider both the overall volume of soft drinks purchased and whether people substituted low-sugar for high-sugar beverages. Using the Kantar Worldpanel, a household panel survey of purchasing behaviour, this paper examines these questions.

Everyone in Chile was affected by the tax, so there is no control group and we must rely on time series variation to identify the effect of the tax. Sometimes, looking at plots of the data reveals a clear step-change when an intervention is introduced (e.g. the plot in this post); not so in this paper. We therefore rely heavily on the results of the model for our inferences, and I have a couple of small gripes with it. First, the model captures household fixed effects, but no consideration is given to dynamic effects. Some households may be more or less likely to buy drinks, but their decisions are also likely to be affected by how much they’ve recently bought. Similarly, the errors may be correlated over time. Ignoring dynamic effects can lead to large biases. Second, the authors choose among different functional form specifications of time using the Akaike Information Criterion (AIC). While the AIC and the Bayesian Information Criterion (BIC) are often thought to be interchangeable, they are not: AIC targets predictive performance on future data, while BIC targets goodness of fit to the data at hand, so I would think BIC more appropriate here. Additional results show the estimates are very sensitive to the choice of functional form, varying by an order of magnitude and in sign. The authors estimate a fairly substantial decrease of around 22% in the volume of high-sugar drinks purchased, but find evidence that the price paid changed very little (~1.5%) and that there was little change in other drinks. While the analysis is generally careful and well thought out, I am not wholly convinced by the authors’ conclusion that “Our main estimates suggest a significant, sizeable reduction in the volume of high-tax soft drinks purchased.”
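On the AIC versus BIC point above: the two criteria differ only in how they penalise the number of parameters $k$ for a model with maximised likelihood $\hat{L}$ fitted to $n$ observations,

$$ \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln n - 2\ln\hat{L}, $$

so BIC penalises extra parameters more heavily whenever $n > e^{2} \approx 7.4$ and tends to select more parsimonious specifications.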

A Bayesian framework for health economic evaluation in studies with missing data. Health Economics [PubMed] Published 3rd July 2018

Missing data is a ubiquitous problem. I’ve never used a data set where no observations were missing and I doubt I’m alone. Despite its pervasiveness, it’s often only afforded an acknowledgement in the discussion or perhaps, in more complete analyses, something like multiple imputation will be used. Indeed, the majority of trials in the top medical journals don’t handle it correctly, if at all. The majority of the methods used for missing data in practice assume the data are ‘missing at random’ (MAR). One interpretation is that this means that, conditional on the observable variables, the probability of data being missing is independent of unobserved factors influencing the outcome. Another interpretation is that the distribution of the potentially missing data does not depend on whether they are actually missing. This interpretation comes from factorising the joint distribution of the outcome (call it $y$) and an indicator of whether the datum is observed ($r$), along with some covariates ($x$), into a conditional and a marginal model: $f(y, r \mid x) = f(y \mid r, x)\,f(r \mid x)$, a so-called pattern mixture model. This contrasts with the ‘selection model’ approach: $f(y, r \mid x) = f(r \mid y, x)\,f(y \mid x)$.

This paper considers a Bayesian approach using the pattern mixture model for missing data in health economic evaluation. Specifically, the authors specify a multivariate normal model for the data with an additional term in the mean if an observation is missing, i.e. the model of $f(y \mid r, x)$. A model is not specified for $f(r \mid x)$. If it were, then you would typically allow for correlation between the errors in this model and the main outcomes model. But one could view the additional term in the outcomes model as some function of the error from the observation model, somewhat akin to a control function. Instead, this article uses expert elicitation methods to generate a prior distribution for the unobserved terms in the outcomes model. While this is certainly a legitimate way forward in my eyes, I do wonder how specification of a full observation model would affect the results. The approach of this article is useful and they show that it works, and I don’t want to detract from that, but, given the lack of literature on missing data in this area, I am curious to compare approaches, including selection models. You could even add shared parameter models as an alternative, all of which are feasible. Perhaps an idea for a follow-up study. As a final point, the models are run in WinBUGS, but regular readers will know I think Stan is the future for estimating Bayesian models, especially in light of the problems with MCMC we’ve discussed previously. So equivalent Stan code would have been a bonus.
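Schematically, and collapsing the multivariate cost-effectiveness structure to a single outcome, the model described above amounts to something like a normal outcome model whose mean is shifted when the observation is missing, with the shift given an expert-elicited prior (my notation, not the authors’):

$$ y_{i} \mid r_{i} \sim N\!\big(\mu + \delta\,\mathbb{1}\{r_{i}=0\},\ \sigma^{2}\big), \qquad \delta \sim \pi_{\text{expert}}(\delta), $$

where $r_{i}=0$ indicates a missing observation. The departure-from-MAR parameter $\delta$ is not identified by the data alone, which is exactly why the elicited prior does the work.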

Trade challenges at the World Trade Organization to national noncommunicable disease prevention policies: a thematic document analysis of trade and health policy space. PLoS Medicine [PubMed] Published 26th June 2018

This is an economics blog. But focusing solely on economics papers in these round-ups would mean missing out on some papers from related fields that may provide insight into our own work. Thus I present to you a politics and sociology paper. It is not my field and I can’t give a reliable appraisal of the methods, but the results are of interest. In the global fight against non-communicable diseases, there is a range of policy tools available to governments, including the sugar tax of the paper at the top, and the WHO recommends a large number of them. However, there is ongoing debate about whether trade rules and agreements are used to undermine this kind of public health legislation. One agreement, the Technical Barriers to Trade (TBT) Agreement that all World Trade Organization (WTO) members sign, states that members may not impose ‘unnecessary trade costs’ or barriers to trade, especially if the intended aim of the measure can be achieved without doing so. For example, Philip Morris cited a bilateral trade agreement when it sued the Australian government for introducing plain packaging, claiming it violated the terms of trade. Philip Morris eventually lost, but not before substantial costs had been incurred. In another example, the Thai government was deterred from introducing a traffic-light warning system for food after threats of a trade dispute from the US, which cited WTO rules. However, there has been no clear evidence of the extent to which trade disputes have undermined public health measures.

This article presents results from a new database of all WTO TBT challenges. Between 1995 and 2016, 93 challenges were raised concerning food, beverage, and tobacco products, with the number per year growing over time. The most frequent challenges concerned the labelling of products, followed by restricted ingredients. The paper presents four case studies, including Indonesia delaying the labelling of fat, sugar, and salt in food after a challenge by several members including the EU, and many members, including the EU again and the US, objecting to the size and colour of a red STOP sign that Chile wanted to put on products high in sugar, fat, and salt.

We have previously discussed the politics and political economy around public health policy relating to e-cigarettes, among other things. Understanding the political economy of public health and phenomena like government failure can be as important as understanding markets and market failure in designing effective interventions.

HESG Summer 2018 was hosted by the University of Bristol at the Mercure Bristol Holland House on 20th-22nd June. The organisers did a superb job… the hotel was super comfortable, the food & drink were excellent, and the discussions were enlightening. So the Bristol team can feel satisfied with a job very well done, and one that has certainly set the bar high for the next HESG at York.

Day 1

I started by attending the engaging discussion by Mark Pennington on Tristan Snowsill’s paper on how to use moment-generating functions in cost-effectiveness modelling. Tristan has suggested a new method to model time-dependent disease progression, rather than using multiple tunnel states, or discrete event simulation. I think this could really be a game changer in decision modelling. But for me, the clear challenge will be in explaining the method in a simple way, so that modellers will feel comfortable in trying it out.

It was soon time to take the reins myself and chair the next session. The paper, by Joanna Thorn and colleagues, explored which items should be included in health economic analysis plans (HEAPs), with the discussion being led by David Turner. There was a very lively back-and-forth on the role of HEAPs and their relationship with the study protocol and statistical analysis plan. In my view, this highlighted how HEAPs can be a useful tool to set out the economic analysis, help plan resources and manage expectations from the wider project team.

My third session was the eye-opening discussion of Ian Ross’s paper on the time costs of open defecation in India, led by Julius Ohrnberger. It was truly astonishing to learn how prevalent the practice of open defecation is, and how much time is involved in finding a suitable location – an impact that would never have crossed my mind without this fascinating paper.

My last session of the day took in the discussion by Aideen Ahern of the thought-provoking paper by Tessa Peasgood and colleagues on identifying the dimensions that should be included in an instrument to measure health, social care and carer-related quality of life. Having an extended QALY weight covering health and care-related quality of life is almost the holy grail of preference measures. It would allow us to account for the impact of interventions in these two closely related areas of quality of life. The challenge is in generating an instrument that is both generic and sensitive. This extended-QALY weight is still under development, with the next step being to select the final set of dimensions for valuation.

The evening plenary session was on the hot-button topic of “Opportunities and challenges of Brexit for health economists” and included presentations by Paula Lorgelly, Andrew Street and Ellen Rule. We found ourselves jointly commiserating about the numerous challenges posed by the increased demand for health care and the decreased supply of health care professionals. But fortunately it wasn’t all doom and gloom, as Andrew Street suggested that future economic research may use Brexit as an exogenous shock. Clearly that is scant consolation, but it left the room in a positive mood to face dinner!

Day 2

It was time for one of my own papers on day 2, as we started with Nicky Welton discussing the paper by Alessandro Grosso, myself and other colleagues on structural uncertainties in cost-effectiveness modelling. We were delighted to receive excellent comments that will help to improve our paper. The session also prompted us to think about whether we should separate the model from the structural uncertainty analysis and create two distinct papers, which would allow us to explore and extend the latter even further. So, watch this space!

I attended Matthew Quaife’s discussion next, on the study by Katharina Diernberger and colleagues of expert elicitation to parameterise a cost-effectiveness model. Their expert elicitation had a whopping 47 responses, which allowed the team to explore different ways to aggregate the answers and demonstrate their impact on the results. This paper prompted a quick-fire discussion about how far to push decision modelling if data are scarce. Expert elicitation is often seen as the answer to scarce data but it is no silver bullet! Thanks to this paper, it is clear that the differing views among experts make a difference to the findings.

I continued along the modelling topic with the next session I’d chosen: Tracey Sach’s discussion of the excellent paper by Ieva Skarda and colleagues, which simulates the long-term consequences of childhood interventions. The paper prompted a lot of interest regarding the use of the results to inform the extrapolation of trials of short duration. The authors are looking at developing a tool to facilitate the use of the model by external researchers, which I’m sure will have a high take-up.

After lunch, I attended Tristan Snowsill’s discussion of the paper by Felix Achana and colleagues on regression models for the analysis of clinical trial data. Felix and colleagues propose multivariate generalised linear mixed effects models to account for centre-specific heterogeneity and to estimate effects on costs and outcomes simultaneously. Although the analysis is quite complex, the method has strong potential to be very useful in multinational trials. I was excited to hear that the authors are developing functions in Stata and R, which will make it much easier for analysts to use the method.

Keeping to the cost-effectiveness topic, I then attended Ed Wilson’s discussion of the paper by Laura Flight and colleagues on the risk of bias in adaptive RCTs. The paper discusses how an adaptive trial may be stopped early depending on an interim analysis; the caveat is that conducting multiple interim analyses requires adjusting for bias when the results are used to inform an economic analysis. This is an opportune paper, as the use of adaptive trial designs is on the rise, and definitely one I’ll make a note to refer to in the future.

For my final session of the day, I discussed Emma McManus’s paper on establishing a definition of model replication. Replication has been the subject of increased interest in the scientific community, but its take-up has been slow in health economics, the exception being cost-effectiveness modelling of diabetes. Well done to Emma and the team for bringing the topic to the forum! The ensuing discussion interestingly revealed that we can often have quite different concepts of what replication is and of its role in model validation. The authors are working on replicating published models, so I’m looking forward to hearing more about their experience at future meetings.

Day 3

The last day got off to a strong start when Andrew Street opened with a discussion of Joshua Kraindler and Ben Gershlick’s study on the impact of capital investment on hospital productivity. The session was both thought-provoking and extremely engaging, with Andrew encouraging our involvement by asking us all to think about the shape of a production function in order to better interpret the results. This timely discussion centred on the challenges of measuring capital investment in the NHS, given the paucity of data.

My final session was Francesco Ramponi’s paper on cross-sectoral economic evaluations, discussed by Mandy Maredza. This session was quite a record-breaker for HESG Bristol, enjoying probably the largest audience of the conference. Opportunely, it was able to shine a spotlight on the interest in expanding economic evaluations beyond decisions in health care, and the role of economic evaluations when costs and outcomes relate to different budgets and decision makers.

This HESG was, as always, a testament to the breadth of topics covered by health economists and to their hard work in pushing this important science onward. I’m now very much looking forward to seeing so many interesting papers published, many of which I will certainly use and reflect upon in my own research. Of course, I’m also very much looking forward to the next batch of new research at HESG in York. The date is firmly in my diary!

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Choice in the presence of experts: the role of general practitioners in patients’ hospital choice. Journal of Health Economics [PubMed] [RePEc] Published 26th June 2018

In the UK, patients are in principle free to choose which hospital they use for elective procedures. However, as these choices operate through a GP referral, the extent to which the choice is ‘free’ is limited. The choice set is provided by the GP and thus there are two decision-makers. It’s a classic example of the principal-agent relationship. What’s best for the patient and what’s best for the local health care budget might not align. The focus of this study is on the applied importance of this dynamic and the idea that econometric studies that ignore it – by looking only at patient decision-making or only at GP decision-making – may give biased estimates. The author outlines a two-stage model for the choice process that takes place. Hospital characteristics can affect choices in three ways: i) by only influencing the choice set that the GP presents to the patient, e.g. hospital quality, ii) by only influencing the patient’s choice from the set, e.g. hospital amenities, and iii) by influencing both, e.g. waiting times. The study uses Hospital Episode Statistics for 30,000 hip replacements that took place in 2011/12, referred by 4,721 GPs to 168 hospitals, to examine revealed preferences. The choice set for each patient is not observed, so a key assumption is that all hospitals to which a GP made referrals in the period are included in the choice set presented to patients. The main findings are that both GPs and patients are influenced primarily by distance. GPs are influenced by hospital quality and the budget impact of referrals, while distance and waiting times explain patient choices. For patients, parking spaces seem to be more important than mortality ratios. The results support the notion that patients defer to GPs in assessing quality. In places, it’s difficult to follow what the author did and why they did it. But in essence, the author is looking for (and in most cases finding) reasons not to ignore GPs’ preselection of choice sets when conducting econometric analyses involving patient choice. Econometricians should take note. And policymakers should be asking whether freedom of choice is sensible when patients prioritise parking and when variable GP incentives could give rise to heterogeneous standards of care.

Using evidence from randomised controlled trials in economic models: what information is relevant and is there a minimum amount of sample data required to make decisions? PharmacoEconomics [PubMed] Published 20th June 2018

You’re probably aware of the classic ‘irrelevance of inference’ argument. Statistical significance is irrelevant in deciding whether or not to fund a health technology, because we ought to do whatever we expect to be best on average. This new paper argues the case for irrelevance in other domains, namely multiplicity (e.g. multiple testing) and sample size. With a primer on hypothesis testing, the author sets out the regulatory perspective. Multiplicity inflates the chance of a type I error, so regulators worry about it. That’s why triallists often obsess over primary outcomes (and avoiding multiplicity). But when we build decision models, we rely on all sorts of outcomes from all sorts of studies, and QALYs are never the primary outcome. So what does this mean for reimbursement decision-making? Reimbursement is based on expected net benefit as derived using decision models, which are Bayesian by definition. Within a Bayesian framework of probabilistic sensitivity analysis, data for relevant parameters should never be disregarded on the basis of the status of their collection in a trial, and it is up to the analyst to specify a model that properly accounts for the effects of multiplicity and other sources of uncertainty. The author outlines how this operates in three settings: i) estimating treatment effects for rare events, ii) the number of trials available for a meta-analysis, and iii) the estimation of population mean overall survival. It isn’t so much that multiplicity and sample size are irrelevant – they can inform the analysis – but rather that no data are too weak for a Bayesian analyst.
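To restate the decision rule the irrelevance argument rests on (the generic net-benefit framework, rather than anything specific to this paper): with willingness to pay $\lambda$ per QALY, expected QALYs $\mathbb{E}[E_{j}]$ and expected costs $\mathbb{E}[C_{j}]$ for each option $j$, we simply choose the option maximising expected net benefit,

$$ \max_{j}\; \lambda\,\mathbb{E}[E_{j}] - \mathbb{E}[C_{j}], $$

and no significance test enters anywhere; remaining uncertainty matters for the value of collecting more information, not for the adoption decision itself.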

Life satisfaction, QALYs, and the monetary value of health. Social Science & Medicine [PubMed] Published 18th June 2018

One of this blog’s first ever posts was on the subject of ‘the well-being valuation approach‘ but, to date, I don’t think we’ve ever covered a study in the round-up that uses this method. In essence, the method is about estimating trade-offs between (for example) income and some measure of subjective well-being, or some health condition, in order to estimate the income equivalence for that state. This study attempts to estimate the (Australian) dollar value of QALYs, as measured using the SF-6D. Thus, the study is a rival cousin to the Claxton-esque opportunity cost approach, and a rival sibling to stated preference ‘social value of a QALY’ approaches. The authors are trying to identify a threshold value on the basis of revealed preferences. The analysis is conducted using 14 waves of the Australian HILDA panel, with more than 200,000 person-year responses. A regression model estimates the impact on life satisfaction of income, SF-6D index scores, and the presence of long-term conditions. The authors adopt an instrumental variable approach to try and address the endogeneity of life satisfaction and income, using an indicator of ‘financial worsening’ to approximate an income shock. The estimated value of a QALY is found to be around A$42,000 (~£23,500) over a 2-year period. Over the long-term, it’s higher, at around A$67,000 (~£37,500), because individuals are found to discount money differently to health. The results also demonstrate that individuals are willing to pay around A$2,000 to avoid a long-term condition on top of the value of a QALY. The authors apply their approach to a few examples from the literature to demonstrate the implications of using well-being valuation in the economic evaluation of health care. As with all uses of experienced utility in the health domain, adaptation is a big concern. But a key advantage is that this approach can be easily applied to large sets of survey data, giving powerful results. However, I haven’t quite got my head around how meaningful the results are. SF-6D index values – as used in this study – are generated on the basis of stated preferences. So to what extent are we measuring revealed preferences? And if it’s some combination of stated and revealed preference, how should we interpret willingness to pay values?
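For those unfamiliar with well-being valuation, a stylised version of the calculation (the paper’s actual specification, instruments and controls are more involved) is to estimate a life satisfaction equation and then solve for the income change that is equivalent, in life satisfaction terms, to a given change in health:

$$ LS_{it} = \alpha + \beta \ln Y_{it} + \gamma Q_{it} + X_{it}'\theta + \varepsilon_{it}, \qquad CV = Y\big(e^{\gamma \Delta Q/\beta} - 1\big), $$

where $Q$ is the SF-6D index, $Y$ is income, and $CV$ is the compensating income variation for a health change $\Delta Q$; setting $\Delta Q$ equal to a QALY-sized change gives a monetary value of a QALY.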

The Society for Medical Decision Making (SMDM) held their 17th European Conference between 10th and 12th June at the Stadsgehoorzaal in Leiden, the Netherlands. The meeting was chaired by Anne Stiggelbout and Ewout Steyerberg who, along with Uwe Siebert, welcomed us (early) on Monday morning. Some delegates arrived on Sunday for short courses on a range of topics, from modelling in R and causal inference to the psychology of decision making.

Although based in the US, SMDM holds biennial meetings in Europe which are generally attended by delegates from around the world. Around 300 delegates were in attendance at this meeting, travelling from Toronto to Tehran.

The meeting was ‘Patients Included’ and we were introduced to around 10 patients and caregivers on the first morning. They confidently asked questions and gave comments after the presentations and the plenary, sharing their real-world experience to provide context to findings.

There were five ‘oral abstract’ sessions, each comprising six presentations in 15-minute slots (10 minutes for the presentation and 5 minutes for audience questions). The sessions covered empirical research relating to physician and patient decision-making, and quantitative valuation and evaluation. Popular applied areas were prostate cancer, breast cancer and precision medicine.

Running in parallel to the oral presentations, workshops dealt with methodological issues relating to health economics, shared decision-making and psychology.

Four poster sessions, conveniently arranged around the refreshment table, attracted delegates in the mornings, during breaks and over lunch. SMDM provides some of the best poster sessions: posters are always of high quality, which means the sessions are always well attended.

Great science being discussed at the Monday poster session of #ESMDM18! #ValueBasedHealthCare #PatientsIncluded @SGZ_Leiden pic.twitter.com/rWobq86QAq

— SMDM (@socmdm) June 11, 2018

One of the highlights of the meeting was the plenary presentation by Sir David Spiegelhalter, who spoke about the challenges of communicating benefits and harms (often probabilities) impartially. Sir David gave examples from the UK’s national breast screening programme to show how presenting information can change people’s interpretation of risk. He also drew on examples of ‘nudges’, which may involve providing information in a persuasive rather than informative way in order to manipulate behaviour. Sir David showed us materials that had been redesigned to improve both patients’ and clinicians’ understanding of information on benefits and harms. The session concluded with a short video about how Ugandan primary school children have been reading comic strips to help them interpret information and find facts about the benefits and harms of healthcare interventions.

⁦David Spiegelhalter @d_spiegel⁩ presenting at #ESMDM18 on risk: old vs new style communication on cancer screening pic.twitter.com/5o99cnp5Uf

— Ewout Steyerberg (@ESteyerberg) June 11, 2018

The European SMDM meeting was thoroughly enjoyable and very interesting. The standard of oral and poster presentations was very high, and the environment was very friendly and conducive to networking.

The next North American meeting is in Montreal (October 2018) and the next European meeting will be in 2020 (location to be confirmed).

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

The efficiency of slacking off: evidence from the emergency department. Econometrica [RePEc] Published May 2018

Scheduling workers is a complex task, especially in large organisations such as hospitals. Not only should one consider when different shifts start throughout the day, but also how work is divided up over the course of each shift. Physicians, like anyone else, value their leisure time and want to go home at the end of a shift, and so they may behave differently as the end of a shift approaches. This paper explores how doctors in an emergency department behave at ‘end of shift’, in particular looking at whether doctors ‘slack off’ by accepting fewer patients or tasks and also whether they rush to finish those tasks they have. Both cases can introduce inefficiency, by either under-using their labour time or using resources too intensively to complete something. Immediately, from the plots of the raw data, it is possible to see a drop in patients ‘accepted’ both close to end of shift and close to the next shift beginning (if there is shift overlap). Most interestingly, after controlling for patient characteristics, time of day, and day of week, there is a decrease in the length of stay of patients accepted closer to the end of shift, which is ‘dose-dependent’ on time to end of shift. There are also marked increases in patient costs, orders, and inpatient admissions in the final hour of the shift. Assuming that only the number of patients assigned and not the type of patient changes over the course of a shift (a somewhat strong assumption despite the additional tests), this would suggest that doctors are rushing care and potentially providing sub-optimal or inefficient care closer to the end of their shift. The paper goes on to explore optimal scheduling on the basis of the results, among other things, but ultimately shows an interesting, if not unexpected, pattern of physician behaviour. The results relate mainly to efficiency, but it’d be interesting to see how they relate to quality in the form of preventable errors.

Semiparametric estimation of longitudinal medical cost trajectory. Journal of the American Statistical Association Published 19th June 2018

Modern computational and statistical methods have made it possible to estimate a range of statistical models that were hitherto intractable. This includes complex latent variable structures, non-linear models, and non- and semi-parametric models. Recently we covered the use of splines for semi-parametric modelling in our Method of the Month series. Not that complexity is everything of course, but given this rich toolbox to more faithfully replicate the data generating process, one does wonder why the humble linear model estimated with OLS remains so common. Nevertheless, I digress. This paper addresses the problem of estimating the medical cost trajectory for a given disease from diagnosis to death. There are two key issues: (i) the trajectory is likely to be non-linear, with costs probably increasing near death and possibly also higher immediately after diagnosis (a U-shape), and (ii) we don’t observe the costs of those who die, i.e. there is right-censoring. Such a set-up is also applicable in other cases, for example looking at health outcomes in panel data with informative dropout. The authors model medical costs for each month post-diagnosis and time of censoring (death) by factorising their joint distribution into a marginal model for censoring and a conditional model for medical costs given the censoring time. The likelihood then has contributions from the observed medical costs and their times, and the times of the censored outcomes. We then just need to specify the individual models. For medical costs, they use a multivariate normal with a mean function consisting of a bivariate spline of time and time of censoring. The time of censoring is modelled non-parametrically. This setup of the missing data problem is sometimes referred to as a pattern mixture model, in that the outcome is modelled as a mixture density over different populations dying at different times. The authors note that another possibility for handling informative missing data – the shared parameter model (to appear soon in another Method of the Month), which assumes outcomes and dropout are independent conditional on an underlying latent variable – was considered not to be estimable for complex non-linear structures. That approach can be more flexible, especially in cases with varying treatment effects. One wonders if the mixed model representation of penalised splines wouldn’t fit nicely in a shared parameter framework and provide at least as good inferences. An idea for a future paper perhaps… Nevertheless, the authors illustrate their method by replicating the well-documented U-shaped costs from the time of diagnosis in patients with stage IV breast cancer.
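
As a rough illustration of the conditional-model idea (not the authors’ estimator), the sketch below fits a mean cost surface as a bivariate spline in months since diagnosis and survival time, using simulated, fully observed patient-month records; all names and data are made up.

import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
survival = rng.integers(6, 36, 300)                       # months from diagnosis to death, one per patient
rows = [(m, s) for s in survival for m in range(int(s))]  # one row per observed patient-month
X = np.array(rows, dtype=float)
t, T = X[:, 0], X[:, 1]
# Simulated U-shape: costs high just after diagnosis and again near death
cost = 5 + 4 * np.exp(-t) + 6 * np.exp(-(T - t)) + rng.normal(scale=0.5, size=len(t))

# Spline bases in t (time since diagnosis) and T (time of death), plus their
# pairwise products to approximate a bivariate (tensor-product) spline surface
bases = SplineTransformer(degree=3, n_knots=6, include_bias=False).fit_transform(X)
k = bases.shape[1] // 2                                   # columns per variable
design = np.hstack([bases, (bases[:, :k, None] * bases[:, None, k:]).reshape(len(t), -1)])
fit = LinearRegression().fit(design, cost)
print(round(fit.score(design, cost), 3))                  # in-sample fit of the estimated cost surface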

Do environmental factors drive obesity? Evidence from international graduate students. Health Economics [PubMed] Published 21st June 2018

‘The environment’ can encompass any number of things, including social interactions and networks, politics, green space, and pollution. Sometimes referred to as ‘neighbourhood effects’, the impact of the shared environment above and beyond the effect of individual risk factors is of great interest to researchers and policymakers alike. But there are a number of substantive issues that hinder estimation of neighbourhood effects. For example, social stratification into neighbourhoods likely means people living together are similar, so it is difficult to compare like with like across neighbourhoods; trying to model neighbourhood choice will also, therefore, remove most of the variation in the data. Similarly, this lack of common support, i.e. overlap, between people from different neighbourhoods means estimated effects are not generalisable across the population. One way of getting around these problems is simply to randomise people to neighbourhoods. As odd as that sounds, that is what occurred in the Moving to Opportunity experiments and others. This paper takes a similar approach in trying to look at neighbourhood effects on the risk of obesity, by looking at the effects on international students of moving to different locales with different local obesity rates. The key identifying assumption is that the choice to move to different places is conditionally independent of the local obesity rate. This doesn’t seem like a strong assumption – I’ve never heard a prospective student ask about the weight of our student body. Some analysis supports this claim. The raw data and some further modelling show a pretty strong and robust relationship between local obesity rates and weight gain of the international students. Given the complexity of the causes and correlates of obesity (see the crazy diagram in this post) it is hard to discern why certain environments contribute to obesity. The paper presents some weak evidence of differences in unhealthy behaviours between high and low obesity places – but this doesn’t quite get at the environmental link, such as whether these behaviours are shared through social networks or through the structure and layout of the urban area. Nevertheless, this is strong evidence that living in an area where there are obese people means you’re more likely to become obese yourself.
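
For concreteness, here is a minimal sketch of the two regressions implied by that design: the main effect of the destination’s local obesity rate on weight gain, and a ‘balance’ check that pre-move characteristics do not predict that rate. Everything below is simulated and hypothetical, not the paper’s data or code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "local_obesity_rate": rng.uniform(0.15, 0.40, n),  # obesity share in the destination area (hypothetical)
    "baseline_bmi": rng.normal(23, 2, n),               # pre-move characteristic
    "age": rng.integers(21, 35, n),
})
df["weight_gain_kg"] = 1 + 8 * df["local_obesity_rate"] + rng.normal(scale=2, size=n)

# Main effect: does the destination's obesity rate predict subsequent weight gain?
main = smf.ols("weight_gain_kg ~ local_obesity_rate + baseline_bmi + age", data=df).fit()
# 'Balance' check: under conditional independence, pre-move characteristics
# should not predict which kind of area a student moves to
balance = smf.ols("local_obesity_rate ~ baseline_bmi + age", data=df).fit()
print(main.params["local_obesity_rate"], round(balance.rsquared, 4))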

Credits


On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Wenjia Zhu who has a PhD from Boston University. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title
Health plan innovations and health care costs in the commercial health insurance market
Supervisors
Randall P. Ellis, Thomas G. McGuire, Keith M. Ericson
Repository link
https://hdl.handle.net/2144/27355

What kinds of ‘innovations’ did you want to look at in your research, and why?

My dissertation investigated health plan “innovations” for cost containment, in which certain features are designed into health insurance contracts to influence how health care is delivered and utilized. While specifics may vary considerably across health plans, recent “innovations” feature two main strategies for constraining health spending. One is a demand-side strategy, which aims to reduce health care utilization through high cost-sharing on the consumer side. Plans using this strategy include “high-deductible” or “consumer-driven” health plans. The other is a supply-side strategy, in which insurers selectively contract with low-cost providers, restricting the set of providers consumers can access and thereby directing consumers towards those low-cost providers. Plans employing this strategy include “narrow network” health plans.

Despite an ongoing debate about whether the demand-side or supply-side strategy is more effective at reducing costs, there is little work to guide this debate due to challenges in causal inference, estimation, and measurement. As a result, the question of cost containment through insurance benefit designs remains largely unresolved. To shed light on this debate, I investigated these two strategies using a large, multiple-employer, multiple-insurer panel dataset, which allowed me to address various methodological challenges through the use of modern econometric tools and novel estimation methods.

How easy was it to access the data that you needed to answer your research questions?

The main data for my dissertation research come from the Truven Analytics’ MarketScan® Commercial Claims and Encounters Database, which contains administrative claims for a quarter of the U.S. population insured through their employment. I was fortunate to access this database through the data supplier’s existing contract with Boston University, and the process of accessing the data was relatively straightforward overall.

Occasionally I needed to refine my research questions or find alternative approaches because certain pieces of information were not available in this database and were hard to access elsewhere. For example, in Chapter 1, we did not further examine heterogeneity of plan coverage within plan types because detailed premiums or benefit features of health plans were not observed (Ellis and Zhu 2016). In Chapter 3, I sought out an alternative approach in lieu of the maximum likelihood (ML) method when estimating provider network breadth because provider identifiers were not coded consistently across health plans in my data, precluding the reliable construction of one key element in the ML method.

Your PhD research tackled several methodological challenges. Which was the most difficult to overcome?

In the course of my research, I found myself in constant need of estimating models that require controlling for multiple fixed effects, each of high dimension (something we called “high-dimensional fixed effects”). One example is health care utilization models that control for provider, patient, and county fixed effects. In these models, however, estimation often became computationally infeasible in the presence of large sample sizes and unbalanced panel datasets. Traditional approaches to absorbing fixed effects no longer worked, and models with billions of data points could barely be handled in Stata, even with convenient user-written commands such as reghdfe.

This motivated me and my coauthors to devote an entire chapter of my dissertation to this issue. In Chapter 2, we developed a new algorithm that estimates models with multiple high-dimensional fixed effects while accommodating such features as unbalanced panels, instrumental variables, and cluster-robust variance estimation. The key to our approach is an iterative process of sequentially absorbing fixed effects based on the Frisch-Waugh-Lovell theorem. By implementing the algorithm as a SAS macro that does not require all data to reside in core memory, we can handle datasets of essentially any size.
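
As a rough illustration of the idea (not the authors’ SAS macro), the Python sketch below absorbs two high-dimensional fixed effects by iteratively demeaning the outcome and regressor within each group; by the Frisch-Waugh-Lovell theorem, OLS on the residualised data recovers the same slope as the full dummy-variable regression. All names and data are simulated.

import numpy as np
import pandas as pd

def absorb_fixed_effects(df, cols, fe_cols, tol=1e-10, max_iter=500):
    # Alternating projections: repeatedly subtract group means for each
    # fixed-effect dimension until the data stop changing
    Z = df[cols].astype(float).copy()
    for _ in range(max_iter):
        before = Z.to_numpy().copy()
        for fe in fe_cols:
            Z = Z - Z.groupby(df[fe]).transform("mean")
        if np.max(np.abs(Z.to_numpy() - before)) < tol:
            break
    return Z

rng = np.random.default_rng(3)
n = 10_000
df = pd.DataFrame({
    "provider": rng.integers(0, 800, n),   # first high-dimensional fixed effect
    "patient": rng.integers(0, 3000, n),   # second high-dimensional fixed effect
})
df["x"] = rng.normal(size=n) + 0.2 * (df["provider"] % 7)   # regressor correlated with the provider effect
df["y"] = (2.0 * df["x"] + 0.5 * (df["provider"] % 7)
           + 0.05 * (df["patient"] % 11) + rng.normal(size=n))

resid = absorb_fixed_effects(df, ["y", "x"], ["provider", "patient"])
beta = np.linalg.lstsq(resid[["x"]].to_numpy(), resid["y"].to_numpy(), rcond=None)[0]
print(beta)  # ~2.0: the within-estimate of the slope on x, as FWL implies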

Did you identify any health plan designs that reduced health care costs?

Certainly. My dissertation shows that health plans that manage care – imposing cost-sharing, requiring gatekeepers, or restricting consumer choice of providers – spent much less (on procedures) compared to comprehensive insurance plans that do not have any of these “care management” elements, even after controlling for patient selection into plan types.

On the other hand, we did not find evidence that either of the new health plan “innovations” – high cost-sharing or narrow networks – particularly saved health care costs compared to Preferred Provider Organizations (PPOs) (Ellis and Zhu 2016). One possibility is that incentives to control one aspect of spending create compensating effects in other aspects. For example, although high-deductible/consumer-driven health plans shift cost responsibility from employers to enrollees, they did not reduce health care spending due to higher provider prices and higher coding intensity. Similarly, while narrow network plans reduced treatment utilization, they did so mostly for the less severely ill, creating the offsetting incentive of up-coding by providers on the remaining sicker patients.

Based on your findings, what would be your first recommendation to policymakers?

To improve the effectiveness of health care cost containment, my first recommendation to policymakers would be to design mechanisms to more effectively monitor and reduce service prices.

My dissertation shows that while tremendous efforts have been made by health plans to design mechanisms to manage health care utilization (e.g., through imposing higher cost-sharing on consumers) and to direct patients to certain providers (e.g., through selective contracting), overall cost containment, if any, has been rather modest due to insufficient price reductions. For example, we found that high-deductible/consumer-driven health plans had significantly higher average procedure prices than PPOs (Ellis and Zhu 2016). Even for narrow network plans, in which insurers selectively contract with providers, we did not find evidence that these plans were successful in keeping low-cost providers. Difficulties in keeping prices down may reflect unbalanced bargaining power between insurers and providers, as well as the particular challenges consumers face when price-shopping under complex insurance contract designs (Brot-Goldberg et al. 2017).
