Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Value of information methods to design a clinical trial in a small population to optimise a health economic utility function. BMC Medical Research Methodology [PubMed] Published 8th February 2018

Statistical significance – whatever you think of it – and the ‘power’ of clinical trials to detect change are important deciders in clinical decision-making. Trials are designed to be big enough to detect ‘statistically significant’ differences. But in the context of rare diseases, this can be nigh-on impossible. In theory, the required sample size could exceed the size of the whole population. This paper describes an alternative method for determining sample sizes for trials in this context, couched in a value of information framework. Generally speaking, power calculations ignore the ‘value’ or ‘cost’ associated with errors, while a value of information analysis would take this into account and allow accepted error rates to vary accordingly. The starting point for this study is the notion that sample sizes should take into account the size of the population to which the findings will be applicable. As such, sample sizes can be defined on the basis of maximising the expected (societal) utility associated with the conduct of the trial (whether the intervention is approved or not). The authors describe the basis for hypothesis testing within this framework and specify the utility function to be maximised. Honestly, I didn’t completely follow the stats notation in this paper, but that’s OK – the trial statisticians will get it. A case study application is presented from the context of treating children with severe haemophilia A, which demonstrates the potential to optimise utility according to sample size. The key point is that the power is much smaller than would be required by conventional methods and the sample size accordingly reduced. The authors also demonstrate the tendency for the optimal trial sample size to increase with the size of the population. This Bayesian approach at least partly undermines the frequentist basis on which ‘power’ is usually determined. So one issue is whether regulators will accept this as a basis for defining a trial that will determine clinical practice. But then regulators are increasingly willing to allow for special cases, and it seems that the context of rare diseases could be a way-in for Bayesian trial design of this sort.
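For intuition, here is a minimal sketch of the general idea: pick the sample size that maximises an expected utility which weights the chance of a ‘positive’ trial by the size of the population left to benefit, net of recruitment costs. The utility function, effect size, and cost parameters below are invented for illustration and are not those used in the paper.

```python
import numpy as np
from scipy.stats import norm

def expected_utility(n, N=5000, delta=0.3, sigma=1.0, alpha=0.05,
                     benefit=1.0, cost_per_patient=2.0):
    """Expected societal utility of a two-arm trial with n patients per arm.

    With a true effect of delta, the trial is 'positive' with probability equal
    to its power; a positive result means the remaining (N - 2n) patients in the
    population receive the per-patient benefit. Recruitment costs are netted off.
    All parameter values are illustrative, not those used in the paper.
    """
    se = sigma * np.sqrt(2.0 / n)                      # SE of the mean difference
    power = 1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) - delta / se)
    remaining = max(N - 2 * n, 0)
    return power * remaining * delta * benefit - 2 * n * cost_per_patient

sizes = np.arange(10, 1001, 10)
utilities = [expected_utility(int(n)) for n in sizes]
n_star = int(sizes[int(np.argmax(utilities))])
print(f"Utility-maximising sample size: {n_star} per arm")
# Re-running with a larger population N pushes n_star up, mirroring the paper's
# finding that the optimal trial size grows with the size of the population.
```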

EQ-5D-5L: smaller steps but a major step change? Health Economics [PubMed] Published 7th February 2018

This editorial was doing the rounds on Twitter last week. European (and Canadian) health economists love talking about the EQ-5D-5L. The editorial features in the edition of Health Economics that hosts the 5L value set for England, which – 2 years on – has finally satisfied the vagaries of academic publication. The authors provide a summary of what’s ‘new’ with the 5L, and why it matters. But we’ve probably all figured that out by now anyway. More interestingly, the editorial points out some remaining concerns with the use of the EQ-5D-5L in England (even if it is way better than the EQ-5D-3L and its 25-year-old value set). For example, there is some clustering in the valuations that might reflect bias or problems with the valuation technique and that – even if the values are accurate – presents difficulties for analysts. And there are also uncertain implications for decision-making that could systematically favour or disfavour particular treatments or groups of patients. On this basis, the authors support NICE’s decision to ‘pause’ and await independent review. I tend to disagree, for reasons that I can’t fit in this round-up, so come back tomorrow for a follow-up blog post.

Factors influencing health-related quality of life in patients with Type 1 diabetes. Health and Quality of Life Outcomes [PubMed] Published 2nd February 2018

Diabetes and its complications can impact upon almost every aspect of a person’s health. It isn’t clear what aspects of health-related quality of life might be amenable to improvement in people with Type 1 diabetes, or which characteristics should be targeted. This study looks at a cohort of trial participants (n=437) and uses regression analyses to determine which factors explain differences in health-related quality of life at baseline, as measured using the EQ-5D-3L. Age, HbA1c, disease duration and being obese all significantly influenced EQ-VAS values, while self-reported mental illness and unemployment status were negatively associated with EQ-5D index scores. People who were unemployed were more likely to report problems in the mobility, self-care, and pain/discomfort domains. There are some minor misinterpretations in the paper (divining a ‘reduction’ in scores from a cross-section, for example). And the use of standard linear regression models is questionable given the nature of EQ-5D-3L index values. But the findings demonstrate the importance of looking beyond the direct consequences of a disease in order to identify the causes of reduced health-related quality of life. Getting people back to work could be more effective than most health care as a means of improving health-related quality of life.

Financial incentives for chronic disease management: results and limitations of 2 randomized clinical trials with New York Medicaid patients. American Journal of Health Promotion [PubMed] Published 1st February 2018

Chronic diseases require (self-)management, but it isn’t always easy to ensure that patients adhere to the medication or lifestyle changes that could improve health outcomes. This study looks at the effectiveness of financial incentives in the context of diabetes and hypertension. The data are drawn from 2 RCTs (n=1879) which, together, considered 3 types of incentive – process-based, outcome-based, or a combination of the two – compared with no financial incentives. Process-based incentives rewarded participants for attending primary care or endocrinologist appointments and filling their prescriptions, up to a maximum of $250. Outcome-based incentives rewarded up to $250 for achieving target reductions in systolic blood pressure or blood glucose levels. The combined arms could receive both rewards up to the same maximum of $250. In short, none of the financial incentives made any real difference. But generally speaking, at 6-month follow-up, the movement was in the right direction, with average blood pressure and blood glucose levels tending to fall in all arms. It’s not often that authors include the word ‘limitations’ in the title of a paper, but it’s the limitations that are most interesting here. One key difficulty is that most of the participants had relatively acceptable levels of the target outcomes at baseline, meaning that they may already have been managing their disease well and there may not have been much room for improvement. It would be easy to interpret these findings as showing that – generally speaking – financial incentives aren’t effective. But the study is more useful as a way of demonstrating the circumstances in which we can expect financial incentives to be ineffective, and of supporting better-informed targeting of future programmes.

On the third Thursday of every month, we speak to a recent graduate about their thesis and their studies. This month’s guest is Dr Francesco Longo who has a PhD from the University of York. If you would like to suggest a candidate for an upcoming Thesis Thursday, get in touch.

Title: Essays on hospital performance in England
Supervisor: Luigi Siciliani
Repository link: http://etheses.whiterose.ac.uk/18975/

What do you mean by ‘hospital performance’, and how is it measured?

The concept of performance in the healthcare sector covers a number of dimensions including responsiveness, affordability, accessibility, quality, and efficiency. A PhD does not normally provide enough time to investigate all these aspects and, hence, my thesis mostly focuses on quality and efficiency in the hospital sector. The concept of quality or efficiency of a hospital is also surprisingly broad and, as a consequence, perfect quality and efficiency measures do not exist. For example, mortality and readmissions are good clinical quality measures but the majority of hospital patients do not die and are not readmitted. How well does the hospital treat these patients? Similarly for efficiency: knowing that a hospital is more efficient because it now has lower costs is essential, but how is that hospital actually reducing costs? My thesis also tries to answer these questions by analysing various quality and efficiency indicators. For example, Chapter 3 uses quality measures such as overall and condition-specific mortality, overall readmissions, and patient-reported outcomes for hip replacement. It also uses efficiency indicators such as bed occupancy, cancelled elective operations, and cost indexes. Chapter 4 analyses additional efficiency indicators, such as admissions per bed, the proportion of day cases, and the proportion of untouched meals.

You dedicated a lot of effort to comparing specialist and general hospitals. Why is this important?

The first part of my thesis focuses on specialisation, i.e. an organisational form which is supposed to generate greater efficiency, quality, and responsiveness but not necessarily lower costs. Some evidence from the US suggests that orthopaedic and surgical hospitals had 20 percent higher inpatient costs because of, for example, higher staffing levels and better quality of care. In the English NHS, specialist hospitals play an important role because they deliver high proportions of specialised services, commonly low-volume but high-cost treatments for patients with complex and rare conditions. Specialist hospitals, therefore, allow the achievement of a critical mass of clinical expertise to ensure patients receive specialised treatments that produce better health outcomes. More precisely, my thesis focuses on specialist orthopaedic hospitals which, for instance, provide 90% of bone and soft tissue sarcoma surgeries, and 50% of scoliosis treatments. It is therefore important to investigate the financial viability of specialist orthopaedic hospitals relative to general hospitals that undertake similar activities, under the current payment system. The thesis implements weighted least squares regressions to compare profit margins between specialist and general hospitals. Specialist orthopaedic hospitals are found to have lower profit margins, which are explained by patient characteristics such as age and severity. This means that, under the current payment system, providers that generally attract more complex patients such as specialist orthopaedic hospitals may be financially disadvantaged.
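For readers unfamiliar with the approach, here is a hypothetical sketch of a weighted least squares comparison of margins between specialist and general providers, with and without adjustment for patient characteristics. The variable names, the simulated data, and the use of hospital activity as the weight are my assumptions for illustration, not details taken from the thesis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated hospital-level data: profit margins, a specialist indicator, and
# patient-mix characteristics. All values are made up for illustration.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "specialist": rng.integers(0, 2, n),
    "mean_age": rng.normal(60, 10, n),
    "severity": rng.normal(0, 1, n),
    "activity": rng.integers(500, 20_000, n),   # used here as the WLS weight
})
df["margin"] = (2.0 - 1.5 * df["specialist"] - 0.8 * df["severity"]
                + rng.normal(0, 2, n))

# Unadjusted gap in margins between specialist and general hospitals
wls_raw = smf.wls("margin ~ specialist", data=df, weights=df["activity"]).fit()
# Does adjusting for patient characteristics explain the gap?
wls_adj = smf.wls("margin ~ specialist + mean_age + severity",
                  data=df, weights=df["activity"]).fit()
print(wls_raw.params["specialist"], wls_adj.params["specialist"])
```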

In what way is your analysis of competition in the NHS distinct from that of previous studies?

The second part of my thesis investigates the effect of competition on quality and efficiency from two different perspectives. First, it explores whether under competitive pressures neighbouring hospitals strategically interact in quality and efficiency, i.e. whether a hospital’s quality and efficiency respond to neighbouring hospitals’ quality and efficiency. Previous studies on English hospitals analyse strategic interactions only in quality and they employ cross-sectional spatial econometric models. Instead, my thesis uses panel spatial econometric models and a cross-sectional IV model in order to make causal statements about the existence of strategic interactions among rival hospitals. Second, the thesis examines the direct effect of hospital competition on efficiency. The previous empirical literature has studied this topic by focusing on two measures of efficiency – unit costs and length of stay – measured at the aggregate level or for a specific procedure (hip and knee replacement). My thesis provides a richer analysis by examining a wider range of efficiency dimensions. It combines a difference-in-difference strategy, commonly used in the literature, with Seemingly Unrelated Regression models to estimate the effect of competition on efficiency and enhance the precision of the estimates. Moreover, the thesis tests whether the effect of competition varies for more or less efficient hospitals using an unconditional quantile regression approach.

Where should researchers turn next to help policymakers understand hospital performance?

Hospitals are complex organisations and the idea of performance within this context is multifaceted. Even when we focus on a single performance dimension such as quality or efficiency, it is difficult to identify a measure that could work as a comprehensive proxy. It is therefore important to decompose the analysis as much as possible by exploring indicators capturing complementary aspects of the performance dimension of interest. This practice is likely to generate findings that are readily interpretable by policymakers. For instance, some results from my thesis suggest that hospital competition improves efficiency by reducing admissions per bed. Such an effect is driven by a reduction in the number of beds rather than an increase in the number of admissions. In addition, competition improves efficiency by pushing hospitals to increase the proportion of day cases. These findings may help to explain why other studies in the literature find that competition decreases length of stay: hospitals may replace elective patients, who occupy hospital beds for one or more nights, with day case patients, who are instead likely to be discharged on the same day as admission.

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Tuskegee and the health of black men. The Quarterly Journal of Economics [RePEc] Published February 2018

In 1932, a study often considered the most infamous and potentially most unethical in U.S. medical history began. Researchers in Alabama enrolled impoverished black men in a research program designed to examine the effects of syphilis under the guise of receiving government-funded health care. The study was known as the Tuskegee syphilis experiment. For 40 years the research subjects were not informed they had syphilis nor were they treated, even after penicillin was shown to be effective. The study was terminated in 1972 after its details were leaked to the press; numerous men died, 40 wives contracted syphilis, and a number of children were born with congenital syphilis. It is no surprise then that there is distrust among African Americans in the medical system. The aim of this article is to examine whether the distrust engendered by the Tuskegee study could have contributed to the significant differences in health outcomes between black males and other groups. To derive a causal estimate the study makes use of a number of differences: black vs non-black, for obvious reasons; male vs female, since the study targeted males, and also since women were more likely to have had contact with and hence higher trust in the medical system; before vs after; and geographic differences, since proximity to the location of the study may be informative about trust in the local health care facilities. A wide variety of further checks reinforce the conclusions that the study led to a reduction in health care utilisation among black men of around 20%. The effect is particularly pronounced in those with low education and income. Beyond elucidating the indirect harms caused by this most heinous of studies, it illustrates the importance of trust in mediating the effectiveness of public institutions. Poor reputations caused by negligence and malpractice can spread far and wide – the mid-Staffordshire hospital scandal may be just such an example.
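The identification strategy is easier to see in code. Below is a toy sketch of the triple-interaction logic (black × male × post-disclosure) on simulated data; the actual paper also exploits geographic proximity to Tuskegee and a battery of controls, none of which are reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated individual-level data to illustrate the identification idea only.
rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({
    "black": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),   # observed after the 1972 disclosure
})
# Build in a 20% relative drop in utilisation for black men post-disclosure.
base = 0.6
df["utilisation"] = rng.binomial(
    1, base * (1 - 0.2 * df["black"] * df["male"] * df["post"]))

# The triple interaction carries the effect of interest; the paper additionally
# interacts with geographic proximity to the study and adds controls.
model = smf.ols("utilisation ~ black * male * post", data=df).fit(cov_type="HC1")
print(model.params["black:male:post"])
```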

The economic consequences of hospital admissions. American Economic Review [RePEc] Published February 2018

That this paper’s title recalls that of Keynes’s book The Economic Consequences of the Peace is to my mind no mistake. Keynes argued that a generous and equitable post-war settlement was required to ensure peace and economic well-being in Europe. The slow ‘economic privation’ driven by the punitive measures and imposed austerity of the Treaty of Versailles would lead to crisis. Keynes was evidently highly critical of the conference that led to the Treaty and resigned in protest before its end. But what does this have to do with hospital admissions? Using an ‘event study’ approach – in essence regressing the outcome of interest on covariates including indicators of time relative to an event – the paper examines the impact hospital admissions have on a range of economic outcomes. The authors find that for insured non-elderly adults “hospital admissions increase out-of-pocket medical spending, unpaid medical bills, and bankruptcy, and reduce earnings, income, access to credit, and consumer borrowing.” Similarly, they estimate that hospital admissions among this same group are responsible for around 4% of bankruptcies annually. These losses are often not insured, but they note that in a number of European countries the social welfare system does provide assistance for lost wages in the event of hospital admission. Certainly, this could be construed as economic privation brought about by a lack of generosity of the state. Nevertheless, it also reinforces the fact that negative health shocks can have adverse consequences through a person’s life beyond those directly caused by the need for medical care.
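For anyone unfamiliar with the approach, here is a stripped-down sketch of an event-study specification – dummies for time relative to the admission, with the year before the event as the reference period – on simulated data. Variable names and values are invented; the paper’s specification is much richer.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated person-year panel indexed by event time (years since admission).
rng = np.random.default_rng(2)
people, span = 200, np.arange(-4, 5)
df = pd.DataFrame({
    "person": np.repeat(np.arange(people), len(span)),
    "event_time": np.tile(span, people),
})
df["earnings"] = (30_000 + rng.normal(0, 2_000, len(df))
                  - 3_000 * (df["event_time"] >= 0))   # earnings drop after admission

# Dummies for each event time, with t = -1 as the omitted reference period,
# plus person fixed effects via C(person).
df["et"] = pd.Categorical(df["event_time"],
                          categories=[-1, -4, -3, -2, 0, 1, 2, 3, 4])
model = smf.ols("earnings ~ C(et) + C(person)", data=df).fit()
print(model.params.filter(like="C(et)"))   # earnings path relative to t = -1
```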

Is health care infected by Baumol’s cost disease? Test of a new model. Health Economics [PubMed] [RePEc] Published 9th February 2018

A few years ago we discussed Baumol’s theory of the ‘cost disease’ and an empirical study trying to identify it. In brief, the theory supposes that spending on health care (and other labour-intensive or creative industries) as a proportion of GDP increases, at least in part, because these sectors experience the least productivity growth. Productivity increases the fastest in sectors like manufacturing and remuneration increases as a result. However, if pay only tracked each sector’s productivity, wages in the most productive sectors would outstrip those in the ‘stagnant’ sectors. For example, salaries for doctors would end up being less than those for low-skilled factory work. Because sectors compete for the same workers, wages therefore increase in the stagnant sectors despite a lack of productivity growth. The consequence of all this is that as GDP grows, the proportion spent on stagnant sectors increases, but importantly the absolute amount spent on the productive sectors does not decrease. The share of the pie gets bigger but the pie is growing at least as fast, as it were. To test this, this article starts with a theoretical two-sector model to develop some testable predictions. In particular, the authors posit that the cost disease implies: (i) productivity is related to the share of labour in the health sector, and (ii) productivity is related to the ratio of prices in the health and non-health sectors. Using data from 28 OECD countries between 1995 and 2016 as well as further data on US industry groups, they find no evidence to support these predictions, nor others generated by their model. One reason for this could be that wages in the last ten years or more have not risen in line with productivity in manufacturing or other ‘productive’ sectors, or that productivity has indeed increased as fast as the rest of the economy in the health care sector. Indeed, we have discussed productivity growth in the health sector in England and Wales previously. The cost disease may well then not be a cause of rising health care costs – nevertheless, health care need is rising and we should still expect costs to rise concordantly.
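The mechanism is easy to see in a toy simulation (below): if wages track productivity in the ‘productive’ sector and are equalised across sectors, the relative price of health care rises and its share of spending grows, while spending on the productive sector need not fall. This is only a caricature of the logic, not the authors’ two-sector model, and every number in it is invented.

```python
import numpy as np

years = 50
g = 0.02                                    # productivity growth, productive sector
prod_productivity = (1 + g) ** np.arange(years)
health_productivity = np.ones(years)        # stagnant sector: no productivity growth

# Wages are set by the productive sector and equalised across sectors, so the
# unit cost (price) of health care rises with economy-wide wages.
wage = prod_productivity
price_health = wage / health_productivity   # rises at rate g
price_goods = wage / prod_productivity      # stays flat

# Hold real quantities per head fixed in both sectors, purely for simplicity.
spend_health = price_health * 1.0
spend_goods = price_goods * 1.0
share_health = spend_health / (spend_health + spend_goods)

print(f"health share of spending: {share_health[0]:.2f} -> {share_health[-1]:.2f}")
print(f"spending on the productive sector: {spend_goods[0]:.2f} -> {spend_goods[-1]:.2f}")
```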

Once a month we discuss a particular research method that may be of interest to people working in health economics. We’ll consider widely used key methodologies, as well as more novel approaches. Our reviews are not designed to be comprehensive but provide an introduction to the method, its underlying principles, some applied examples, and where to find out more. If you’d like to write a post for this series, get in touch. This month’s method is Q methodology.

Principles

There are many situations in which we might be interested in people’s views, opinions or beliefs about an issue, such as how we allocate health care resources or the type of care we provide to dementia patients. Typically, health economists may think about using qualitative methods or preference elicitation techniques, but Q methodology could be your new method to examine these questions. Q methodology combines qualitative and quantitative techniques which allow us to first identify the range of views that exist on a topic and then describe those viewpoints in depth.

Q methodology was conceived by William Stephenson as a way to study subjectivity and is detailed in his 1953 book The Study of Behaviour. A more widely available book by Watts and Stenner (2012) provides a great general introduction to all stages of a Q study and the paper by Baker et al (2006) introduces Q methodology in health economics.

Implementation

There are two main stages in a Q methodology study. In the first stage, participants express their views through the rank-ordering of a set of statements known as the Q sort. The second stage uses factor analysis to identify patterns of similarity between the Q sorts, which can then be described in detail.

Stage 1: Developing the statements and Q sorting

The most important part of any Q study is the development of the statements that your participants will rank-order. The starting point is to identify all of the possible views on your topic. Participants should be able to interpret the statements as opinions rather than facts, for example, “The amount of health care people have had in the past should not influence access to treatments in the future”. The statements can come from a range of sources including interview transcripts, public consultations, academic literature, newspapers and social media. Through a process of eliminating duplicates, merging and deleting similar statements, you want to end up with a smaller set of statements that is representative of the population of views that exist on your topic. Pilot these statements in a small number of Q sorts before finalising them and starting your main data collection.

The next thing to consider is from whom you are going to collect Q sorts. Participant sampling in Q methodology is similar to that of qualitative methods where you are looking to identify ‘data rich’ participants. It is not about representativeness according to demographics; instead, you want to include participants who have strong and differing views on your topic. Typically this would be around 30 to 60 people. Once you have selected your sample you can conduct your Q sorts. Here, each of your participants rank-orders the set of statements according to an instruction, for example from ‘most agree to most disagree’ or ‘highest priority to lowest priority’. At the end of each Q sort, a short interview is conducted asking participants to summarise their opinions on the Q sort and give further explanation for the placing of selected statements.

Stage 2: Analysis and interpretation

In the analysis stage, the aim is to identify people who have ranked their statements in a similar way. This involves calculating the correlations between the participants’ Q sorts (the full ranking of all statements) to form a correlation matrix which is then subject to factor analysis. The software outlined in the next section can help you with this. The factor analysis will produce a number of statistically significant solutions and your role as the analyst is to decide how many factors you retain for interpretation. This will be an iterative process in which you consider the internal coherence of each factor (does the ranking of the statements make sense? does it align with the comments made by the participants following the Q sort?) as well as statistical considerations such as eigenvalues. The factors are idealised Q sorts that are a complete ranking of all statements, essentially representing the way a respondent who had a correlation coefficient of 1 with the factor would have ranked their statements. The final step is to provide a descriptive account of the factors, looking at the positioning of each statement in relation to the other statements and drawing on the post-sort interviews to support and aid your interpretation.
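To make the analysis step concrete, here is a minimal sketch of the by-person correlation and factor extraction at the heart of a Q analysis, using simulated sorts. Dedicated software such as PQMethod offers centroid extraction, rotation, and proper flagging of defining sorts; this shows only the core idea, and the cut-offs used are arbitrary.

```python
import numpy as np

# Simulated Q sorts: 30 participants each rank-order 40 statements on a grid
# from -4 ('most disagree') to +4 ('most agree'). Values are random placeholders.
rng = np.random.default_rng(3)
n_participants, n_statements = 30, 40
sorts = rng.integers(-4, 5, size=(n_participants, n_statements)).astype(float)

# Step 1: by-person correlation matrix (people, not statements, are the 'variables').
corr = np.corrcoef(sorts)                  # shape (30, 30)

# Step 2: factor extraction via eigendecomposition; this is only the bare-bones idea.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_factors = int(np.sum(eigvals > 1.0))     # Kaiser criterion as a starting point
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

# Step 3: a crude idealised Q sort per factor, averaging the sorts of the
# participants who load strongly (> 0.4 here, an arbitrary cut-off).
idealised = np.vstack([
    sorts[loadings[:, k] > 0.4].mean(axis=0) if np.any(loadings[:, k] > 0.4)
    else np.zeros(n_statements)
    for k in range(n_factors)
])
print(f"{n_factors} factors retained; idealised sorts shape {idealised.shape}")
```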

Software

There are a small number of software packages available to analyse your Q data, most of which are free to use. The most widely used programme is PQMethod. It is a DOS-based programme which often causes nervousness for newcomers due to the old school black screen and the requirement to step away from the mouse, but it is actually easy to navigate when you get going and it produces all of the output you need to interpret your Q sorts. There is the newer (and also free) KenQ that is receiving good reviews and has a more up-to-date web-based navigation, but I must confess I like my old time PQMethod. Details on all of the software and where to access these can be found on the Q methodology website.

Applications

Q methodology studies have been conducted with patient groups and the general public. In patient groups, the aim is often to understand their views on the type of care they receive or options for future care. Examples include the views of young people on the transition from paediatric to adult health care services and the views of dementia patients and their carers on good end of life care. The results of these types of Q studies have been used to inform the design of new interventions or to provide attributes for future preference elicitation studies.

We have also used Q methodology to investigate the views of the general public in a range of European countries on the principles that should underlie health care resource allocation as part of the EuroVaQ project. More recently, Q methodology has been used to identify societal views on the provision of life-extending treatments for people with a terminal illness. This programme of work highlighted three viewpoints and a connected survey found that there was not one dominant viewpoint. This may help to explain why – after a number of preference elicitation studies in this area – we still cannot provide a definitive answer on whether an end of life premium exists. The survey mentioned in the end of life work refers to the Q2S (Q to survey) approach, which is a method linked to Q methodology… but that is for another blog post!

Authors of a paper entitled ‘What can health psychologists learn from health economics: from monetary incentives to policy programmes’ note that they

“…believe that health psychologists would benefit from greater familiarisation with the methodologies, theories, and tools of economics”.

I wonder what direction a research paper examining what health economists could learn from health psychologists or health behaviour change researchers would take, particularly in the area of intervention design and evaluation. Intervention design and evaluation is an area in which multidisciplinary research has become the name of the game! Over the last decade, the intersection of economics and psychology has become even more apparent. The announcement of the 2017 Nobel Memorial Prize in economics, awarded to Richard Thaler for his groundbreaking work incorporating psychology into economic theory, was a victory not only for the professor but also for behaviourally-informed policy worldwide.

A workshop

As the Intensive Follow-Up Workshop on Designing Effective Interventions for Health Behaviour Change approaches, I reflect back on the introductory workshop that I attended at the Health Behaviour Change Research Group (HBCRG), NUI Galway. The aim of that workshop was for participants to learn about, and practice using, emerging methods for designing and evaluating behavioural interventions. It was delivered by Dr Molly Byrne, Dr Jenny McSharry and Milou Fredrix.

My background is in health economics. I currently work in a large multidisciplinary team (we call ourselves the CHErIsH team!): a melting pot of health behaviour change, health psychology, public health, health economics and general practice. The different elements of our disciplines are “melting together” into a harmonious common goal of developing an intervention for childhood obesity prevention. I am extremely fortunate to have learnt a lot working with this team. I have been exposed to different methodologies, theories and frameworks in behaviour change and health psychology which have greatly enriched my health economics research thinking. Prior to becoming part of the CHErIsH team, I had little to no experience of working in behaviour change or health psychology. Admittedly, throughout my PhD my thoughts on health behaviour change and psychology models and frameworks wandered from one extreme – imagining them to be very complex, with an almost overwhelming number of frameworks and theories – to the other: thinking this a fairly unassuming area of research in which behaviour change could be easily evaluated.

It is safe to say that I left the introductory workshop with the confirmation that behaviour change is a complex area, but that this complexity is OK! The workshop fuelled me with a wealth of knowledge, reassurance and confidence regarding how I might apply behaviour change theoretical models to inform my health economics research. I also left the workshop with a greater understanding of the emerging methods for evaluating behaviour change interventions (which coincidentally I will be doing very shortly).

Whilst this blog post does not cover all of the material outlined in the workshop, below are some nuggets of information that I took home with me.

The big picture

What we do – our behaviours – is hugely predictive of morbidity and mortality. Justify the importance of health behaviour in the particular setting that you are researching. This is important in health economic evaluations when we think of health outcomes and how behaviour might impact on these. Or indeed within the less traditional methodologies in health economics such as discrete choice experiments examining individual preferences and behaviour.

When designing a behavioural intervention…
  1. Identify and define the specific behaviour that you are trying to change; this is as important as having a good research question or a good systematic review question. Be aware of the spillover effects that the intervention might have, and think about how the behaviour change might be measured. The more precisely you specify the behaviour, the better!
  2. Understand the psychological theory – why a particular person or group is behaving a certain way in a particular setting. Understand how you can change it.
  3. Specify clearly what the intervention is in your research paper and describe it – clearly. There is a great need for specificity: terminology in behaviour change is loose, and its interventions are often more complex than biomedical ones. There is a real need for a common language but, in the meantime, try to be clear when describing your intervention. Don’t make it hard for those trying to evaluate it later.
Take home messages

Behaviour change is complex. But complexity is not a problem so long as we are aware of it and acknowledge it.

I previously mentioned that I left the workshop reassured. I was reassured to be reminded by the experts that it is OK not to know exactly what your intervention is going to be or going to look like from the outset.

Last, but by no means least, this workshop provided me with access to the psychological and health behaviour change frameworks. There are in fact 83 theories! Molly and Jenny spoke about the “on the ground reality”: whilst all of these theories exist in principle, pragmatically the intervention needs to be designed and tested. So a pilot should be about optimising the components of the intervention and testing which components are most feasible.

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Cost-effectiveness analysis of germ-line BRCA testing in women with breast cancer and cascade testing in family members of mutation carriers. Genetics in Medicine [PubMed] Published 4th January 2018

The idea of testing women for BRCA mutations – faulty genes that can increase the probability and severity of breast and ovarian cancers – periodically makes it into the headlines. That’s not just because of Angelina Jolie. It’s also because it’s a challenging and active area of research with many uncertainties. This new cost-effectiveness analysis evaluates a programme that incorporates cascade testing: testing relatives of mutation carriers. The idea is that this could increase the effectiveness of the programme with a reduced cost-per-identification, as relatives of mutation carriers are more likely to also carry a mutation. The researchers use a cohort-based Markov-style decision analytic model. A programme with three test cohorts – i) women with unilateral breast cancer and a risk prediction score >10%, ii) first-degree relatives, and iii) second-degree relatives – was compared against no testing. A positive result in the original high-risk individual leads to testing in the first- and second-degree relatives, with the number of subsequent tests occurring in the model determined by assumptions about family size. Women who test positive can receive risk-reducing mastectomy and/or bilateral salpingo-oophorectomy (removal of the ovaries). The results are favourable to the BRCA testing programme, at $19,000 (Australian) per QALY for testing affected women only and $15,000 when the cascade testing of family members was included, with high probabilities of cost-effectiveness at $50,000 per QALY. I’m a little confused by the model. The model includes the states ‘BRCA positive’ and ‘Breast cancer’, which clearly are not mutually exclusive. And it isn’t clear how women entering the model with breast cancer go on to enjoy QALY benefits compared to the no-test group. I’m definitely not comfortable with the assumption that there is no disutility associated with risk-reducing surgery. I also can’t see where the cost of identifying the high-risk women in the first place was accounted for. But this is a model, after all. The findings appear to be robust to a variety of sensitivity analyses. Part of the value of testing lies in the information it provides about people beyond the individual patient. Clearly, if we want to evaluate the true value of testing then this needs to be taken into account.
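For those who haven’t built one, the skeleton of a cohort Markov model is simple enough to sketch. The three states, transition probabilities, costs, and utilities below are invented purely to show the mechanics; they bear no relation to the published model or its results.

```python
import numpy as np

# Toy three-state cohort model (Well, Cancer, Dead) over annual cycles. All
# transition probabilities, costs, and utilities are invented for illustration.
P_no_test = np.array([[0.970, 0.020, 0.010],
                      [0.000, 0.900, 0.100],
                      [0.000, 0.000, 1.000]])
P_test = np.array([[0.985, 0.005, 0.010],      # testing + surgery lowers cancer risk
                   [0.000, 0.900, 0.100],
                   [0.000, 0.000, 1.000]])

state_costs = np.array([100.0, 5_000.0, 0.0])  # annual cost per state
state_utils = np.array([0.90, 0.75, 0.0])      # annual utility per state
pathway_cost = 15_000.0                        # one-off testing/surgery cost

def run(P, upfront=0.0, cycles=40, disc=0.035):
    cohort = np.array([1.0, 0.0, 0.0])         # everyone starts in 'Well'
    cost, qalys = upfront, 0.0
    for t in range(cycles):
        cohort = cohort @ P                    # rows of P are 'from' states
        d = 1.0 / (1.0 + disc) ** (t + 1)
        cost += d * cohort @ state_costs
        qalys += d * cohort @ state_utils
    return cost, qalys

c1, q1 = run(P_test, upfront=pathway_cost)
c0, q0 = run(P_no_test)
print(f"ICER: {(c1 - c0) / (q1 - q0):,.0f} per QALY")
```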

Economic evaluation of direct-acting antivirals for hepatitis C in Norway. PharmacoEconomics Published 2nd February 2018

Direct-acting antivirals (DAAs) are those new drugs that gave NICE a headache a few years back because they were – despite being very effective and high-value – unaffordable. DAAs are essentially curative, which means that they can reduce resource use over a long time horizon. This makes cost-effectiveness analysis in this context challenging. In this new study, the authors conduct an economic evaluation of DAAs compared with the previous class of treatment, in the Norwegian context. Importantly, the researchers sought to take into account the rebates that have been agreed in Norway, which mean that the prices are effectively reduced by up to 50%. There are now lots of different DAAs available. Furthermore, hepatitis C infection corresponds to several different genotypes. This means that there is a need to identify which treatments are most (cost-)effective for which groups of patients; this isn’t simply a matter of A vs B. The authors use a previously developed model that incorporates projections of the disease up to 2030, though the authors extrapolate to a 100-year time horizon. The paper presents cost-effectiveness acceptability frontiers for each of genotypes 1, 2, and 3, clearly demonstrating which medicines are the most likely to be cost-effective at given willingness-to-pay thresholds. For all three genotypes, at least one of the DAA options is most likely to be cost-effective above a threshold of €70,000 per QALY (which is apparently recommended in Norway). The model predicts that if everyone received the most cost-effective strategy then Norway would expect to see around 180 hepatitis C patients in 2030 instead of the 300-400 seen in the last six years. The study also presents the price rebates that would be necessary to make currently sub-optimal medicines cost-effective. The model isn’t that generalisable. It’s very much Norway-specific as it reflects the country’s treatment guidelines. It also only looks at people who inject drugs – a sub-population whose importance can vary a lot from one country to the next. I expect this will be a valuable piece of work for Norway, but it strikes me as odd that “affordability” or “budget impact” aren’t even mentioned in the paper.
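As a reminder of what sits behind an acceptability frontier, here is a sketch of the standard calculation: across probabilistic sensitivity analysis draws, count how often each strategy has the highest net monetary benefit at a given threshold. The strategy names, costs, and QALYs below are simulated, not the Norwegian model’s outputs.

```python
import numpy as np

# Simulated PSA draws of (cost, QALYs) for three hypothetical strategies;
# the names and numbers are invented for illustration.
rng = np.random.default_rng(4)
n_draws = 5_000
strategies = {
    "old regimen": (rng.normal(20_000, 3_000, n_draws), rng.normal(8.0, 0.5, n_draws)),
    "DAA A":       (rng.normal(45_000, 5_000, n_draws), rng.normal(9.0, 0.5, n_draws)),
    "DAA B":       (rng.normal(55_000, 5_000, n_draws), rng.normal(9.2, 0.5, n_draws)),
}

def prob_cost_effective(threshold):
    """Share of draws in which each strategy has the highest net monetary benefit."""
    nmb = np.column_stack([threshold * q - c for c, q in strategies.values()])
    winners = nmb.argmax(axis=1)
    return {name: float(np.mean(winners == i)) for i, name in enumerate(strategies)}

for wtp in (30_000, 70_000, 120_000):
    print(wtp, prob_cost_effective(wtp))
# The frontier then plots, at each threshold, the probability for whichever
# strategy has the highest expected net benefit at that threshold.
```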

Cost-effectiveness of prostate cancer screening: a systematic review of decision-analytical models. BMC Cancer [PubMed] Published 18th January 2018

You may have seen prostate cancer in the headlines last week. Despite the number of people in the UK dying each year from prostate cancer now being greater than the number of people dying from breast cancer, prostate cancer screening remains controversial. This is because over-detection and over-treatment are common and harmful. Plenty of cost-effectiveness studies have been conducted in the context of detecting and treating prostate cancer. But there are various ways of modelling the problem and various specifications of screening programme that can be evaluated. So here we have a systematic review of cost-effectiveness models evaluating prostate-specific antigen (PSA) blood tests as a basis for screening. From a haul of 1010 studies, 10 made it into the review. The studies modelled lots of different scenarios, with alternative screening strategies, PSA thresholds, and treatment pathways. The results are not consistent. Many of the scenarios evaluated in the studies were more costly and less effective than current practice (which tended to be the lack of any formal screening programme). None of the UK-based cost-per-QALY estimates favoured screening. The authors summarise the methodological choices made in each study and consider the extent to which this relates to the pathways being modelled. They also specify the health state utility values used in the models. This will be a very useful reference point for anyone trying their hand at a prostate cancer screening model. Of the ten studies included in the review, four of them found at least one screening programme to be potentially cost-effective. ‘Adaptive screening’ – whereby individuals’ recall to screening was based on their risk – was considered in two studies using patient-level simulations. The authors suggest that cohort-level modelling could be sufficient where screening is not determined by individual risk level. There are also warnings against inappropriate definition of the comparator, which is likely to be opportunistic screening rather than a complete absence of screening. Generally speaking, a lack of good data seems to be part of the explanation for the inconsistency in the findings. It could be some time before we have a clearer understanding of how to implement a cost-effective screening programme for prostate cancer.

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Is “end of life” a special case? Connecting Q with survey methods to measure societal support for views on the value of life-extending treatments. Health Economics [PubMed] Published 19th January 2018

Should end-of-life care be treated differently? A question often asked and previously discussed on this blog: findings to date are equivocal. This question is important given NICE’s End-of-Life Guidance for increased QALY thresholds for life-extending interventions, and additionally the Cancer Drugs Fund (CDF). This week’s round-up sees Helen Mason and colleagues attempt to inform the debate around societal support for views of end-of-life care, by trying to determine the degree of support for different views on the value of life-extending treatment. It’s always a treat to see papers grounded in qualitative research in the big health economics journals and this month saw the use of a particularly novel mixed methods approach adding a quantitative element to their previous qualitative findings. They combined the novel (but increasingly recognisable thanks to the Glasgow team) Q methodology with survey techniques to examine the relative strength of views on end-of-life care that they had formulated in a previous Q methodology study. Their previous research had found that there are three prevalent viewpoints on the value of life-extending treatment: 1. ‘a population perspective: value for money, no special cases’, 2. ‘life is precious: valuing life-extension and patient choice’, 3. ‘valuing wider benefits and opportunity cost: the quality of life and death’. This paper used a large Q-based survey design (n=4902) to identify societal support for the three different viewpoints. Viewpoints 1 and 2 were found to be dominant, whilst there was little support for viewpoint 3. The two supported viewpoints are not complementary: they represent the ethical divide between the utilitarian with a fixed budget (view 1), and the perspective based on entitlement to healthcare (view 2: which implies an expanding healthcare budget in practice). I suspect most health economists will fall into camp number one. In terms of informing decision making, this is very helpful, yet unhelpful: there is no clear answer. It is, however, useful for decision makers in providing evidence to balance the oft-repeated ‘end of life is special’ argument based solely on conjecture, and not evidence (disclosure: I have almost certainly made this argument before). Neither of the dominant viewpoints supports NICE’s End of Life Guidance or the CDF. Viewpoint 1 suggests end of life interventions should be treated the same as others, whilst viewpoint 2 suggests that treatments should be provided if the patient chooses them; it does not make end of life a special case as this viewpoint believes all treatments should be available if people wish to have them (and we should expand budgets accordingly). Should end of life care be treated differently? Well, it depends on who you ask.

A systematic review and meta-analysis of childhood health utilities. Medical Decision Making [PubMed] Published 7th October 2017

If you’re working on an economic evaluation of an intervention targeting children then you are going to be thankful for this paper. The purpose of the paper was to create a compendium of utility values for childhood conditions. A systematic review was conducted which identified a whopping 26,634 papers after deduplication – sincere sympathy to those who had to do the abstract screening. Following abstract screening, data were extracted for the remaining 272 papers. In total, 3,414 utility values were included when all subgroups were considered – this covered all ICD-10 chapters relevant to child health. When considering only the ‘main study’ samples, 1,191 utility values were recorded and these are helpfully separated by health condition and methodological characteristics. In short, the authors have successfully built a vast catalogue of child utility values (and distributions) for use in future economic evaluations. They didn’t stop there, however: they then built on the systematic review results by conducting a meta-analysis to i) estimate health utility decrements for each condition category compared to general population health, and ii) examine how methodological factors impact child utility values. Interestingly for those conducting research in children, they found that parental proxy values were associated with an overestimation of values. There is a lot to unpack in this paper and a lot of appendices and supplementary materials are included (including the Excel database for all 3,414 subsamples of health utilities). I’m sure this will be a valuable resource in future for health economic researchers working in the childhood context. As far as MSc dissertation projects go, this is a very impressive contribution.
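For anyone wanting to reuse the catalogue, the pooling step is the familiar one. Below is a sketch of DerSimonian–Laird random-effects pooling of utility decrements for a single condition category; the decrements and standard errors are invented for illustration, not taken from the paper.

```python
import numpy as np

# DerSimonian-Laird random-effects pooling of utility decrements for a single
# condition category versus the general population; values below are invented.
decrements = np.array([0.10, 0.14, 0.08, 0.20, 0.12])
se = np.array([0.03, 0.05, 0.02, 0.06, 0.04])

w = 1.0 / se**2                                            # fixed-effect weights
theta_fe = np.sum(w * decrements) / np.sum(w)
Q = np.sum(w * (decrements - theta_fe) ** 2)               # heterogeneity statistic
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (len(decrements) - 1)) / C)           # between-study variance

w_re = 1.0 / (se**2 + tau2)                                # random-effects weights
theta_re = np.sum(w_re * decrements) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled decrement {theta_re:.3f} "
      f"(95% CI {theta_re - 1.96 * se_re:.3f} to {theta_re + 1.96 * se_re:.3f})")
```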

Estimating a cost-effectiveness threshold for the Spanish NHS. Health Economics [PubMed] [RePEc] Published 28th December 2017

In the UK, the cost-per-QALY threshold is long-established, although whether it is the ‘correct’ value is fiercely debated. Likewise in Spain, there is a commonly cited threshold value of €30,000 per QALY with a dearth of empirical justification. This paper sought to identify a cost-per-QALY threshold for the Spanish National Health Service (SNHS) by estimating the marginal cost per QALY at which the SNHS currently operates on average. This was achieved by exploiting data on 17 regional health services between 2008 and 2012, when the health budget experienced considerable cuts due to the global economic crisis. This paper uses econometric models based on the influential work by Claxton et al in the UK (see the full paper if you’re interested in the model specification) to achieve this. Variations between Spanish regions over time allowed the authors to estimate the impact of health spending on outcomes (measured as quality-adjusted life expectancy); this was then translated into a cost-per-QALY value for the SNHS. The headline figures derived from the analysis give a threshold between €22,000 and €25,000 per QALY. This is substantially below the commonly cited threshold of €30,000 per QALY. There are, however (as is to be expected), various limitations acknowledged by the authors, which means we should not take this threshold as set in stone. However, unlike the status quo, there is empirical evidence backing this threshold and it should stimulate further research and discussion about whether such a change should be implemented.
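The shape of the estimation is roughly as sketched below: a region-year panel with health outcomes regressed on spending, with region and year fixed effects, whose coefficient is then translated into a marginal cost per QALY. The data are simulated and the specification deliberately bare; the paper’s identification (and the Claxton et al approach it builds on) is considerably more sophisticated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Stylised region-year panel (17 regions, 2008-2012). Data are simulated, so the
# printed coefficient means nothing; in practice you would also want clustered
# standard errors and a far richer specification.
rng = np.random.default_rng(5)
panel = pd.DataFrame([(r, y) for r in range(17) for y in range(2008, 2013)],
                     columns=["region", "year"])
panel["log_spend_pc"] = rng.normal(7.2, 0.08, len(panel))   # log spend per person
panel["qale"] = 65 + 4.0 * panel["log_spend_pc"] + rng.normal(0, 0.4, len(panel))

# Outcomes regressed on spending with region and year fixed effects; estimates
# like this are then translated into the marginal cost of producing one QALY.
fe = smf.ols("qale ~ log_spend_pc + C(region) + C(year)", data=panel).fit()
print(fe.params["log_spend_pc"])
```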

Every Monday our authors provide a round-up of some of the most recently published peer reviewed articles from the field. We don’t cover everything, or even what’s most important – just a few papers that have interested the author. Visit our Resources page for links to more journals or follow the HealthEconBot. If you’d like to write one of our weekly journal round-ups, get in touch.

Is retirement good for men’s health? Evidence using a change in the retirement age in Israel. Journal of Health Economics [PubMed] Published January 2018

This article is a tour de force from one chapter of a recently completed dissertation from the Hebrew University of Jerusalem. The article focuses on answering the question of what the health implications are of extending working years for older adults. As many countries are faced with critical decisions on how to adjust labor policies to solve rising pension costs (or in the case of the U.S., Social Security insolvency) in the face of aging populations, one obvious potential solution is to change the retirement age. Most OECD countries appear to have retirement ages in the mid-60’s with a number of countries on track to increase that threshold. Israel is one of these countries, having changed their retirement age for men from age 65 to age 67 in 2004. The author capitalizes on this exogenous change in retirement incentives, as workers will be incentivized to keep working to receive full pension benefits, to measure the causal effect of working in these later years, compared to retiring. As the relationship between employment and health is complicated by the endogenous nature of the decision to work, there is a growing literature that has attempted to deal with this endogeneity in different ways. Shai details the conflicting findings in this literature and describes various shortcomings of methods used. He helpfully categorizes studies into those that compare health between retirees and non-retirees (does not deal with selection problem), those that use variation in retirement age across countries (retirement ages could be correlated with individual health across countries), those that exploit variation in specific sector retirement ages (problem of generalizing to population), and those that use age-specific retirement eligibility (health may deteriorate at specific age regardless of eligibility for retirement). As the empirical evidence on this question is conflicting, the author suggests that his methodology is an improvement on prior papers. He uses a difference-in-difference model that estimates the impact on various health outcomes, before and after the law change, comparing those aged 65-66 years after 2004 with both older and younger cohorts unaffected by the law. The assumption is that any differences in measured health between the age 65-66 group and the comparison group are a result of the extended work in later years. There are several different datasets used in the study and quite a number of analyses that attempt to assuage threats to a causal interpretation of results. Overall, results are that delaying the retirement age has a negative effect on individual health. The size of the effect found is in the ballpark of 1 standard deviation; outcome measures included a severe morbidity index, a poor health index, and the number of physician visits. In addition, these impacts were stronger for individuals with lower levels of education, which the author relates to more physically demanding jobs. Counterfactual outcomes, for example the number of dentist visits, which are not expected to be related to employment, are not found to be statistically different. Furthermore, there are non-trivial estimated effects on health care expenditures that are positive for the delayed retirement group. The author suggests that all of these findings are important pieces of evidence in retirement age policy decisions. The implication is that health, at least for men, and especially for those with lower education, may be negatively impacted by delaying retirement and that, furthermore, savings as a result of such policies may be tempered by increased health care expenditures.

Evaluating community-based health improvement programs. Health Affairs [PubMed] Published January 2018

For article 2, I see that the lead author is a doctoral student in health policy at Harvard, working with colleagues at Vanderbilt. Without intention, this round-up is highlighting two very impressive studies from extremely promising young investigators. This study takes on the challenge of evaluating community-based health improvement programs, which I will call CBHIPs. CBHIPs take a population-based approach to public health for their communities and often focus on issues of prevention and health promotion. Investment in CBHIPs has increased in recent years, emphasizing collaboration between the community and public and private sectors. At the heart of CBHIPs are the ideas of empowering communities to self-assess and make needed changes from within (in collaboration with outside partners) and that CBHIPs allow for more flexibility in creating programs that target a community’s unique needs. Evaluations of CBHIPs, however, suffer from limited resources and investment, and often use “easily-collectable data and pre-post designs without comparison or control communities.” Current overall evidence on the effectiveness of CBHIPs remains limited as a result. In this study, the authors attempt to evaluate a large set of CBHIPs across the United States using inverse propensity score weighting and a difference-in-difference analysis. Health outcomes on poor or fair health, smoking status, and obesity status were used at the county level from the BRFSS (Behavioral Risk Factor Surveillance System) SMART (Selected Metropolitan/Micropolitan Area Risk Trends) data. Information on counties implementing CBHIPs was compiled through a series of systematic web searches and through interviews with leaders in population health efforts in the public and private sector. With information on the exact years of implementation of CBHIPs in each county, a pre-post design was used that identified county treatment and control groups. With additional census data, untreated counties were weighted to achieve better balance on pre-implementation covariates. Importantly, treated counties were limited to those with CBHIPs that implemented programs related to smoking and obesity. Results showed little to no evidence that CBHIPs improved population health outcomes. For example, CBHIPs focusing on tobacco prevention were associated with a 0.2 percentage point reduction in the rate of smoking, which was not statistically significant. Several important limitations of the study were noted by the authors, such as limited information on the intensity of programs and resources available. It is recognized that it is difficult to improve population-level health outcomes and that perhaps the study period of 5-years post-implementation may not have been long enough. The researchers encourage future CBHIPs to utilize more rigorous evaluation methods, while acknowledging the uphill battle CBHIPs face to do this.
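For readers new to the approach, here is a toy sketch of the weighting-plus-difference-in-differences idea on simulated county data: estimate a propensity score from pre-implementation covariates, weight untreated counties towards the treated, then run a weighted DiD. Variable names and values are my assumptions, not drawn from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Stylised county data: treated counties implemented a CBHIP; outcomes observed
# pre and post. All values are simulated to show the weighting + DiD steps.
rng = np.random.default_rng(6)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "median_income": rng.normal(50_000, 8_000, n),
    "pct_rural": rng.uniform(0, 1, n),
})
df["smoking_pre"] = (0.22 - 1e-6 * df["median_income"] + 0.05 * df["pct_rural"]
                     + rng.normal(0, 0.02, n))
df["smoking_post"] = df["smoking_pre"] - 0.002 * df["treated"] + rng.normal(0, 0.02, n)

# Step 1: propensity score from pre-implementation covariates, then inverse
# probability weights that re-balance untreated counties towards the treated.
ps = smf.logit("treated ~ median_income + pct_rural", data=df).fit(disp=0).predict(df)
df["w"] = np.where(df["treated"] == 1, 1.0, ps / (1.0 - ps))

# Step 2: weighted difference-in-differences on the long county-by-period data.
long = pd.melt(df.reset_index(), id_vars=["index", "treated", "w"],
               value_vars=["smoking_pre", "smoking_post"],
               var_name="period", value_name="smoking")
long["post"] = (long["period"] == "smoking_post").astype(int)
did = smf.wls("smoking ~ treated * post", data=long, weights=long["w"]).fit()
print(did.params["treated:post"])   # the estimated CBHIP effect
```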

Through the looking glass: estimating effects of medical homes for people with severe mental illness. Health Services Research [PubMed] Published October 2017

The third article in this round-up comes from a publication from October of last year; however, it is from the latest issue of Health Services Research, so I deem it fair play. The article uses the topic of medical homes for individuals with severe mental illness to critically examine the topic of heterogeneous treatment effects. While specifically looking to answer whether there are heterogeneous treatment effects of medical homes on different portions of the population with a severe mental illness, the authors make a strong case for the need to examine heterogeneous treatment effects as a more general practice in observational studies research, as well as to be more precise in interpretations of results and statements of generalizability when presenting estimated effects. Adults with a severe mental illness were identified as good candidates for medical homes because of complex health care needs (including high physical health care needs) and because barriers to care have been found to exist for these individuals. Medicaid medical homes establish primary care physicians and their teams as the managers of the individual’s overall health care treatment. The authors are particularly concerned with the reasons individuals choose to participate in medical homes, whether because of expected improvements in quality of care, regional availability of medical homes, or symptomatology. Very clever differences in estimation methods allow the authors to estimate treatment effects associated with these different enrollment reasons. As an example, an instrumental variables analysis, using measures of regional availability as instruments, estimated local average treatment effects that were much smaller than the fixed effects estimates or the generalized estimating equation model’s effects. This implies that differences in county-level medical home availability are a smaller portion of the overall measured effects from other models. Overall results were that medical homes were positively associated with access to primary care, access to specialty mental health care, medication adherence, and measures of routine health care (e.g. screenings); there was also a slightly negative association with emergency room use. Since unmeasured stable attributes (e.g. patient preferences) do not seem to affect outcomes, results should be generalizable to the larger patient population. Finally, medical homes do not appear to be a good strategy for cost-savings but do promise to increase access to appropriate levels of health care treatment.
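The contrast between estimators is easiest to see in a toy example. The sketch below simulates confounded enrolment and then compares naive OLS with a manual two-stage least squares estimate using availability as the instrument – a caricature of the local average treatment effect logic, not the authors’ models, and the second-stage standard errors would need correcting in practice.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data in which sicker people both enrol in a medical home more often
# and have worse access, so naive OLS is biased; availability shifts enrolment
# but (by assumption) affects access only through enrolment.
rng = np.random.default_rng(7)
n = 5_000
availability = rng.normal(0, 1, n)                  # the instrument
sickness = rng.normal(0, 1, n)                      # unobserved confounder
enrolled = (0.5 * availability + 0.8 * sickness + rng.normal(0, 1, n) > 0).astype(float)
access = 0.3 * enrolled - 0.5 * sickness + rng.normal(0, 1, n)   # true effect = 0.3

ols = sm.OLS(access, sm.add_constant(enrolled)).fit()

# Manual two-stage least squares: predict enrolment from the instrument, then
# regress the outcome on predicted enrolment (point estimate only).
X1 = sm.add_constant(availability)
enrolled_hat = sm.OLS(enrolled, X1).fit().predict(X1)
iv = sm.OLS(access, sm.add_constant(enrolled_hat)).fit()
print(f"OLS: {ols.params[1]:.2f}   2SLS (LATE-style): {iv.params[1]:.2f}")
```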

Despite our best efforts, we’ve ended up without a guest for Thesis Thursday this month. Rather than try and let the January 2018 edition slide by unnoticed, I thought I should take the opportunity to write something a bit different on the subject.

The premise for Thesis Thursday is that there’s lots of exciting research going on around the world by early career researchers as part of doctoral programmes. One of the reasons we think Thesis Thursday is useful (as well as providing insight into the lives of health economics PhD students) is that it exposes readers to research that they might not otherwise get to see until after a long drawn-out publication process or, worse, that might never see the light of day at all.

In this blog post I’ll provide some insight into how we find candidates for Thesis Thursday and how you – between instalments – can get your thesis fix. Or, more likely, how you might be able to use PhD theses more in your research.

The big databases

There are some major repositories around the world for doctoral theses. If you’re looking for a thesis from a British university then your first stop should be EThOS, hosted by the British Library. The search function will be familiar to anyone who has used a bibliographic database. You can also limit your searches by award year and whether or not the thesis is available for immediate download (more on this in a moment).

A good resource for North American theses (and dissertations) is ProQuest, though it’s unfortunately only available to those with a subscription – institutional or otherwise. There is a health economics subject page with a weak collection of 72 theses (none more recent than 2012). But if you dig deeper using search terms you will find a wealth of PhD outputs from universities you’ve never even heard of. The quality is variable, but there are some excellent pieces of work buried in here. We’ll be trying to publicise them using Thesis Thursday.

There are plenty of other databases that bring together theses from multiple sources; these are simply the databases that I use. Honourable mentions also go to Open Access Theses and Dissertations and the NDLTD archive, which seem to have a better international reach than many others.

Institutional repositories

Most universities have their own internal thesis repositories. Most British universities use the standard EPrints system, so their use is familiar. While I’m reluctant to reinforce the Sheffield-York axis of power, the White Rose thesis repository is particularly useful for health economics theses. It’s a doddle to find the latest theses from ScHARR, CHE, and AUHE, though I’m not entirely convinced that they have complete coverage. Further afield in Europe, Erasmus has a good repository of health economics theses. Or, if you’ve been practising your Dutch, you can find a larger repository that includes the likes of Tilburg and Groningen.

Most theses in institutional repositories are embargoed. This means that it isn’t possible to download the thesis unless you make a special request and are granted permission. These theses aren’t likely to be chosen to be featured on the blog because they pose the additional challenge of trying to get sight of the work itself. I wish everybody would make their thesis freely accessible…

A call for candidates

Today’s Thesis Thursday didn’t happen because we weren’t able to find a guest who felt able to contribute. Recent graduates can be hard to track down. Email addresses stop working and subsequent affiliations (if any) are not always clear. If you would like to feature in an upcoming Thesis Thursday or you’d like to recommend someone, get in touch. We shan’t hold it against you if your thesis is not available online, but please be ready with your PDF!
