iDSI Reference Case for Economic Evaluation
The iDSI Reference Case is a principle-based approach to guide the planning, conduct and reporting of economic evaluations. It provides decision makers with relevant and reliable ways to determine the likely implications of implementing a treatment or health service in specific contexts. Its primary focus is on meeting the informational needs of decision makers in low- and middle-income countries (LMICs). Building on the methods and approaches of the National Institute for Health and Care Excellence (NICE) in the UK, the Health Intervention and Technology Assessment Program (HITAP) in Thailand and the World Health Organization, the iDSI Reference Case helps countries to assess value for money and to spend their health budgets consistently and effectively.
Some of the biggest decisions that must be made within health systems are about how to spend the health budget. These unavoidable decisions have large consequences. Understanding the expected clinical effect of different health treatments and services, and how much it will cost to achieve that effect, is what it means to determine their value. When decision makers know the value of the different options, decision making becomes easier and money can be spent more effectively on health.
Using the reference case
The iDSI reference case is an aid to thinking. It provides a systematic way to conduct and report on economic evaluations, while retaining the flexibility to tailor the methodology to specific needs and settings.
The principles of the iDSI reference case describe how to undertake economic evaluations that are fit for purpose, but don’t specify particular metrics or parameter values. The methodological specifications are a non-exhaustive set of options that enable an economic evaluation to adhere to the principles. Each principle is accompanied by a suggested implementation methodology and guidance for reporting.
Statements of principle
1. An economic evaluation should be communicated clearly and transparently to enable the decision maker(s) to interpret the methods and results.
   - The decision problem must be fully and accurately described
   - Limitations of the economic evaluation in informing policy should be characterized
   - Declarations of interest should be reported

2. The comparator(s) against which costs and effects are measured should accurately reflect the decision problem.
   Methodological specifications – at a minimum, the following comparative analyses should be undertaken:
   - The intervention(s) currently offered to the population, as defined in the decision problem, as the base case comparator
   - A “do nothing” analysis representing best supportive (non-interventional) care for the population, as additional analysis
   Reporting standards:
   - Clear description of the comparator(s), including basic descriptive information such as the setting where the comparator is administered
   - Statement of the availability of the comparator across the population being considered
   - Differences between the mean costs and effects of the intervention and the chosen comparator(s) should be reported as incremental cost-effectiveness ratios

3. An economic evaluation should consider all available evidence relevant to the decision problem.
   Methodological specifications:
   - Apply a systematic and transparent approach to obtaining evidence and to judgements about excluding evidence
   - Estimates of the clinical effect of the intervention and comparator(s) should be informed by systematic review of the literature
   - Single-study or trial-based analyses should outline how they are an adequate source of evidence, and should ensure that the stated decision problem is specific to the particular context and time of the study or trial
   - The budget and time allocated to perform an economic evaluation should not determine the selection of evidence
   Reporting standards:
   - Describe the approach used to obtain the included evidence
   - The systematic review protocol and evidence search strategies should be made available
   - List the sources of all parameters used in the economic evaluation
   - Describe areas where evidence is incomplete or lacking

4. The measure of health outcome should be appropriate to the decision problem, should capture positive and negative effects on length and quality of life, and should be generalisable across disease states.
   Methodological specifications:
   - Disability-Adjusted Life Years (DALYs) averted should be used
   - Other generic measures that capture length and quality of life (eg QALYs) can be used in separate analyses where information is available
   Reporting standards:
   - Clear description of the method of weighting used to inform the DALY
   - Discussion of any important outcomes insufficiently captured by the DALY
   - If DALYs are not used, provide justification with a description of the impact of the alternative measure

5. All differences between the intervention and the comparator in the expected resource use and costs of delivery to the target population(s) should be incorporated into the evaluation.
   Methodological specifications:
   - Estimates should reflect the resource use and unit costs/prices that may be expected if the intervention is rolled out to the population defined in the decision problem
   - Costs not incurred in study settings but likely if the intervention is rolled out should be captured in the base case analysis
   - Cost all resource implications relevant to the decision problem, including donated inputs and out-of-pocket inputs from individuals
   - The analysis should include estimation of changes in cost estimates due to scalability
   Reporting standards:
   - Quantities of resources should be reported separately from their unit costs/prices
   - Capital and fixed costs should be annuitized over the period of implementation
   - Description of how the costs have been validated (eg corroboration with similar interventions in similar settings)
   - Any major differences between predicted (modelled) and realized costs should be explained
   - Implications of changes in costs due to scalability of the intervention should be reported
   - Costs should be reported in local currency and in United States dollars; the date and source of the exchange rates used for conversion should be reported

6. The time horizon used in an economic evaluation should be of sufficient length to capture all costs and effects relevant to the decision problem; an appropriate discount rate should be used to discount costs and effects to present values.
   Methodological specifications:
   - A lifetime time horizon should be used in the first instance; a shorter time horizon may be used where it can be shown that all relevant costs and effects are captured
   - A 3% annual discount rate should be used for costs and effects in the base case, with additional analyses exploring differing discount rates
   - Additional analysis should explore an annual discount rate that reflects the rate for government borrowing
   - Where the time horizon is greater than 30 years, the impact of lower discount rates should be explored in a sensitivity analysis
   Reporting standards:
   - State the time horizon over which costs and effects have been evaluated, including additional analyses if different time horizons have been explored
   - If a lifetime time horizon is not used, justify why and report the impact of the different time horizon(s)
   - State the discount rate used for costs and effects, and include additional analyses using different discount rates
   - If a 3% annual discount rate is not used, justify why and report the impact of the different discount rate(s)

7. Non-health effects and costs associated with gaining or providing access to health interventions that don’t accrue to the health budget should be identified where relevant to the decision problem. All costs and effects should be disaggregated, either by sector of the economy or by who incurs them.
   Methodological specifications:
   - The base case analysis should reflect direct health costs and health outcomes; however, the analysis should adopt a disaggregated societal perspective
   - Non-health effects and costs that fall outside the health budget should be included in additional analyses; the mechanism of inclusion will differ depending on the decision problem and context
   - Where external funding or individual out-of-pocket (OOP) payments substitute for costs that would otherwise fall on a health budget, these costs should be included in the base case analysis; the impact of excluding them should be explored in sensitivity analyses
   Reporting standards:
   - Clear description of the result of the base case analysis
   - Alternative analyses exploring the impact of individual out-of-pocket payments and external funding
   - Non-health effects and costs that fall outside the health sector should be reported, and the mechanisms used to report the impact of these costs and effects should be explained and justified
   - If non-health effects and costs that fall outside the health sector are not included, the reasons should be reported and the potential impact of these exclusions estimated

8. The costs and effects of the intervention on sub-populations within the decision problem should be explored and the implications appropriately characterized.
   Methodological specifications – heterogeneity should be explored in population subgroups, where subgroup formation should be informed by:
   - whether the relative effect of the intervention differs in different populations
   - characteristics of different populations that may influence the absolute health effects
   - characteristics that influence the direct costs of provision or other associated costs across the constituency
   Subgroup analysis should always be determined by:
   - the evidence base regarding differences in relative effect, baseline risk or other characteristics
   - whether the differences are likely to have an important influence on costs and effects
   Reporting standards – clear reporting of:
   - subgroup characteristics, and justification of why particular groups are chosen for subgroup analysis
   - the evidence base used to determine the subgroup specification
   - the cost-effectiveness of the intervention in the different subgroups
   - subgroups with potentially important differences in costs and effects that were excluded due to lack of evidence

9. The uncertainty associated with an economic evaluation should be appropriately characterized.
   Methodological specifications – the economic evaluation should explore:
   - uncertainty in the structure of the analysis
   - uncertainty due to the source of parameters
   - uncertainty due to the precision of parameters
   Reporting standards:
   - The effects of all types of uncertainty should be clearly reported, noting their impact on the final results
   - Uncertainty due to parameter precision should be characterized using sensitivity analyses appropriate to the decision problem
   - The likelihood of making the wrong decision given the existing evidence should be addressed

10. The impact of implementing the intervention on the health budget and on other constraints should be clearly and separately identified.
   Methodological specifications:
   - Budget impact analysis should estimate the implications of implementing the intervention for the various budgets involved
   - Budget impact analysis should reflect the decision problem and the constituency in which the intervention will be used
   Reporting standards:
   - A disaggregated and annualized budget impact analysis should be reported that shows the budget implications for:

11. An economic evaluation should explore the equity implications of implementing the intervention.
   Methodological specifications:
   - There are various mechanisms available for assessing the equity implications of an intervention; the method chosen should be appropriate to the decision problem and justifiable to the decision maker
   - Equity implications should be considered at all stages of the evaluation, including design, analysis and reporting
   Reporting standards:
   - The method used to incorporate equity implications should be clearly and transparently explained
   - At a minimum, reporting should include a description of particular groups within the constituency that may be disproportionately positively or negatively affected by a decision to implement (or not implement) the intervention
Why transparency is important for decision-making
The aim of an economic evaluation is to inform resource allocation decisions. If the conduct and results of the economic evaluation are not reported clearly and transparently, then the soundest evidence available will not be informative. Clear and transparent economic evaluations can also improve the transparency of the decision-making process, and consequently improve the accountability of decision-makers to stakeholders in the decision.
Clear and transparent reporting also improves transferability of economic evaluations, as research undertaken in particular contexts may be used to support decision-making in others. Even where the overall results of the economic evaluation may not be generalisable, aspects of the analysis may still inform analysis in other contexts.
A fundamental element of good scientific practice is that results are reproducible. Clear and transparent reporting enhances the capacity of other researchers to reproduce the results.
Part 3 details the Minimum Reporting Standards, which outline those aspects of an economic evaluation that must be reported to ensure minimal compliance with the transparency principle. Minimum Reporting Standards do not impose any additional methodological burden on researchers as they draw on information and data that must be compiled in the course of the economic evaluation.
Importance to decision making
Identifying the comparator against which costs and effects will be measured is critical to ensuring that the economic evaluation accurately informs the decision problem facing the decision-maker.
The choice of comparator determines the comparative costs and benefits associated with the intervention being considered, and will therefore drive the incremental cost effectiveness ratio (ICER).
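The ICER itself is simple arithmetic: the difference in mean costs divided by the difference in mean effects. A minimal sketch, using hypothetical costs in US dollars and effects in DALYs averted:

```python
def icer(cost_new, cost_comparator, effect_new, effect_comparator):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of health effect."""
    delta_cost = cost_new - cost_comparator
    delta_effect = effect_new - effect_comparator
    if delta_effect == 0:
        raise ValueError("No difference in effect; the ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical example: the new intervention costs $1,200 vs $400 for the
# comparator, and averts 2.5 vs 1.5 DALYs per person treated.
print(icer(1200, 400, 2.5, 1.5))  # 800.0 dollars per DALY averted
```

Note that the sign of the ICER alone is ambiguous (a negative ratio can mean either dominance or being dominated), which is one reason the reference case asks for incremental costs and effects to be reported separately as well.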
If the comparator does not reflect the decision problem, the economic evaluation will not be applicable to the decision-maker, as it will not represent the information needed to inform the particular decision. Use of a comparator that does not reflect the decision problem could lead to an inaccurate determination of cost-effectiveness, and facilitate bad decisions.
The method for determining relevant comparators may include:
- The interventions that are currently available to the population (therapies in routine use) as defined in the decision problem
- “Do nothing” – i.e. comparing the new intervention to best supportive care (no intervention)
- Current “best practice”
- The treatment or practice most likely to be replaced if the new intervention is adopted
Regardless of the choice of comparator, the incremental costs and effects informing the analysis must reflect the decision problem. Comparative analysis of therapies currently in routine use should form the base case, with additional analysis exploring “do nothing” as a comparator as a minimum requirement.
The most appropriate comparator is not always immediately obvious. Comparators need not be alternative interventions; they can include different ways of administering the same intervention (such as different regimens or treatment sequences). The place of an intervention in a care pathway will also influence the choice of relevant comparators.
Importance to decision making
Failure to draw on all relevant and available evidence when undertaking an economic evaluation will potentially introduce bias of unknown direction, limiting the capacity of the economic evaluation to inform a “good” decision.
Evidence can be broadly defined as factual material necessary to make an informed decision, and encompasses any information that will be used to qualitatively or quantitatively inform the design, results and conclusions of an economic evaluation. Evidence will be used to inform all elements of an economic evaluation, including the unbiased estimate of the mean clinical effectiveness and the costs and resource use of the interventions being compared.
Judgement may however be necessary as to what constitutes ‘all relevant and available evidence’. The researcher must apply this judgement in a systematic and transparent way when designing the economic evaluation in order to minimise bias. The decision-maker should also assess whether an economic evaluation contains all relevant and available evidence when deciding if it is applicable to the decision problem.
The approach used to consider the relevance and applicability of available evidence to the decision problem should be determined before evidence gathering begins.
This methodological specification requires all economic evaluations to ensure that all parameters are informed by all available and relevant evidence, where the judgements made to determine availability and relevance are made in a transparent and systematic way. It is not intended that this specification would prevent researchers from conducting single-study or within-trial economic evaluations, as many aspects of the decision problem may be limited to a specific trial of an intervention in a particular context and time. Researchers conducting single-study or trial-based analyses are still required to outline how the single study or trial considers all available evidence relevant to the stated decision problem.
While the budget and time available for the study are relevant in determining the feasibility of the economic evaluation and scope of the decision problem, these should not influence any determination of the scope of the relevant evidence. That said, while it is important that a systematic review of the literature is undertaken to obtain estimates of the clinical effects of the intervention and its comparator(s), for some other parameters the collection and synthesis of all information may be prohibitively expensive or time-consuming. In these instances a transparent judgement should be made about the likely implications of not including missing information in the economic evaluation. Where feasible, researchers should explore the implications of alternative judgments about the quality and relevance of evidence (eg. disease natural history or progression and treatment effects). This could include presenting different scenarios that represent different judgements about which evidence ought to be included. The justification for each should be clearly expressed so its plausibility can be properly considered. This should draw on the application of accepted principles of clinical epidemiology to the available studies, indications of inconsistency within the analysis, knowledge of the natural history of disease, and reference to other external evidence where appropriate.
Researchers should clearly state when the evidence available to inform aspects of the economic evaluation is weak or unavailable. This allows the decision-maker to make a judgement on the acceptability of the evidence in informing the decision (see section 2.9 on uncertainty).
Importance to decision-making
It is important to use a measure of health outcome that is broad enough to capture all socially valued aspects of health and is applicable across investment types.
In scenarios where the scope of the decision problem is limited to interventions and comparators that impact either length of life or quality of life, it is still appropriate to use a measure that captures length and quality of life, as this allows proper consideration of the opportunity costs of investing in the intervention.
Using a non-disease specific health outcome measure (i.e. one that is generalisable across disease states) allows consideration of opportunity costs for the entire health sector, and facilitates comparisons across investment types. A disease-specific measure limits the ability of the decision-maker to make reasoned trade-offs between competing investments in different disease states, and can undermine comparability and consistency in decision-making.
The disability-adjusted life-years (DALYs) averted and the quality-adjusted life-years (QALYs) gained are measures that meet the requirements of the outcome principle in the iDSI Reference Case. The QALYs gained and the DALYs averted both provide a measure of quality and length of life, and are generalizable across different disease and therapeutic areas. Researchers will need to exercise judgment in choosing the most appropriate measure(s) for a given economic evaluation. Importantly, both the DALY and the QALY are based on a series of assumptions and simplifications that necessitates judgments about the appropriateness of the methods used to quantify health state preferences and the accuracy of the resultant measures. In addition, the use of DALYs and QALYs implicitly incorporates value judgments such as the additivity of health and ability to compare health across populations and conditions. Researchers should be aware of these judgments and assumptions when conducting and reporting analyses.
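The DALY combines mortality and morbidity as the sum of years of life lost (YLL) and years lived with disability (YLD). A minimal sketch of the simple undiscounted, non-age-weighted formulation, with all inputs hypothetical:

```python
def dalys(deaths, life_expectancy_at_death, cases, disability_weight, duration_years):
    """DALYs = years of life lost (YLL) + years lived with disability (YLD).

    Simple undiscounted, non-age-weighted formulation:
      YLL = deaths x standard remaining life expectancy at age of death
      YLD = incident cases x disability weight x average duration of disability
    """
    yll = deaths * life_expectancy_at_death
    yld = cases * disability_weight * duration_years
    return yll + yld

# Hypothetical condition: 10 deaths at a mean remaining life expectancy of
# 30 years, plus 200 non-fatal cases with disability weight 0.2 lasting
# 1 year on average.
print(dalys(10, 30, 200, 0.2, 1))  # 340.0 DALYs
```

The disability weight (0 = full health, 1 = death-equivalent) is where the health-state preference judgments discussed above enter the calculation; different weighting methods can materially change the result, which is why the reference case asks for the weighting method to be clearly described.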
Depending on the scope of the decision problem however, the most appropriate outcome measure may sometimes be intervention- or disease-specific, and a generalizable outcome measure may be irrelevant or impractical to calculate. In all cases, a justification of the outcome measure chosen is required. Future iterations of the iDSI Reference Case will provide further guidance for researchers on the appropriate choice and calculation of an outcome measure. The fundamental consideration is that the choice of outcome measure is aligned to the needs of the intended decision maker and that the methods used to calculate the outcome measure are comprehensively and transparently described.
This section is concerned with the costs of delivering interventions, regardless of whether they are incurred by public sector bodies (eg. ministries of health), donors, or other organisations involved in the delivery of health interventions. It should be read in conjunction with Section 2.7 which deals with ‘where’ costs fall and costs outside the direct costs of health care delivery.
Importance to decision making
Decision-makers need to know the resource use and costs associated with different alternatives because more costly alternatives result in foregone benefits from other interventions (ie health opportunity costs), and less costly alternatives can free financial resources for investment in other interventions. Costs and resource use need not be included where they do not differ between the evaluated alternatives, as they will not affect the difference in cost between alternatives; some central-level management costs are an example. However, caution should be taken to ensure there is no significant cost difference before resource use or costs are excluded.
Overall costs of interventions (excluding costs that do not vary across alternatives) should be reported as a key component of cost-effectiveness. Where data are adequate, costs of resource inputs to deliver interventions should also be reported. In addition to reporting costs, quantities of resources should be reported separately from their unit costs/prices to help decision-makers assess whether quantities used are appropriate and valid within their jurisdictions, and whether unit costs/prices used in the evaluation are still relevant at the time a decision is made.
In some cases resource items may have been donated, or their costs may fall on the budgets of more than one organization involved in the delivery of health interventions (including international organizations). All resource items involved in the direct delivery of health interventions should be costed because there will always be opportunity costs, even if these fall in other jurisdictions (eg. if a country attracts international funding for the delivery of an intervention). Decision-makers may also be concerned about the source of funds. See Section 2.7 on reporting costs falling on more than one organization’s budget.
The average unit costs of an item of resource use may depend upon the scale at which a new intervention is delivered and the scope of other interventions delivered concomitantly. For example, the cost of each visit to a clinic nurse may differ depending on overall patient throughput in that clinic (scale), as well as on other interventions delivered at the clinic (scope). Average costs that fall (rise) with increasing scale and scope of delivery are called economies (diseconomies) of scale and scope.
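The scale effect described above can be illustrated with a simple average cost function in which fixed costs are spread over throughput; the clinic figures below are entirely hypothetical:

```python
def average_cost_per_visit(annual_fixed_costs, variable_cost_per_visit, visits_per_year):
    """Average cost per clinic visit: fixed costs spread over throughput,
    plus the variable cost of each visit. Falling average cost with rising
    throughput is an economy of scale."""
    return annual_fixed_costs / visits_per_year + variable_cost_per_visit

# The same nurse-led clinic service costed at two levels of patient throughput
# (hypothetical: $50,000 annual fixed costs, $4 variable cost per visit).
print(average_cost_per_visit(50_000, 4.0, 5_000))   # 14.0 per visit
print(average_cost_per_visit(50_000, 4.0, 25_000))  # 6.0 per visit
```

A real cost function would also need terms for scope (other interventions sharing the same fixed costs), but even this sketch shows why unit costs observed at pilot scale may mislead when an intervention is rolled out.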
Economies of scale and scope may be important and should be incorporated when feasible, particularly when alternatives are likely to differ in their scale and scope of implementation (see Section 2.2 on comparators and Section 2.10 on budget impact). However, in many cases, data from within a jurisdiction will be inadequate to reasonably establish this. Other social objectives may also be important when alternatives involve delivery at different scales and in different locations (eg. if an evaluation involves one comparator being delivered in a community or primary health care setting while another is delivered in a hospital setting). Caution should therefore be applied when applying cost functions if these cannot be supported with reliable evidence, or when other non-health effects may also have social value (see Sections 2.7 on non-health effects and 2.11 on equity).
Primacy should be placed on the transparency, reasonableness and reproducibility of cost estimates, so that different decision-makers can assess whether the results are generalisable to their jurisdictions.
Costs should be estimated so that they reflect the resource use and unit costs/prices that are anticipated when interventions are rolled out in real health care settings. Clinical trial protocol-driven costs not anticipated with rollout should be excluded. Conversely any costs not incurred in a study setting but anticipated in real health care settings should be included.
Overall costs of interventions should be reported as well as costs of resource inputs. In addition, wherever possible it is useful to report quantities of resources separately from their unit costs/prices. In some cases top-down facility level cost estimates provide a useful source of data, particularly if the available resource use and unit cost/price data are insufficiently granular.
Capital and fixed costs can be annualized over the period of implementation, but decision-makers should also consider when costs are likely to be incurred (see Part 2.10 on budget impact).
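Annuitization is commonly done by dividing the capital outlay by a standard annuity factor, yielding an equivalent annual cost. A minimal sketch under that assumption, with hypothetical figures:

```python
def equivalent_annual_cost(capital_outlay, discount_rate, years):
    """Annuitize a one-off capital cost over its period of use.

    Uses the standard annuity factor (1 - (1 + r)^-n) / r, so the stream of
    equal annual payments has the same present value as the upfront outlay.
    """
    if discount_rate == 0:
        return capital_outlay / years
    annuity_factor = (1 - (1 + discount_rate) ** -years) / discount_rate
    return capital_outlay / annuity_factor

# Hypothetical: $10,000 of equipment used over 5 years at a 3% discount rate.
print(round(equivalent_annual_cost(10_000, 0.03, 5), 2))  # 2183.55 per year
```

Note that straight-line division ($2,000 per year here) understates the annual cost because it ignores the time value of the money tied up in the equipment.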
It is recommended that where possible researchers corroborate their cost estimates with actual costs incurred when implementing the intervention(s) under evaluation, or other similar interventions, in real health care settings, for example using data from feasibility studies or pragmatic trials. This ‘reality check’ will assist the users of economic evaluations to relate the findings to current practice and costs. Where notable differences between predicted (modelled) and actual costs exist, reasons for these differences should be explored.
All resource items involved in the direct delivery of health interventions that are expected to differ between alternatives should be costed. This includes donated inputs (see Part 2.7 on reporting costs falling on more than one organization’s budget). Any resource items that do not differ across alternatives may be excluded.
Economies of scale and scope that are expected with the delivery of interventions should be estimated and incorporated where feasible (NB Part 2.2 recommends that interventions are evaluated at different scales of implementation). However, these must be based on reliable data from the jurisdiction concerned. Cost functions should not be imposed if unsupported by reliable evidence.
The mechanism of delivering an intervention is not set exogenously – different delivery mechanisms are usually feasible, and the choice of delivery mechanism should be consistent with the overall objectives of the health system. Researchers should consider heterogeneity of beneficiaries (see Part 2.8), impacts on other budgets, including those of individuals (see Parts 2.7 and 2.10), and equity considerations (Part 2.11) when using cost functions to evaluate alternative delivery mechanisms.
Costs should be reported in US dollars and in local currency, and any costs estimated in other currencies should be converted to US dollars and local currency. The date and source of the exchange rate used should be reported, as well as whether the exchange rate is unadjusted (real) or adjusted for purchasing power parity (PPP). Further iterations of the iDSI Reference Case will contain specifications regarding the use of PPP and real exchange rates.
Importance for decision making
The time horizon is the period over which the costs and effects of the intervention and comparators are calculated. The time horizon used in an economic evaluation is important because any decision made at a point in time will have implications in terms of net intervention effects and resource use extending into the future. The economic evaluation should use a time horizon that is long enough to capture all costs and effects relevant to a decision problem.
When projecting costs and effects into the future, those costs and effects need to be discounted to reflect their value at the time the decision is being made. This ensures that the time preferences of the population affected by the decision are taken into account.
The nature of the interventions and comparators in the decision problem will largely define the appropriate time horizon. The time horizon will often be ‘lifetime’ – ie. the natural length of life of the population cohort in which the analysis is being undertaken. Whether the time horizon is sufficiently long can be confirmed by testing how the results change when the time horizon is varied.
It is never appropriate for the time horizon to be determined by the length of time over which evidence is available, as this would lead to incomplete information being made available to the decision-maker. Where data is not available to inform an appropriate time period, some projection of costs and effects into the future will be required.
There are divergent views on the appropriate discount rate to be used in economic evaluations. In addition, the iDSI Reference Case will be used in economic evaluations to inform decisions across constituencies where there will be legitimate and often substantial differences in time preferences for health and wealth. However, to facilitate comparability, economic evaluations adhering to the iDSI Reference Case should use an annual discount rate of 3% for both costs and effects as a stated methodological specification.
Use of alternative discount rates is encouraged where appropriate to the decision problem and constituency. These should be presented clearly with justification for their use. In cases where the costs and effects are of particularly long duration (eg. a time horizon of more than 30 years into the future is used) the impact of lower discount rates should be explored in a sensitivity analysis.
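Discounting to present value, and the sensitivity analysis on the rate, can be sketched as follows (the cost stream is hypothetical):

```python
def present_value(amounts_by_year, annual_rate):
    """Discount a stream of future costs or effects to present value.

    amounts_by_year[0] occurs now, amounts_by_year[1] in one year, and so on;
    each amount is divided by (1 + r)^t.
    """
    return sum(a / (1 + annual_rate) ** t for t, a in enumerate(amounts_by_year))

# Hypothetical stream: 100 per year for 3 years, discounted at the 3% base case.
stream = [100, 100, 100]
print(round(present_value(stream, 0.03), 2))  # 291.35
# Sensitivity analysis at alternative rates (0% and 6%).
print(round(present_value(stream, 0.00), 2))  # 300.0
print(round(present_value(stream, 0.06), 2))  # 283.34
```

Over a 3-year stream the choice of rate barely matters; over a 30-year-plus horizon the gap between 0%, 3% and 6% compounds dramatically, which is why the reference case singles out long time horizons for lower-rate sensitivity analysis.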
Importance to decision-making
Most economic evaluations of health interventions are concerned with how available healthcare resources (eg. the relevant health budget) can be allocated to maximize gains in health outcomes. This requires estimating the direct health intervention costs (accruing to the available health budget) and outcomes that result from delivery of the alternative interventions being considered. If funding an intervention generates more ‘health’ than could be generated from using that funding elsewhere (ie. health opportunity costs) it is considered to be a ‘cost effective’ use of resources.
In addition to health outcomes and direct costs accruing to the health budget, other costs and consequences of interventions may also be relevant, depending on the context of the decision. They include wider impacts on families, communities, and other sectors of the economy (eg. on educational outcomes). They may also include other (direct and indirect) costs that are incurred in gaining access to an intervention or that result from associated health outcomes. For instance, these may include direct costs falling on individuals and families in accessing health interventions (eg. travel, out-of-pocket and care costs), indirect time costs (eg. relating to the productivity of individuals and informal carers), as well as costs falling on other sectors of the economy.
Non-health effects and costs that fall outside the health budget may be important because alternative interventions may result in different non-health effects that have social value. They should therefore be included in the analysis but reported separately, with a justification for the selection of the non-health effects and an explanation of how they may be valued.
Deciding which non-health effects and costs that fall outside the health budget should be included in primary analyses is troublesome as it is not clear which costs and effects are deemed socially valuable. Where there is no consensus on how to codify societal preferences, conflicts between different elements of social value may result. A particular concern is that health resources, primarily intended to generate ‘health’, may be used to meet other objectives that society may or may not deem to be as valuable as health itself.
As a result of these difficulties in aggregating different effects, primary analyses should only reflect direct costs to the health budget and direct health outcomes. By presenting non-health effects separately, decision-makers are able to draw their own conclusions as to the relative merits of the different effects.
The issue of whether direct costs faced by individuals and their families should be incorporated into an analysis is also relevant. In health systems in which a significant proportion of healthcare is funded through out-of-pocket (OOP) payments, there may be good reasons to adopt a perspective broader than that of the health care provider when direct OOP costs substitute for costs that would otherwise fall on the health budget. Researchers should be alert to whether alternatives shift costs to individuals, and may choose to incorporate direct OOP costs into primary analyses in such cases. Of central concern are the opportunity costs faced in each case and how these are likely to be valued by society (this may also include concern for financial protection).
Society often values both health and non-health effects differently depending upon who benefits (see Section 2.11 on equity). Similarly, direct health intervention costs may impose different opportunity costs depending upon who is funding the intervention. In many LMICs, health interventions rely on direct funding from multiple sources (for instance, national ministries of health may fund recurrent costs, whereas international donors may fund drugs or certain technologies). In these instances, donor funds (including the direct provision of drugs and health care materials) may form a significant proportion of the budget available for health. It would therefore be inappropriate for the analysis to disregard the direct impact of an intervention on donor funds; however, it is important to recognise the different sources of the available health budget.
For these reasons, it is recommended that direct costs, health effects, non-health effects and costs that fall outside the health sector are disaggregated, so that it is clear who the beneficiaries and funders of interventions are. This facilitates exploration of health system constraints, budget impacts and opportunity costs (see Part 2.10), and equity issues (see Part 2.11), and enables decision-makers to make assessments of the relative values of each in their own jurisdictions.
Non-health effects can be valued and presented in different units. Valuing non-health effects monetarily has the benefit that both outcomes and costs can be represented in a common metric, but there are contentious methodological issues relating to how to appropriately monetarise outcomes. Alternatively, outcomes can be reported qualitatively or valued in other units, and costs reported monetarily. Thorough exploration of how to value non-health effects is therefore recommended.
The reported base case should reflect direct health care costs and health outcomes, and the analysis should adopt a disaggregated societal perspective, so that the funders and beneficiaries of health interventions can be clearly identified. Inclusion of particular costs and effects within the societal perspective may differ depending on the decision problem and context.
Direct costs incurred by funders, where these costs would otherwise accrue to government health budgets, should be included in the base case. However, additional analyses should explore the impact of donor funding, and direct health care costs should be disaggregated between funders if it is known that they contribute differentially to the delivery of interventions.
OOP costs falling on individuals can be included if these displace costs that would otherwise fall on the health budget; however, the impact of excluding OOP costs should be explored in sensitivity analyses.
Where there are believed to be important non-health effects and costs falling outside the health budget these should be included in the analysis but reported separately, with a justification for their selection and an exploration of the ways they can be valued. Any non-health effects and costs that fall outside the health budget that potentially conflict with other social objectives should be highlighted and discussed. For example, a particular intervention may be expected to have productivity benefits but its adoption may have an adverse impact on population equity.
Decision-makers should be made aware that interventions with positive incremental direct health costs are also likely to impose non-health opportunity costs associated with health interventions that are foregone (as interventions foregone are also likely to have non-health effects). For example, an intervention for HIV/AIDS may have non-health effects but if adopted may displace interventions for maternal health that have equal or even greater claims to generating positive social value.
Researchers should ensure that non-health effects and costs are not double counted, especially in cost-utility analyses. Double counting can occur where a particular effect (or cost) of an intervention relative to a comparator is attributed to more than one outcome measure – for example, there are debates as to the extent that productivity effects are already captured in quality of life measures.
Direct health costs should be disaggregated by funder. Both health and non-health effects should be disaggregated by characteristics of recipients and beneficiaries (see Part 2.11 on equity); and, in the case of non-health effects, the sector or area in which these are incurred.
Importance to decision-making
It is important to make a clear distinction between uncertainty, variability and heterogeneity. Uncertainty refers to the fact that we do not know what the expected effects of an intervention will be in a particular population of individuals. This remains the case even if all individuals within this population have the same observed characteristics. Variability refers to the fact that responses to an intervention will differ within the population, or even within a subpopulation of individuals or patients with the same observed characteristics. Heterogeneity refers to those differences in response that can be associated with differences in observed characteristics, for example where the sources of natural variability can be identified and understood. As more becomes known about the sources of variability, the patient population can be partitioned into subpopulations or subgroups, each with a different estimate of the expected effect and costs of the intervention, and of the uncertainty associated with these.
An exploration of heterogeneity thus enables decision-makers to consider whether an intervention should be made available to certain groups of individuals with greatest capacity to benefit (or in whom the costs of provision are worthwhile). It means they have the opportunity to make different decisions for different groups of individuals that lead to improved health outcomes overall given the available resources.
There may, however, be good reasons not to make different decisions about provision based on certain types of observed characteristics. These reasons might include:
i) the difficulty and/or cost of maintaining differential access;
ii) adverse equity implications; or
iii) social values that would not support discrimination based on certain types of characteristics.
However, even in these circumstances an exploration of heterogeneity is important for a number of reasons:
- The correct assessment of the cost and effects of providing an intervention across some or all subgroups depends on the effects and costs in each group.
- It enables decision-makers to consider the opportunity costs (the health foregone) of acceding to concerns about differential provision across identifiable groups, adding to consistency and accountability in the application of other social values.
- It can provide a foundation for exploring the health equity implications of how an intervention might be provided, and can identify potential trade-offs between equity objectives and overall health benefits (e.g. for equity reasons it might be considered worthwhile providing an intervention to groups where provision is more costly even though this may reduce health benefits overall).
- It can provide a better understanding of the distributional issues associated with an intervention. This can form the basis for further more targeted research and inform other related decisions.
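The first point above, that population-level estimates depend on the effects and costs in each group, amounts to a share-weighted aggregation across subgroups. A minimal sketch with hypothetical subgroup figures:

```python
def population_totals(subgroups):
    """Aggregate expected incremental cost and effect across subgroups.

    subgroups: iterable of (population_share, incr_cost, incr_effect);
    the shares should sum to 1. Returns share-weighted (cost, effect).
    """
    cost = sum(share * c for share, c, _ in subgroups)
    effect = sum(share * e for share, _, e in subgroups)
    return cost, effect

# Hypothetical: 70% low-risk (small benefit), 30% high-risk (large benefit)
cost, effect = population_totals([(0.7, 120.0, 0.5), (0.3, 200.0, 3.0)])
# cost ~144.0, effect ~1.25
```

Restricting provision to the high-risk subgroup, or extending it to both, can then be compared directly in terms of the resulting costs and effects.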
Since any observed characteristics that affect the health benefits and costs of an intervention are relevant in principle, the exploration of heterogeneity should include subgroups where there is good evidence that the relative effect of the intervention will differ (e.g. pre-specified subgroups within a clinical trial). However, subgroup analysis can also be considered when external evidence suggests (and there are good reasons to believe) that relative effects differ between subgroups, even if they have not been pre-specified.
The exploration of heterogeneity should not be restricted to differences in relative effects between different groups of individuals or patients. It should also include exploration of characteristics that will influence absolute health effects, even where the relative effects are the same, such as differences in baseline risk of an event or in the incidence and prevalence of a disease.
There may also be characteristics which are unrelated to relative effects or baseline risk but nevertheless influence the direct costs of provision or other health intervention costs and benefits, such as geographical location or differences in other health care provision.
The question of which sets of observed characteristics to explore should be informed by:
- the evidence base regarding differences in relative effect, baseline risk or other relevant characteristics
- whether any differences are likely to have an important influence on costs and effects.
The analysis should justify how the exploration of heterogeneity has been undertaken with respect to these two considerations. A presumption that certain observed characteristics should not be used to offer differential access to an intervention is not a justification for failing to explore the implications of heterogeneity.
Importance to decision making
Decisions regarding resource allocation in health are unavoidable. All decisions carry the risk that a better course of action could have been chosen, so in making a decision uncertainty must be acknowledged and measured.
For the chances of reaching a good decision to be optimised, the decision-maker needs to be aware of the magnitude of uncertainty in the results of an economic evaluation. All economic evaluations contain uncertainty, and it is important that all types of uncertainty are appropriately presented to the decision-maker. These include uncertainty about the sources of the parameters used in the analysis, the precision of those parameters, and the accuracy of the models or simulations used to represent how the costs and effects of the intervention and comparators will behave. The characterisation of this uncertainty enables the decision-maker to make a judgement based not only on a likely estimate of the incremental costs and effects of an intervention, but also on the confidence that those costs and effects reflect reality.
Characterising the uncertainty will also enable the decision-maker to be informed about courses of action that could reduce this uncertainty. This could involve delaying implementation to allow more evidence to be gathered. In this situation, appropriately characterising uncertainty will allow the decision-maker to make an informed trade-off between the value of new information, the implications of potentially delaying treatment to patients or individuals, and the irrecoverable costs associated with implementing funding for an intervention.
There are a number of potential biases and uncertainties in any economic evaluation, and these should be identified and quantified where possible. There are three types of bias or uncertainty to consider:
Structural uncertainty – for example in relation to the categorisation of different states of health and the representation of different pathways of care. These structural assumptions should be clearly documented and the evidence and rationale to support them provided. The impact of structural uncertainty on estimates of cost effectiveness should be explored by separate analyses of a representative range of plausible scenarios.
Source of values to inform parameter estimates – the implications of different estimates of key parameters (such as estimates of relative effectiveness) must be reflected in sensitivity analyses (for example, through the inclusion of alternative sources of parameter estimates). Inputs must be fully justified, and uncertainty explored by sensitivity analyses using alternative input values.
Parameter precision – uncertainty around the mean health and cost inputs in the model. Distributions should be assigned to characterise the uncertainty associated with the (precision of) mean parameter values. Probabilistic sensitivity analysis (PSA) is preferred, as this enables the uncertainty associated with parameters to be simultaneously reflected in the results of the model. In non-linear decision models – where there is not a straight-line relationship between the inputs and outputs of a model (such as Markov models) – probabilistic methods provide the best estimates of mean costs and outcomes. Simple decision trees are usually linear. The mean value, the distribution around the mean, and the source and rationale for the supporting evidence should be clearly described for each parameter included in the model. Evidence about the extent of correlation between individual parameters should be considered carefully. Assumptions made about the correlations should be clearly presented.
Where lack of evidence restricts reliable estimations of mean values and their distributions, unsupported assumptions or exclusion of parameters in a PSA will limit its usefulness to characterise uncertainty, and may give a false impression of the degree of uncertainty. For this reason, PSA is not explicitly required in all economic evaluations at this time; however any decision not to conduct PSA should be clearly and transparently explained in the analysis. Future iterations of the Gates-RC will provide further specification on the application of PSA.
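As an illustration of the probabilistic approach, the sketch below samples uncertain parameters from assumed distributions and propagates them through a toy non-linear model. The distributions, parameter values and model structure are invented for illustration only and are not part of the Reference Case:

```python
import random

def run_psa(n_draws=10_000, seed=1):
    """Toy probabilistic sensitivity analysis: draw each uncertain parameter
    from its distribution and record the model outputs, so that the reported
    means reflect all parameter uncertainty simultaneously."""
    random.seed(seed)
    costs, effects = [], []
    for _ in range(n_draws):
        rr = random.lognormvariate(-0.35, 0.15)   # uncertain relative risk
        unit_cost = random.gammavariate(25, 4)    # uncertain unit cost, mean 100
        # Toy non-linear model: events avoided in a cohort of 1,000 people
        # with a 20% baseline event risk drive both costs and health effects.
        events_avoided = 1000 * 0.2 * (1 - rr)
        costs.append(unit_cost * 1000 - events_avoided * 150)
        effects.append(events_avoided * 0.8)      # DALYs averted per event: 0.8
    return sum(costs) / n_draws, sum(effects) / n_draws

mean_cost, mean_effect = run_psa()
```

The full sets of sampled costs and effects (not only their means) can then be summarised for the decision-maker, for example as cost-effectiveness acceptability curves.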
Importance for decision-making
It is important to determine the net total costs involved in the deployment of a health intervention on a particular scale (see Section 2.8), as these are also a measure of the value of what must be foregone.
The costs of an intervention (even when capital investment is not required) are unlikely to be evenly spread over time: there are often large initial costs offset by later health benefits and, at times, cost savings. Decision-makers responsible for annual budgets must assess the timing of the impact as well as the magnitude of the expected incremental costs when deciding if the benefits of an intervention exceed the health opportunity costs. This becomes especially important when later health benefits or cost savings are uncertain, since implementation will require the commitment of resources that may be unrecoverable should subsequent evidence suggest the intervention is not worthwhile (or not cost effective) and should be withdrawn (see Part 2.9).
In addition to expenditure constraints, decision-makers may be subject to other infrastructural or resource limitations, such as lack of laboratory capacity or workforce limitations. Decision-makers (at national, regional, or local level) must be able to assess the impact of an intervention in each of these domains to properly determine whether the benefits exceed the health opportunity costs. This may also facilitate some consideration of which constraints have the greatest impact, and the potential value of policies that modify these, such as removing restrictions on the use of donated resources or increasing investment in training health workers.
Since non-health benefits and costs do not impact health budgets or other constraints on health care, they should be assessed separately (see Part 2.7).
Budget impact should be presented in a manner that is relevant to the decision problem and the needs of the intended decision-maker. The budget impact should be disaggregated and reflect the costs to all parties as a result of implementation of the intervention (cost outputs). This includes (but is not limited to) impact on government and social insurance budgets, households and direct out-of-pocket expenses, third-party payers, and external donors. Budget impact should be projected annually for a period appropriate to the decision problem.
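A minimal way to report such a disaggregation is a small table of projected annual costs per funder, with annual totals. The funder categories and figures below are purely illustrative:

```python
# Hypothetical projected budget impact by funder, for years 1-3
budget_impact = {
    "government":    [120_000, 95_000, 95_000],
    "donors":        [60_000, 60_000, 40_000],
    "out_of_pocket": [15_000, 15_000, 15_000],
}

def annual_totals(impact_by_funder):
    """Sum the projected impact across all funders for each year."""
    return [sum(year) for year in zip(*impact_by_funder.values())]

totals = annual_totals(budget_impact)  # [195000, 170000, 150000]
```

Keeping the per-funder rows alongside the totals makes it clear who bears the costs in each year, as the principle above requires.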
Importance to decision making
The equity implications of implementing an intervention within a given constituency are important, because decisions concerning resource allocation in health frequently reflect considerations other than efficiency. Important equity considerations may include issues such as whether equal access is given to those in equal need, whether resources are distributed fairly to those with different levels of need, or recognising that interventions such as smoking cessation programmes may simultaneously increase population health and health inequalities. Limiting an economic evaluation to a determination of cost-effectiveness across a population as a whole ignores differences in the capacity to benefit and/or in access to care, and may prevent the decision-maker from appropriately considering the differential impacts of a decision on different subgroups within the population.
It is worth noting that in adhering to certain principles in the Gates-RC some equity implications will be considered implicitly. For example, exploring heterogeneity may involve a consideration of distributional implications of implementing an intervention. However, adherence to the other principles of the Gates-RC will not generally be sufficient to ensure that the equity implications of a decision problem have been adequately explored. For this reason, exploration of equity is a principle that should be addressed in its own right in BMGF-funded economic evaluations.
There are many dimensions to assessing the equity implications of a proposed intervention. Methods employed may be qualitative, such as the seven-step analysis (as used in Miljeteig 2010), or may involve the quantitative assessment of distributive impact and expected trade-offs, using established mechanisms such as the Atkinson index or Gini index. At the most basic level, an exploration of the equity impact may involve a description of particular groups within the constituency that may be disproportionately affected (positively or negatively) by a decision. Adherence to the equity principle is not, however, simply a matter for reporting results. Equity implications should be considered at all stages of an economic evaluation, including the design, analysis and reporting stages. This is important for all types of economic evaluation, including those undertaken alongside clinical trials.
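As an example of the quantitative route, the Gini index mentioned above can be computed directly from a distribution of outcomes. A minimal sketch using the mean-absolute-difference definition, with hypothetical outcome figures for five population groups:

```python
def gini(values):
    """Gini index of a distribution: 0 for perfect equality, approaching 1
    for maximal inequality. Uses the mean-absolute-difference definition."""
    n = len(values)
    mean = sum(values) / n
    total_abs_diff = sum(abs(a - b) for a in values for b in values)
    return total_abs_diff / (2 * n * n * mean)

# Hypothetical health outcomes across five population groups:
unequal = gini([2, 4, 6, 18, 20])        # 0.4
equalised = gini([10, 10, 10, 10, 10])   # 0.0
```

Comparing the index before and after a proposed intervention gives one simple summary of its distributional impact, to be read alongside the qualitative considerations described above.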
It is not proposed that the Gates-RC be prescriptive about the methods to be used or how equity implications are presented. However, as experience increases, subsequent iterations of the Gates-RC might provide greater guidance for researchers and decision-makers in considering the equity implications of resource allocation decisions in health.
The iDSI Reference Case initiative has provided a foundation for a body of economic evaluation methods research that is of particular importance for policymaking:
1. Assessing relevant evidence for economic evaluation and decision-making (linked to Principle 3: Perspective)
2. Reflecting non-budgetary constraints in economic evaluation (linked to Principle 10: Budget impact and other constraints)
3. The choice of cost-effectiveness thresholds to inform decision-making (Standalone topic also linked to Principle 10)
In order to investigate these methods topics in greater detail and provide guidance to analysts and policymakers, the iDSI Methods Working Groups were established in a collaboration between the University of Glasgow and the London School of Hygiene and Tropical Medicine (leading work on assessing relevant evidence); Erasmus University, Rotterdam (leading on reflecting non-budgetary constraints; particularly human resource constraints); and the Centre for Health Economics, University of York (leading on the choice of cost-effectiveness thresholds).
The findings from the iDSI Methods Working Groups are presented in the following reports:
Assessing relevant evidence
Heggie R, Hawkins N, Wu O. Assessing relevant evidence: Evidence Working Group report
Human resources constraints
Van Baal P, Thongkong N, Severens J L. Human resource constraints and the methods of economic evaluation of health care technologies
Revill P, Ochalek J, Lomas J, Nakamura N, Woods B, Rollinger A, Suhrcke M, Sculpher M, Claxton K. Cost-effectiveness thresholds: guiding health care spending for population health improvements
These reports are intended to be guides to the conduct and use of economic evaluation studies and are not intended to be prescriptive or to restrict the use and development of other types of methods that may also adhere to the Principles.
Explore the practical application of the iDSI Reference Case in different settings, including a summary of the case studies highlighted below.
Informing decisions in global health
Claxton, K., Ochalek, J., Revill, P., Rollinger, A. and Walker, D. (2016) Informing Decisions in Global Health. Cost per DALY thresholds and health opportunity costs.
Using cost-effectiveness thresholds
Revill P, Walker S, Madan J, et al. (2014) Using cost-effectiveness thresholds to determine value for money in low- and middle-income country healthcare systems: Are current international norms fit for purpose? CHE Research Paper 98.
Country-level cost-effectiveness thresholds
Woods B, Revill P, Sculpher M and Claxton K. (2015) Country-level cost-effectiveness thresholds: Initial estimates and the need for further research. CHE Research Paper 109.
Cost-effectiveness workshop summary
Within-country threshold estimation: Indonesia
Led by Marc Suhrcke (University of York), within-country analysis is being conducted to determine cost-effectiveness thresholds in LMICs. Related publications and reports to date include:
Exploration of non-budgetary constraints in a LMIC setting
Peter Smith (Imperial College London) is leading on developing a methods framework for assessing the cost-effectiveness of different delivery platforms (e.g. primary vs. secondary care) in the context of Universal Health Coverage. Related publications and reports so far include:
Departures from cost-effectiveness recommendations
Hauck, K.D., Thomas, R and Smith, P.C. (2016) Departures from cost-effectiveness recommendations: The impact of health system constraints on priority setting
Including human resource constraints
Thongkong, N, van Baal, P. and Severens, J.L. (2015) Including human resource constraints in health economic evaluations.
The politics of priority setting
Hauck, K and Smith, P. (2015) The politics of priority setting in Health: A political economy perspective. CGD Working Paper.
Cost-effectiveness workshop summary
Using the RC for transmission models, and presenting disaggregated social perspectives
Professor Anna Vassall and colleagues at LSHTM are leading an exploration of presenting disaggregated societal perspectives when conducting economic evaluations of multi-sectoral interventions, and of applying the iDSI RC to transmission-model-based economic evaluations.
- Identifying key challenges and solutions in applying the iDSI Reference Case to economic evaluations using transmission models with a particular focus on principles 11 (equity) and 8 (heterogeneity)
- Developing methods for reporting and analysing the “disaggregated societal perspective” in the Reference Case (principle no. 7)
The initial research that led to the development of the iDSI Reference Case found wide variation in the methodological quality of economic evaluations conducted in LMIC settings.
The Reference Case consolidated this research and created central principles for economic evaluation, but further methodological research is needed to ensure economic evaluation is a useful tool for good decision making in health in LMICs.
iDSI welcomes further initiatives related to methodological research and utilisation of analysis.
Guidelines for benefit-cost analysis
Harvard T.H. Chan School of Public Health are developing guidelines to encourage the conduct of high quality benefit‐cost analyses.
Global health cost consortium
The Global Health Cost Consortium will provide decision-makers with improved resources to estimate the costs of HIV and tuberculosis (TB) programs.