Multiple Criteria Decision Analysis (MCDA) for Health Care Decision Making – overview of guidelines


Name Affiliation
Aleksandra Zawodnik
Department of Experimental and Clinical Pharmacology – Head of Department: Prof. Dagmara Mirowska-Guzel, Medical University of Warsaw, Warsaw, Poland
Maciej Niewada
Department of Experimental and Clinical Pharmacology – Head of Department: Prof. Dagmara Mirowska-Guzel, Medical University of Warsaw, Warsaw, Poland
contributed: 2018-11-25
final review: 2019-02-03
published: 2019-02-05

The multidimensional context of decision making in health care implies the need for a structured approach, which can be supported by Multiple Criteria Decision Analysis (MCDA). Although MCDA is increasingly discussed and used in health care decision making, there are still only a few publications offering guidelines and best practice for conducting good-quality research. This paper aims to compare the published guidelines for conducting and implementing MCDA in health care decision making. The five most recent publications (either guidelines or reviews) were identified. All publications framed MCDA as a continuous step-by-step process, which should start with defining the decision problem, followed by selecting criteria, measuring performance, choosing the method and conducting scoring and weighting, aggregating values and weights, conducting sensitivity analysis and presenting the results. This review identifies the key steps and methods used in MCDA as reported in the guidelines. We aimed to compare the publications and report on the best-recognized and most often adopted approaches and tools in MCDA.

Keywords: decision making, health care, MCDA, multiple criteria decision analysis, multi criteria decision analysis, guidelines, best practice


Decision making in health care ranges from macro-level decisions of the payer on allocating scarce resources within a limited budget to patient-level decisions on alternative treatment options. Both decision levels may involve different stakeholders and require confronting trade-offs between the analyzed alternatives and prioritizing among them. Due to the complexity of such decisions, there is a need for a structured approach that enables different, usually unrelated criteria to be confronted. It is needed to avoid inconsistency, variability or a lack of predictability regarding a particular factor's or criterion's importance. [1, 2]

Definition of Multiple Criteria Decision Analysis (MCDA)

According to the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Task Force, Multiple Criteria Decision Analysis (MCDA) is a set of techniques based on using structured, explicit approaches to decisions involving multiple criteria, which can improve the quality of decision making. The Task Force also emphasizes that such an approach can ensure clarity in choosing the relevant criteria and their importance. [1]

The methodological approaches to MCDA can be based on modeling and non-modeling methods. Among the modeling approaches, value measurement models, outranking models and reference-level models (also known as goal or aspiration models) can be identified. Value measurement models are the most common in MCDA studies in health care. Non-modeling approaches include, e.g., "performance matrix/tables", which summarize the performance of the alternatives against each criterion. [1, 3]

Areas of implementation

MCDA methods can be implemented in health care decision making in different contexts and areas. According to the literature search conducted by Marsh et al., MCDAs were most commonly undertaken to support coverage/reimbursement decisions. [4] There are concerns that an approach focused on QALY-framed value (cost-utility analysis) cannot capture all relevant factors. In particular, for the assessment of orphan drugs or late-stage oncology treatments, standard economic evaluation may not be suitable. Therefore, the MCDA framework was proposed as a mechanism taking a broad spectrum of criteria into account. However, MCDA should not be perceived as an alternative to economic evaluation but rather as a complementary solution in the context of health technology assessment (HTA). MCDA can offer a wider perspective and a more comprehensive approach, and generally support decision making. [4,5,6,7]

MCDA used at the patient level can support prescribing or treatment management decisions. [4] These methods can be used to estimate the value of medical treatments from the patient perspective, e.g. using a probabilistic multi-criteria approach to determine the patient-weighted value of treatments and treatment outcomes. [2] Another example is shared decision making (SDM), which relates to decisions on treatment choice made by patients in cooperation with their doctors. [8]

MCDAs were less commonly implemented in authorization processes, in prioritization of research funding and in portfolio decision analysis (PDA). [1, 4] The approach proposed in 2007 by Mussen et al. was further implemented in registration procedures. [9] Drug-related benefit-risk assessments (BRA) are performed for a new drug during the marketing authorization process. [4, 10, 11] Both the US Food and Drug Administration and the European Medicines Agency, as well as the Pharmacoepidemiological Research on Outcomes of Therapeutics by a European Consortium, have proposed MCDA as a tool for a consistent and transparent approach to assessing drugs.

Other examples of implementing MCDA in health care are priority-setting frameworks used by budget holders to decide on allocating resources and to prioritize patients' access to health care. [1] A practical Polish example of an MCDA-like approach used by decision makers in funding decisions is the IOWISZ tool ("Evaluation Instrument of Investment Motions in Health Care"). [12] A descriptive analysis of different areas of health care decision making, with examples of the stakeholders involved, relevant criteria and types of decision, was proposed in the ISPOR guidelines by Thokala et al. [1]

Comparison of guidelines

Due to the challenges related to the many available MCDA methods and the limited experience with MCDA implementation in health care, there is a strong need for guidelines and descriptions of the key steps in conducting MCDA in this area. This paper aims to compare the published guidelines for conducting and implementing MCDA methods in health care decision making. It is therefore not a "cookbook" or manual for an MCDA exercise but rather a summary of the key steps and elements as well as the appropriate methods of conduct. A precise description of these methods is beyond its scope, and readers are referred to the identified and cited publications for details. A nonsystematic search was performed in PubMed with the search terms "MCDA", "multiple criteria decision analysis", "multi criteria decision analysis", "guidelines" and "recommendation". The references of identified studies were also searched. As a result, two guidelines publications and three reviews of MCDA methodology were identified and further analyzed.

The most comprehensive guidelines were published by the MCDA ISPOR Task Force. They capture the key steps and give an overview of the principal methods of MCDA used to support decision making regardless of the area of health care. [1, 13] Other guidelines found in the literature, published by Angelis and Kanavos, also in 2016, focus on the application of MCDA in value-based assessment of new medical technologies in the context of HTA. [5] No other specific guidelines were found to support the implementation of MCDA strictly in health care decision making. However, a few publications reviewed and discussed the methodology, key points, challenges and solutions in conducting MCDA for health care decision problems. [3, 11, 13] Table 1 shows the comparison of the identified literature and the steps in conducting MCDA proposed in each publication.

Table 1. Comparison of identified guidelines and reviews of MCDA methodology with described steps in conducting MCDA.



Context of the MCDA application

Steps in conducting an MCDA


ISPOR  [1, 13]


Description of the key steps and an overview of the principal methods of MCDA used to support decision making regardless of the area of health care.

1. Defining the decision problem

2. Selecting and structuring criteria

3. Measuring performance

4. Scoring alternatives

5. Weighting criteria

6. Calculating aggregate scores

7. Dealing with uncertainty

8. Reporting and examination of findings

Angelis et al. [5]


Robust methodological framework for the application of MCDA in the context of health technology assessment - proposition of the process based on multi-attribute value theory methods (MAVT).

1. Problem structuring – Establishing the decision context

2. Model building - Construction of value judgments

3. Model assessment - Construction of value judgments

4. Model appraisal - Elicitation of preferences

5. Development of action plans - Implementation of the results

Reviews of MCDA methodology in health care

Muhlbacher et al. [3]


Description of the MCDA framework and identification of the potential areas of MCDA use.

1. Definition of the decision problem

2. Determination of alternatives

3. Establishing the decision criteria

4. Measurement of target achievement levels

5. Scoring the target achievement levels

6. Weighting of target criteria

7. Aggregation of measurement results

8. Ranking of alternatives

Garcia-Hernandez et al. [11]


Identification of the challenges associated with bias control and presentation of the solutions to overcome them in MCDA for the Benefit-Risk Assessment (BRA) of medicines.

Common challenges and crucial steps:

1. Identification of criteria

2. Scoring

3. Weighting

4. Probabilistic sensitivity analysis

Diaby et al. [14]

Step-by-step guide on how to use MCDA methods for reimbursement decisions making in health care.

1. Definition of the problem

2. Identification of criteria for decision-making

3. Selection of the multi-criteria evaluation model

4. Application of MCDA method

5. Aggregating values and weights

6. Sensitivity analysis

7. Robustness analysis

8. Identifying the valid conclusions

The definitions of the steps vary and relate to the different contexts of the publications. The ISPOR Task Force guidelines are the most universal and comprehensive; therefore, the classification of steps specified in this publication will be used as a reference for comparing the detailed guidance related to each step (Table 2). We then discuss MCDA step by step and compare the identified state-of-the-art publications.

Defining the decision problem

Defining the decision problem is the first step of MCDA, identified both by the ISPOR Task Force guidelines and by all the other publications. It is also described as a crucial step of the MCDA process, ensuring that it will meet the decision makers' expectations. Garcia-Hernandez et al. briefly describe it as the "identification of elements such as indication, medical need, target population and available therapeutic options" and provide no more specific recommendations. [11] Four of the identified publications divide the types of decision problem by the MCDA's objective into: ranking alternatives, choice problems, sorting problems and understanding the value of alternatives. Nearly all of the publications stress that the considered alternatives should be identified. This is a very important part of MCDA, and Muhlbacher et al. therefore structure it as a separate, second step of the process. [1,3,5,13,14]

Both the ISPOR guidelines and Angelis et al. mention the need to identify country-specific stakeholders. [1,5,13] The Criteria, Alternatives, Stakeholders, Uncertainty and Environment (CAUSE) checklist [15] and soft systems methodology [16, 17] are given as examples of tools that may help structure the decision problem. Soft systems methodology is the analysis of complex decision problems in cases where there are different views on the definition of the problem, hence "soft problems". It is a widely used methodology based on seven steps, starting from formulating the decision problem, building conceptual models of the systems and comparing them with real-world situations. However, it was pointed out that its benefit is marginal. Additionally, the ISPOR guidelines propose validating and reporting the decision problem to decision makers, as for each individual MCDA step.

Selecting and structuring criteria

The next MCDA step relates to selecting and structuring criteria. As recommended sources of potential criteria, the publications repeatedly list literature reviews, focus groups and interviews on stakeholders' priorities. The ISPOR guidelines, as well as Angelis et al., Garcia-Hernandez et al. and Diaby et al., determine the key requirements and properties of the chosen criteria, including completeness, non-redundancy, non-overlap, preferential/preference independence (meaning that criteria must be mutually exclusive: an option's value score on one criterion can be elicited without knowledge of the option's performance on the remaining criteria (Angelis et al.)), understandability and comprehensiveness. Value trees are recommended as a tool supporting the identification and hierarchisation of the relevant criteria. Only the ISPOR guidelines discuss the optimal number of criteria: in a review of MCDA publications, the average number of criteria used in assessing interventions was 8.2 (ranging from 3 to 19). However, there is no rule on the optimal number. Angelis et al. recommend implementing the smallest set that ensures adequate capture of the decision problem, to avoid complexity. Validation and reporting of the chosen criteria is described as an important step in three publications. Muhlbacher et al. describe in detail the types of criteria that should be incorporated in health care evaluation, such as outcome parameters and benefit dimensions, measured by patient-relevant endpoints and clinical endpoints (including surrogates). [1,3,5,11,13,14]
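As a rough illustration of the value-tree idea, the hierarchy of criteria can be represented as a nested structure whose leaves are the criteria actually scored. The criteria names below are invented for illustration only and are not taken from any of the cited guidelines; a minimal sketch in Python:

```python
# Illustrative value tree for a hypothetical drug appraisal (criteria names
# are examples only, not taken from any of the cited guidelines).
value_tree = {
    "Overall value": {
        "Therapeutic benefit": ["Overall survival", "Quality of life"],
        "Safety": ["Serious adverse events", "Discontinuations"],
        "Other considerations": ["Disease severity", "Unmet need"],
    }
}

def leaf_criteria(tree):
    """Collect the leaf criteria on which alternatives will be scored."""
    leaves = []
    for value in tree.values():
        if isinstance(value, dict):
            leaves.extend(leaf_criteria(value))
        else:
            leaves.extend(value)
    return leaves

criteria = leaf_criteria(value_tree)
# Non-redundancy check: no criterion should appear twice in the tree.
assert len(criteria) == len(set(criteria)), "criteria should be non-redundant"
print(criteria)
```

Such a structure makes the completeness and non-redundancy checks mentioned above mechanical: the leaves are the full criteria set, and duplicates are immediately visible.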

Measuring performance

The guidance related to measuring performance focuses mainly on the sources of data on the alternatives' performance, which include high-quality clinical data such as systematic reviews and meta-analyses, followed by experts' and patients' opinions (see Table 2). Only the ISPOR guidelines recommend the "performance matrix" (consequence table) as a tool to summarize and present performance. The validation of the performance matrix is also described in the ISPOR guidelines.

Table 2. Comparison of the identified publications (ISPOR guidelines [1,13]; Angelis et al. [5]; Muhlbacher et al. [3]; Garcia-Hernandez et al. [11]; Diaby et al. [14]) regarding description of the "measuring performance" step.

Measuring performance:

- Collect data about the alternatives' performance on the criteria: standard evidence synthesis (systematic reviews and meta-analyses); elicitation of expert opinion, also patients', in the absence of "harder" data.

- Report and justify the sources used to measure performance.

- Summarize alternatives' performance: the "performance matrix" should include average performance, the variance in this estimate and the sources of data.

- Validate and report the performance matrix: presentation of the performance matrix to decision makers and experts for confirmation.
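A performance matrix of the kind recommended by the ISPOR guidelines can be sketched as a simple table of alternatives against criteria. The alternatives, criteria and figures below are invented for illustration:

```python
# Minimal "performance matrix" (consequence table) sketch: rows are
# alternatives, columns are criteria. All figures are invented.
criteria = ["Risk reduction (%)", "Serious AEs (%)", "Annual cost (kEUR)"]
performance = {
    "Drug A": [4.0, 1.2, 12.0],
    "Drug B": [2.5, 0.4, 6.0],
    "No treatment": [0.0, 0.0, 0.0],
}

def print_matrix(criteria, performance):
    """Print the consequence table with alternatives as rows."""
    print(f"{'Alternative':<14}" + "".join(f"{c:>22}" for c in criteria))
    for alt, row in performance.items():
        print(f"{alt:<14}" + "".join(f"{v:>22.1f}" for v in row))

print_matrix(criteria, performance)
```

A real performance matrix would, per the guidelines, also record the variance of each estimate and the data source per cell, not only the point estimates.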






Scoring alternatives

The fourth step of MCDA is scoring alternatives, which aims to assess the stakeholders' preferences for changes in performance within each of the chosen criteria. The ISPOR guidelines classify scoring methods as compositional or decompositional.

Compositional methods are based on eliciting stakeholders' preferences for each criterion separately from weighting. The use of compositional methods is recommended by all identified guidelines. The most commonly listed scoring approaches cover the "bisection" and "difference" methods as well as direct rating with scales, e.g. the visual analogue scale (VAS) or the Simple Multi Attribute Rating Technique (SMART) (see Table 3). Additionally, pairwise comparison methods like AHP (analytical hierarchy process) and MACBETH (Measuring Attractiveness by Categorical Based Evaluation Technique) are mentioned by the ISPOR Task Force.

Only the ISPOR guidelines also recommend the use of decompositional methods for scoring, which involve assessing stakeholders' preferences for the overall value of alternatives, with scores and weights elicited jointly. These methods are described in the section on weighting. According to the ISPOR guidelines, the selection of an appropriate scoring method will depend on whether scoring functions or direct rating are required, as well as on the level of precision and the cognitive burden posed to stakeholders. The validation of the scoring process is also recommended by ISPOR and consists of eliciting stakeholders' reasons for their preferences and a consistency check. [1,3,5,11,13,14]

Table 3. Description of the compositional scoring and/or weighting methods.



Examples of implementation

Used both for scoring and weighting

“bisection” and “difference” methods

“Bisection” and “difference” methods are types of indirect assessment methods. Scoring functions are based on tracing the shape of the “value function” that relates alternatives’ performance on the criterion to their value to decision makers.

In the “bisection method”, the responder is asked to identify the value point on the attribute scale which is halfway between the two endpoints on the scale.

In the "difference method" the decision maker considers different increments on the objectively measured scale and relates these to differences in value. The given ratings enable a value function to be defined. [1, 13, 18]

An example of the use of indirect rating in health care is the bisection method described by Tervonen et al., applied in the assessment of statins in primary prevention. The assessed outcome is the risk of stroke, ranging between 6% and 2%. The responder is asked for the value of x such that a decrease from 6% to x% is equally important as a decrease from x% to 2%. After repeated questions, the elicited midpoints between the two endpoints allow the value function to be traced. In the example, if the responder gives x equal to 4, the resulting partial value function for stroke is likely to be linear. [19]
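The bisection elicitation described above yields anchor points from which a piecewise-linear partial value function can be traced. A small sketch reusing the stroke-risk numbers of the statin example (the interpolation code itself is illustrative, not taken from Tervonen et al.):

```python
def partial_value(risk, anchors):
    """Piecewise-linear interpolation between elicited (risk, value)
    anchor points; anchors must be sorted by risk."""
    xs = [a[0] for a in anchors]
    vs = [a[1] for a in anchors]
    if risk <= xs[0]:
        return vs[0]
    if risk >= xs[-1]:
        return vs[-1]
    for (x0, v0), (x1, v1) in zip(anchors, anchors[1:]):
        if x0 <= risk <= x1:
            return v0 + (risk - x0) * (v1 - v0) / (x1 - x0)

# Stroke risk from 6% (worst, value 0) to 2% (best, value 1); the
# responder places the halfway-value point at 4%.
anchors = [(2.0, 1.0), (4.0, 0.5), (6.0, 0.0)]
print(partial_value(3.0, anchors))  # 0.75 -- the elicited function is linear
```

Additional bisection questions (e.g. the point halfway in value between 4% and 2%) would add further anchors and let a non-linear shape emerge.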

Direct rating

Scales are used for rating either importance of alternatives’ performance on each criterion (scoring) or between different criteria (weighting). [1, 13]

The example of direct rating is visual analogue scale (VAS). This method is based on the psychometric theory. [1, 13]

Another example of applying scales in both weighting and scoring was described by Goetghebeur et al. in pilot study of adapting MCDA in health technology assessment. Weights of criteria were elicited on a 5-point scale with 1 representing the least and 5 the most important criteria. Scoring was based on a 4-point scale for each criterion. [20]

Points allocation

The Simple Multi Attribute Rating Technique (SMART), for example, is based on a linear additive model. Ratings of the alternatives' performance on a criterion are allocated directly on natural scales appropriate for the criterion. For weighting among criteria, the scales must be converted to a common one. [18]

An example of incorporating the points allocation method in MCDA is described by Sussex et al. in a study using MCDA to value medicines for rare diseases. Responders were first asked to allocate the criteria to one of three categories of "high", "medium" or "low" importance for defining the value of an alternative's performance. Secondly, responders discussed the allocation of 100 points of weight across the eight predefined criteria. Finally, after establishing the criteria weights, the responders rated chosen orphan drugs for their performance on the eight criteria on a rating scale from 1 to 7 (worst to best score, respectively). [21]

A study performed by Iskrov et al., also regarding the assessment of orphan medicines in Bulgaria, used a two-step 100-point weight allocation technique. The first step was to distribute points among three main categories of criteria, followed by point allocation among the particular criteria within each category. A similar technique based on a 100-point scale was applied for evaluating the performance of the alternative technologies on each criterion. [22]
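The arithmetic of such a two-step 100-point allocation can be sketched as follows. The categories, criteria and point values below are invented examples, not Iskrov et al.'s actual criteria:

```python
# Two-step 100-point weight allocation: points are first spread across
# categories, then across criteria within each category; the final weight
# is the product of the two shares. All names and numbers are invented.
category_points = {"Health outcomes": 50, "Disease burden": 30, "Economics": 20}
criterion_points = {
    "Health outcomes": {"Efficacy": 70, "Safety": 30},
    "Disease burden": {"Severity": 60, "Unmet need": 40},
    "Economics": {"Budget impact": 100},
}

weights = {}
for category, crits in criterion_points.items():
    cat_share = category_points[category] / 100
    for criterion, points in crits.items():
        weights[criterion] = cat_share * points / 100

print(weights)  # e.g. Efficacy = 0.50 * 0.70 = 0.35
assert abs(sum(weights.values()) - 1.0) < 1e-9  # final weights sum to 1
```

Because each step distributes exactly 100 points, the products automatically normalize to a weight set summing to one.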

Analytical Hierarchy Process (AHP)

The Analytical Hierarchy Process (AHP) is a series of comparisons among the elements of a decision. It can be used to elicit how important the criteria are in a given decision problem as well as how well the compared options fulfill the criteria. Either the criteria or the options' performance are compared in pairs. The comparison is conducted on a point scale (usually 1-9) representing the intensity of performance on each criterion or the importance among criteria. The scale for comparison can be graphic, verbal or numeric. Number 1 on the scale corresponds to the situation when the two compared elements (options or criteria) are equal, followed by 3, 5, 7 and 9, corresponding to moderately, strongly, very strongly or extremely more important. The conducted comparisons are entered into a matrix. AHP can be used both for eliciting the relative weights of the chosen criteria and for generating rankings of the compared alternatives. [23, 24, 25]

AHP was used several times by Dolan et al. in preference-assessment studies among different stakeholders, mainly patients and physicians. [26, 27, 28, 29, 30] One example was the assessment of patients' priorities regarding screening procedures in colorectal cancer. Separate pairwise comparisons were conducted for every possible pair of criteria on a 1-9 scale. [31]

Van Til et al. also used AHP to elicit subjective opinions among physicians and quantitatively compare treatments in patients with acquired equinovarus deformity in the context of limited clinical data. AHP proved to be a suitable method for this decision problem. [32]

A recent MCDA conducted with the AHP approach was published by Kuruoglu et al. on weighting patients' criteria for choosing a family physician. [33]

Hummel et al. utilized AHP in two studies. The first aimed to rank outcome measures in major depression among three groups of stakeholders (patients, psychiatrists and psychotherapists) as well as to assess preferences for health care alternatives. [34] The second publication estimated patients' preferences on screening procedures in colorectal cancer. [35]

AHP was also used to support the assessment and choice of medical devices by hospitals, e.g. magnetic resonance imaging (MRI) in the Czech Republic. [36]
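As a rough illustration of the matrix algebra behind AHP, the relative weights of three hypothetical criteria can be approximated as the principal eigenvector of the pairwise comparison matrix, here computed by power iteration. The 1-9 judgments below are invented, not taken from any of the cited studies:

```python
# Illustrative AHP pairwise comparison matrix for three invented criteria.
# Entry [i][j] says how much more important criterion i is than criterion j
# on the 1-9 scale; the lower triangle holds the reciprocals.
comparisons = [
    [1.0, 3.0, 5.0],   # Efficacy vs (Efficacy, Safety, Cost)
    [1/3, 1.0, 3.0],   # Safety
    [1/5, 1/3, 1.0],   # Cost
]

def ahp_weights(matrix, iterations=100):
    """Approximate the principal eigenvector of a positive matrix by
    power iteration, normalised so the weights sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

weights = ahp_weights(comparisons)
print([round(x, 3) for x in weights])  # Efficacy receives the largest weight
```

A full AHP analysis would also compute the consistency ratio of the judgments, which this sketch omits.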


Measuring Attractiveness by Categorical Based Evaluation Technique (MACBETH)

MACBETH is a software-supported scoring method based on the additive value model. Questions compare two options at a time (on each criterion or among criteria), asking the responder only for a qualitative judgement of the preference difference using seven semantic categories (no, very weak, weak, moderate, strong, very strong and extreme difference of attractiveness). This leads to the generation of a numerical scale. [37]

MACBETH was used to develop and conduct an audit model of preventive maintenance implemented in a Spanish hospital. Finally, an additive value model was developed incorporating the criteria weights and scoring values. [37]

A similar approach was proposed by Carnero et al., where MACBETH was used to identify the most suitable maintenance policies for medical equipment in health care providers, e.g. dialysis systems. [38]

Used only for weighting

Swing weighting

Swing weighting is used to determine trade-off weights by comparing the overall value gained from a worst-to-best change in one criterion against the corresponding change in other criteria. In other words, the criterion with the largest worst-to-best performance change that matters (i.e. differentiates the compared options) is identified first. It is then used as a reference to estimate the relative weights of the other criteria. [9, 39]

The swing weighting method was used by Felli et al. in a Benefit-Risk Assessment Model applied to assess the benefits and risks of chosen idiopathic short stature (ISS) treatment options. Weights were elicited for criteria such as safety, tolerability, efficacy, life effects and convenience. [39]
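A minimal sketch of the swing weighting arithmetic, with invented criteria and ratings: the reference swing receives 100 points, the remaining swings are rated against it, and the points are then normalized into weights:

```python
# Swing weighting sketch: "Efficacy" has the worst-to-best swing that
# matters most, so it is the 100-point reference; the other swings are
# rated relative to it. All names and ratings are invented.
swing_points = {
    "Efficacy": 100,
    "Safety": 60,       # the safety swing matters 60% as much
    "Convenience": 20,
}

total = sum(swing_points.values())
weights = {criterion: points / total for criterion, points in swing_points.items()}
print(weights)  # Efficacy ~0.556, Safety ~0.333, Convenience ~0.111
```

The key property of swing weighting is that the ratings refer to concrete worst-to-best performance ranges, so the resulting weights are anchored to the decision problem rather than to abstract "importance".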

Weighting criteria

The aim of the fifth step of MCDA is to capture the preferences which stakeholders have between criteria. The recommended weighting methods are similar to the scoring methods described above. The most commonly recommended compositional methods are direct methods such as scales and points allocation. Additionally, pairwise comparison (AHP, analytical hierarchy process) and swing weighting are also listed (see Table 3). The ISPOR guidelines also mention the rank-order method SMARTER (SMART Exploiting Ranks).

The increasing role of decompositional methods in both scoring and weighting was underlined in the ISPOR guidelines, but examples of these methods were also mentioned in all of the identified publications. Among the decompositional methods, the Discrete Choice Experiment (DCE) and best-worst scaling were reported as examples of conjoint analysis. They differ in the way the task is presented and the question asked of the respondent: either to choose the preferred scenario or, additionally, to indicate what they find best and worst in a scenario; the comparison is shown in Table 4. The ISPOR guidelines also refer to examples of using the Potentially All Pairwise RanKings of all possible Alternatives (PAPRIKA) method in MCDA in health care. A description of the decompositional scoring and weighting methods is presented in Table 4. These methods have been widely explored and compared; thus a more detailed overview is beyond the scope of our review (for more information please refer to Whitty et al. [41] or the publications cited in Table 4). [1,3,5,11,13,14,40]

Table 4. Description of the decompositional scoring and weighting methods.



Examples of implementation

Discrete Choice Experiment (DCE)

Discrete choice experiments constitute the majority of conjoint analysis studies and are based on random utility theory. The method asks respondents to evaluate and choose among sets of specific combinations of attributes and levels. The preferences for alternatives are elicited from people's intentions as expressed in choice questions regarding hypothetical scenarios. A traditional discrete choice experiment asks responders which of the offered scenarios they would prefer, which enables a ranking of responders' preferences. [42, 43]

There are multiple examples of DCE implementation in health care. Reviews of the published literature conducted by de Bekker-Grob et al. [44], Clark et al. [45] and Salloum et al. [46] identified various studies aiming to elicit preferences of different stakeholders’ groups with discrete choice experiments.

A few of them focused on prioritizing the funding of different health care interventions in Nepal [47], Norway [48], the United Kingdom [49] or Brazil, Cuba and Uganda [50].

Examples of using DCE as an elicitation method in MCDA studies:

- Youngkong et al. conducted an MCDA to prioritize AIDS control interventions in Thailand. Criteria were identified and weighted in a discrete choice experiment by different stakeholders. [51]

- Broekhuizen et al. developed an MCDA to rank six HIV infection treatments, combining clinical outcomes with patient preference weights. Patient preferences on the criteria were collected among African American patients using a DCE. [52]

Best-worst scaling (BWS)

Best-worst scaling (also known as maximum-difference scaling) is a type of discrete choice experiment in which the responder selects both the best and the worst option in a displayed set of options (all possible pairs). The rank reflects the maximum difference in preference or importance. It is also perceived as an easier method for responders than the traditional DCE. The literature divides BWS into three variants: the object case, the profile case and the multi-profile case. [53, 54, 55]

A systematic review of examples of using the best-worst scaling method to elicit preferences in health care was conducted by Cheung et al. in 2016. As a result, 62 studies were identified, most of them performed in the two preceding years. The studies addressed various decision problems, including valuing health outcomes, eliciting trade-offs between health outcomes and patient- or consumer-oriented outcomes, different stakeholders' preferences and priority setting. [56]
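The simplest analysis of an object-case BWS exercise counts, for each item, how often it was chosen as best minus how often as worst. The choice tasks and responses below are invented for illustration:

```python
from collections import Counter

# Object-case best-worst scaling, simple count analysis: the
# best-minus-worst count per item approximates its relative importance.
# Each tuple is one choice task: (chosen best, chosen worst). Invented data.
responses = [
    ("Efficacy", "Cost"),
    ("Efficacy", "Convenience"),
    ("Safety", "Cost"),
    ("Safety", "Convenience"),
    ("Efficacy", "Safety"),
]

best = Counter(b for b, _ in responses)
worst = Counter(w for _, w in responses)
items = set(best) | set(worst)
bw_score = {item: best[item] - worst[item] for item in items}
ranking = sorted(bw_score, key=bw_score.get, reverse=True)
print(bw_score)
print(ranking)  # Efficacy ranks first in this invented data set
```

More sophisticated BWS analyses fit choice models (e.g. conditional logit) to the same data; the count method is only a quick first approximation.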

Potentially All Pairwise RanKings of all possible Alternatives (PAPRIKA)

PAPRIKA is a patented method for eliciting preferences which involves decision makers through the dedicated software "1000Minds". The main assumption of the method is asking questions based on choosing between two hypothetical alternatives defined on only two criteria/attributes at a time. This involves a trade-off between different combinations of criteria. Based on the answers, the method adapts and chooses the next question to ask; it may therefore be recognized as a type of adaptive conjoint analysis. [57, 58]


PAPRIKA was used both for eliciting patients’ preferences as well as health technology prioritization.

One of the few examples of implementing PAPRIKA in health care is a study performed by Golan et al. aiming to prioritize the funding of health technologies in Israel. The framework focused on four main variables: incremental benefits and costs, quality of evidence, and legal or strategic factors. [59]

PAPRIKA was also used to develop a tool for systemic sclerosis classification, with the criteria weighted by clinical experts [60], and to develop the Glucocorticoid Toxicity Index (GTI). [61]

Martelli et al. used the PAPRIKA method to develop a tool for prioritizing medical devices for funding in French university hospitals. [62]

Apart from considering the cognitive burden on stakeholders, the ISPOR guidelines also recommend taking into account the level of precision and the theoretical basis when selecting weighting methods. [1,13] The theoretical basis of the chosen weighting method must be coherent with the objective of the MCDA. The value-measurement methods include the linear additive methods, multi-attribute value theory (MAVT) and multi-attribute utility theory (MAUT). They are the theoretical basis for "choice-based" and swing weighting, and they aim to address ranking or choice problems by providing overall value scores under the assumption that preferences are complete and transitive. [5] Angelis et al. recommend these methods due to their comprehensiveness, robustness and capability to reduce biases.

Angelis et al. also discuss the context of weighting in MCDA implemented specifically in health care, which could require the formation of criteria and weights after the choice of alternatives (as in MAVT) rather than ex ante, as in direct rating methods. The relative preferences can depend on the alternatives' performance in the specific context of the decision problem, e.g. the same clinical outcome in two different diseases. [5]

As for the previous steps, validation of the weighting process is suggested by ISPOR, to make sure that the stakeholders' understanding of the elicitation process is coherent with their responses.

Calculating aggregate scores/Aggregation

The aggregation step aims to select an appropriate function for combining scores and weights into a "total value" coherent with stakeholders' preferences. All of the identified publications discuss the application of additive and multiplicative models (see Table 5). Additive models are the most commonly used in MCDA for health care decision making. They are based on the weighted-sum methodology: the scores and weights are multiplied and summed in a weighted-average manner. Additive models have the advantage of being easy to communicate to decision makers. On the other hand, the publications underline that they can be applied only when preferential independence is assured, meaning that preferences can be established by comparing the values of one attribute at a time. If preferential independence does not hold, multiplicative functions are recommended. Other methods suggested by Muhlbacher et al. for cases where the weighted-sum approach is inapplicable are:

- the Choquet integral (a non-additive model),

- the ordered weighted average (OWA),

- the weighted OWA (WOWA).

Multiplicative models are implemented less frequently, and ISPOR suggests considering the pragmatic simplification of using simpler additive models when the interactions between criteria are limited. These aggregation methods are not applicable to AHP, where the results are matrices of pairwise comparisons analyzed using matrix algebra. [1,3,5,11,13,14]
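A minimal additive (weighted-sum) aggregation can be illustrated as below. All criteria names, weights and performance scores are assumed for illustration only, not taken from any of the reviewed guidelines:

```python
# Hedged sketch of additive (weighted-sum) aggregation.
# Weights sum to 1; performance scores use an assumed 0-100 scale.
weights = {"efficacy": 0.5, "safety": 0.35, "cost": 0.15}

alternatives = {  # hypothetical performance scores per criterion
    "drug_A": {"efficacy": 80, "safety": 60, "cost": 40},
    "drug_B": {"efficacy": 65, "safety": 90, "cost": 70},
}

def total_value(scores, weights):
    """Weighted sum: multiply each criterion score by its weight and sum."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in alternatives.items():
    print(name, total_value(scores, weights))
```

Under these assumed inputs the weighted sums are 67.0 for drug_A and 74.5 for drug_B, so the safety-weighted profile of drug_B dominates; the point of the sketch is only that the aggregation is a transparent, easily communicated linear combination, which is why the guidelines favour it when preferential independence holds.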

Table 5. Comparison of the identified publications regarding description of the "calculating aggregate scores" step.

Publications compared: ISPOR guidelines [1,13], Angelis et al. [5], Muhlbacher et al. [3], Garcia-Hernandez et al. [11], Diaby et al. [14].

Aspects compared: aggregation formula (additive model/function; multiplicative model/function) and validation and reporting of the aggregation results.

Managing the uncertainty

Dealing with uncertainty is one of the final steps of MCDA. According to all of the identified guidelines, an uncertainty/sensitivity analysis is the recommended way to determine the robustness of the MCDA results. The ISPOR guidelines and Muhlbacher et al. describe the main types of uncertainty based mainly on the classification of Briggs et al. [63]: stochastic uncertainty, parameter uncertainty, structural uncertainty, heterogeneity and quality of evidence.

Most of the identified guidelines recommend conducting a deterministic sensitivity analysis. It is also the most frequently used type of sensitivity analysis in MCDAs already published in health care [4]. The deterministic approach is most appropriate when performance scores and criteria weights are altered one value at a time. Probabilistic sensitivity analysis should be considered when uncertainty in several parameters is to be analyzed simultaneously. Apart from the above, scenario analysis is also mentioned in the guidelines. Another approach to dealing with uncertainty, suggested by the ISPOR guidelines, is including a "confidence" criterion in the model as a negative score related to the risk of uncertainty. Heterogeneity in preferences can be analyzed by using weights and scores obtained from different stakeholder groups in the MCDA model. The results of the uncertainty analysis should be reported and justified. [1,3,5,11,13,14]
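A one-way deterministic sensitivity analysis of the kind recommended above can be sketched as follows. All weights and performance scores are hypothetical; the example varies a single criterion weight over a range (renormalizing the remaining weights) and checks whether the ranking of two assumed alternatives changes:

```python
# Hedged sketch of one-way deterministic sensitivity analysis on a
# criterion weight, using entirely hypothetical inputs.

def total_value(scores, weights):
    """Additive weighted-sum aggregation."""
    return sum(weights[c] * scores[c] for c in weights)

base_weights = {"efficacy": 0.5, "safety": 0.35, "cost": 0.15}
alternatives = {
    "drug_A": {"efficacy": 80, "safety": 60, "cost": 40},
    "drug_B": {"efficacy": 65, "safety": 90, "cost": 70},
}

def reweight(base, criterion, new_w):
    """Fix one weight at new_w and rescale the others so all sum to 1."""
    rest = {c: w for c, w in base.items() if c != criterion}
    scale = (1 - new_w) / sum(rest.values())
    out = {c: w * scale for c, w in rest.items()}
    out[criterion] = new_w
    return out

# Vary the efficacy weight and report the resulting ranking.
for w in [0.3, 0.5, 0.7, 0.9]:
    ws = reweight(base_weights, "efficacy", w)
    ranking = sorted(alternatives,
                     key=lambda a: -total_value(alternatives[a], ws))
    print(f"efficacy weight {w}: ranking {ranking}")
```

With these assumed inputs the ranking flips as the efficacy weight grows (drug_B leads at low efficacy weights, drug_A at high ones), which is exactly the kind of threshold finding that a deterministic sensitivity analysis is meant to surface and report.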

Reporting and interpreting the results of MCDA

All of the steps described above should be performed to ensure the reliability of an MCDA that can support decision making, but it is also underlined that all methods and findings should be reported properly and transparently.

The ISPOR guidelines propose a checklist of the stages that should be reported, which is in line with the MCDA steps described in this review. As MCDA should support decision makers, the results must be discussed in the context of the decision problem, for example by providing a ranking of the alternatives or a value measure (including "value for money") for each of them. A clear description of the methods should also ease interpretation. Some of the guidelines (ISPOR, Garcia-Hernandez et al.) propose presenting the results in graphical or tabular form. [1,3,5,11,13,14]


All identified publications, whether guidelines or reviews, divide the MCDA process into main steps which should be undertaken to ensure the validity of the results.

The publications describe various methods for conducting each step, so the aim of this review was to identify the most widely recommended ones. All the steps are described above, but the most crucial aspects deserve discussion beyond the specific methods. Every MCDA in the health care area should be planned around a strictly defined decision problem. A thorough analysis of the therapeutic area, unmet needs and the clinical context of the chosen problem ensures that the most important issues are covered by the analysis. First, it supports identifying the most suitable stakeholders whose preferences among alternatives should be elicited, capturing the aspects crucial for decision makers. Second, a good understanding of the clinical aspects of the problem (especially when ranking clinical alternatives) enables identification of the most suitable criteria to analyse as well as the best scoring system. Another critical step of conducting MCDA is the phrasing of the questions, i.e. choosing the right method of scoring and weighting. All recommended methods are described in this publication. Regarding scoring and weighting methods, the publications are consistent on the appropriateness of compositional methods, but only the ISPOR guidelines also consider decompositional methods for scoring. Uncertainty analysis was considered an important step of MCDA and a tool to show how credible the results are and how they should be interpreted; the deterministic type of sensitivity analysis is the most recommended one. Notably, only the ISPOR guidelines discuss the importance of appropriately validating and reporting the results and conclusions of each step undertaken in the analysis.


Although MCDA is increasingly discussed and used in health care decision making in various contexts, there are still few publications on guidelines and best practice for conducting good-quality research. Only methodologically sound studies can be valuable and effectively support decision making in health care, whether at the therapeutic or coverage level.

In light of the insufficient data on good practices and shared experience in conducting MCDA in the health care area, further research is still needed to work out the best MCDA methodology for health care.

Both authors declare no relevant conflict of interest.

  1. Thokala P, Devlin, N, Marsh K, et al. Multiple Criteria Decision Analysis for Health Care Decision Making—An Introduction: Report 1 of the ISPOR MCDA Emerging Good Practices Task Force. Value Health 2016;19:1-13.
  2. Broekhuizen H, Groothuis-Oudshoorn CGM, Hauber AB, Jansen JP, IJzerman MJ. Estimating the value of medical treatments to patients using probabilistic multi criteria decision analysis. BMC Medical Informatics and Decision Making. 2015;15:102.
  3. Muhlbacher AC, Kaczynski A, Making Good Decisions in Healthcare with Multi-Criteria Decision Analysis: The Use, Current Research and Future Development of MCDA, Appl Health Econ Health Policy 2016; 14:29–40
  4. Marsh K, Lanitis T, Neasham D, Orfanos P, Caro J, Assessing the Value of Healthcare Interventions Using Multi-Criteria Decision Analysis: A Review of the Literature. Pharmacoeconomics 2014; 32:345–65
  5. Angelis A, Kanavos P, Value-Based Assessment of New Medical Technologies: Towards a Robust Methodological Framework for the Application of Multiple Criteria Decision Analysis in the Context of Health Technology Assessment, PharmacoEconomics,2016; 34:435–446
  6. Schey C, Krabbe PFM, Postma MJ, Connolly MP. Multi-criteria decision analysis (MCDA): testing a proposed MCDA framework for orphan drugs. Orphanet Journal of Rare Diseases. 2017;12:10
  7. Hughes-Wilson W, et al. Paying for the orphan drug system: break or bend? Is it time for a new evaluation system for payers in Europe to take account of new rare disease treatments? Orphanet J Rare Dis. 2012;7:74
  8. Dolan JG, Boohaker E, Allison J, Imperiale TF. Patients’ preferences and priorities regarding colorectal cancer screening. Med Decis Making. 2013; 33:59–70
  9. Mussen F, Salek S, Walker S, A quantitative approach to benefit-risk assessment of medicines – part 1: The development of a new model using multi-criteria decision analysis, Pharmacoepidemiology and Drug Safety. 2007; 16: S2–S15
  10. European Medicines Agency. Benefit-risk methodology project. Work package 4 report: benefit-risk tools and processes. EMA/297405/2012 [cited November 24th 2018]. Available from:
  11. Garcia-Hernandez A, A Note on the Validity and Reliability of Multi-Criteria Decision Analysis for the Benefit–Risk Assessment of Medicines, Drug Saf.2015; 38:1049–1057
  12. Evaluation Instrument of Investment Motions in Health Care [cited November 24th 2018]. Available from:
  13. Marsh K, IJzerman M, Thokala P, et al. Multiple Criteria Decision Analysis for Health Care Decision Making—Emerging Good Practices: Report 2 of the ISPOR MCDA Emerging Good Practices Task Force. Value Health 2016;19:125-137.
  14. Diaby V, Goeree R. How to use multi-criteria decision analysis methods for reimbursement decision-making in healthcare: a step-by-step guide. Expert Review of Pharmacoeconomics & Outcomes Research 2014;14(1):81-99
  15. Belton V, Stewart TJ. Multiple Criteria Decision Analysis: An Integrated Approach. Kluwer Academic Publishers, MA, 2002
  16. Checkland P, Poulter J. Learning for Action: A Short Definitive Account of Soft Systems Methodology, and its Use Practitioners, Teachers and Students. John Wiley & Sons, Hoboken, NJ, USA,2006
  17. Neves LP, Dias LC, Antunes CH, Martins AG. Structuring an MCDA model using SSM: a case study in energy efficiency. Eur. J. Oper. Res.2009; 199(3), 834–845
  18. Barfod, M. B., Leleur, S. (Eds.) Multi-criteria decision analysis for use in transport decision making. (2 ed.) DTU Lyngby: Technical University of Denmark, Transport. 2014
  19. Tervonen T, Naci H, van Valkenhoef G, Ades AE, Angelis A, Hillege HL, Postmus D, Applying Multiple Criteria Decision Analysis to Comparative Benefit-Risk Assessment: Choosing among Statins in Primary Prevention. Med Decis Making. 2015 Oct;35(7):859-71
  20. Goetghebeur MM, Wagner M, Khoury H, Levitt RJ, Erickson LJ, Rindress D, Bridging health technology assessment (HTA) and efficient health care decision making with multicriteria decision analysis (MCDA): applying the EVIDEM framework to medicines appraisal, Med Decis Making. 2012 Mar-Apr;32(2):376-88
  21. Sussex J, Rollet P, Garau M, Schmitt C, Kent A, Hutchings A, A Pilot Study of Multicriteria Decision Analysis for Valuing Orphan Medicines, Value Health. 2013 Dec;16(8):1163-9
  22. Iskrov G, Miteva-Katrandzhieva T, Stefanov R. Multi-Criteria Decision Analysis for Assessment and Appraisal of Orphan Drugs. Frontiers in Public Health. 2016; 4:214. doi:10.3389/fpubh.2016.00214.
  23. Saaty, T.L., Decision making with the analytic hierarchy process, Int. J. Services Sciences, Vol. 1, No. 1, 2008
  24. Dolan JG, Involving patients in decisions regarding preventive health interventions using the analytic hierarchy process, Health Expect. 2000 Mar; 3(1): 37–45
  25. Hummel JM, Bridges JF, IJzerman MJ, Group decision making with the analytic hierarchy process in benefit-risk assessment: a tutorial, Patient. 2014;7(2):129-40
  26. Dolan JG, Bordley DR. Isoniazid prophylaxis: the importance of individual values. Medical Decision Making, 1994; 14: 1–8.
  27. Dolan JG, Bordley DR. Involving patients in complex decisions about their care: an approach using the analytic hierarchy process. Journal of General Internal Medicine, 1993; 8: 204–209.
  28. Dolan JG, Bordley DR, Miller H. Diagnostic strategies in the management of acute upper gastrointestinal bleeding: patient and physician preferences. Journal of General Internal Medicine, 1993; 8: 525– 529.
  29. Dolan JG. Can decision analysis adequately represent clinical problems? Journal of Clinical Epidemiology, 1990; 43: 277–284.
  30. Dolan JG. Medical decision making using the analytic hierarchy process: choice of initial antimicrobial therapy for acute pyelonephritis. Medical Decision Making, 1989; 9: 51–56.
  31. Dolan JG, Patient priorities in colorectal cancer screening decisions, Health Expect. 2005 Dec;8(4):334-44
  32. van Til JA, Renzenbrink GJ, Dolan JG, Ijzerman MJ, The use of the analytic hierarchy process to aid decision making in acquired equinovarus deformity, Arch Phys Med Rehabil. 2008 Mar;89(3):457-62
  33. Kuruoglu E, Guldal D, Mevsim V, Gunvar T. Which family physician should I choose? The analytic hierarchy process approach for ranking of criteria in the selection of a family physician. BMC Medical Informatics and Decision Making. 2015;15:63.
  34. Hummel MJ, Volz F, van Manen JG, Danner M, Dintsios CM, Ijzerman MJ, Gerber A., Using the analytic hierarchy process to elicit patient preferences: prioritizing multiple outcome measures of antidepressant drug treatment., Patient. 2012;5(4):225-37
  35. Hummel JM, Steuten LG, Groothuis-Oudshoorn CJ, Mulder N, Ijzerman MJ, Preferences for colorectal cancer screening techniques and intention to attend: a multi-criteria decision analysis, Appl Health Econ Health Policy. 2013 Oct;11(5):499-507
  36. Ivlev I, Vacek J, Kneppo P, Multi- criteria decision analysis for supporting the selection of medical devices under uncertainty, European Journal of Operational Research 247(2015)216–228
  37. Bana e Costa C.A.; Vansnick J-C., The MACBETH approach: Basic ideas, software, and an application Advances in Decision Analysis, N. Meskens, M. Roubens (eds.), 1999, Kluwer Academic Publishers, Book Series: Mathematical Modelling: Theory and Applications, vol. 4, pp. 131-157 [cited November 24th 2018] Available from:
  38. Carnero MC, Gómez A, A multicriteria decision making approach applied to improving maintenance policies in healthcare organizations, BMC Med Inform Decis Mak. 2016 Apr 23;16:47
  39. Felli JC, Noel RA, Cavazzoni PA. A multiattribute model for evaluating the benefit-risk profiles of treatment alternatives. Med Decis Making 2009;29:104–15.
  40. Barron FH, Barrett BE. The efficacy of SMARTER — Simple Multi-Attribute Rating Technique Extended to Ranking. Acta Psychologica 1996;93(1-3):23-36
  41. Whitty, J.A., Oliveira Gonçalves, A.S. A Systematic Review Comparing the Acceptability, Validity and Concordance of Discrete Choice Experiments and Best–Worst Scaling for Eliciting Preferences in Healthcare, Patient (2018) 11: 301
  42. Brett Hauber, A., Fairchild, A.O. & Reed Johnson, F., Quantifying Benefit–Risk Preferences for Medical Interventions: An Overview of a Growing Empirical Literature, Appl Health Econ Health Policy (2013) 11: 319.
  43. Reed Johnson F, Lancsar E, Marshall D, Kilambi V, Mühlbacher A, Regier DA, Bresnahan BW, Kanninen B, Bridges JF, Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force, Value Health. 2013 Jan-Feb;16(1):3-13
  44. de Bekker-Grob EW, Ryan M, Gerard K, Discrete choice experiments in health economics: a review of the literature.Health Econ. 2012 Feb; 21(2):145-72
  45. Clark MD, Determann D, Petrou S, Moro D, de Bekker-Grob EW, Discrete choice experiments in health economics: a review of the literature, Pharmacoeconomics. 2014 Sep;32(9):883-902
  46. Salloum RG, Shenkman EA, Louviere JJ, Chambers DA, Application of discrete choice experiments to enhance stakeholder engagement as a strategy for advancing implementation: a systematic review, Implement Sci. 2017 Nov 23;12(1):140
  47. Baltussen R, ten Asbroek AH, Koolman X, Shrestha N, Bhattarai P, Niessen LW, Priority setting using multiple criteria: should a lung health programme be implemented in Nepal, Health Policy Plan. 2007 May;22(3):178-85
  48. Defechereux T, Paolucci F, Mirelman A, Youngkong S, Botten G, Hagen TP, Niessen LW, Health care priority setting in Norway a multicriteria decision analysis., BMC Health Serv Res. 2012 Feb 15;12:39
  49. Marsh K, Dolan P, Kempster J, Lugon M, Prioritizing investments in public health: a multi-criteria decision analysis, J Public Health (Oxf). 2013 Sep;35(3):460-6
  50. Mirelman A, Mentzakis E, Kinter E, Paolucci F, Fordham R, Ozawa S, Ferraz M, Baltussen R, Niessen LW, Decision-making criteria among national policymakers in five countries: a discrete choice experiment eliciting relative preferences for equity and efficiency, Value Health. 2012 May;15(3):534-9
  51. Youngkong S, Teerawattananon Y, Tantivess S, Baltussen R, Multi-criteria decision analysis for setting priorities on HIV/AIDS interventions in Thailand, Health Res Policy Syst. 2012 Feb 17;10:6
  52. Broekhuizen H, IJzerman MJ, Hauber AB, Groothuis-Oudshoorn CGM. Weighing Clinical Evidence Using Patient Preferences: An Application of Probabilistic Multi-Criteria Decision Analysis. Pharmacoeconomics. 2017;35(3):259-269.
  53. Marley A.A.J., Louviere J.J., Some probabilistic models of best, worst, and best–worst choices, Journal of Mathematical Psychology. 2005;49(6), 464-480
  54. Flynn TN, Louviere JJ, Peters TJ, Coast J. Best–worst scaling: What it can do for health care research and how to do it. Journal of Health Economics. 2007;26(1):171-189
  55. Muhlbacher AC, Zweifel P, Kaczynski A, et al. Experimental measurement of preferences in health care using best–worst scaling (BWS): theoretical and statistical issues. Health Econ Rev. 2016;6(1):1–12
  56. Cheung KL, Wijnen BF, Hollin IL, Janssen EM, Bridges JF, Evers SM, Hiligsmann M, Using Best-Worst Scaling to Investigate Preferences in Health Care, Pharmacoeconomics. 2016 Dec;34(12):1195-1209
  57. Hansen, P. and Ombler, F., A new method for scoring additive multi-attribute value models using pairwise rankings of alternatives. J. Multi-Crit. Decis. 2008; Anal., 15: 87–107.
  58. 1000minds website [cited November 24th 2018] Available from:
  59. Golan O, Hansen P, Which health technologies should be funded? A prioritization framework based explicitly on value for money, Israel Journal of Health Policy Research 2012, 1:44
  60. Johnson SR; Naden RP; Fransen J et al., Multicriteria decision analysis methods with 1000Minds for developing systemic sclerosis classification criteria, Journal of Clinical Epidemiology. 2014,  67 (6): 706–14
  61. Miloslavsky EM, Naden RP, Bijlsma JW et al., Development of a Glucocorticoid Toxicity Index (GTI) using multicriteria decision analysis, Ann Rheum Dis. 2017 Mar;76(3):543-546
  62. Martelli N, Hansen P, van den Brink H et al., Combining multi-criteria decision analysis and mini-health technology assessment: A funding decision-support tool for medical devices in a university hospital setting, J Biomed Inform. 2016 Feb;59:201-8
  63. Briggs AH, Weinstein MC, Fenwick EAL, et al. Model parameter estimation and uncertainty: a report of the ISPOR-SMDM Modelling Good Research Practices Task Force Working Group-6. Med Decis Making 2012;32:722–32

Journal of Health Policy & Outcomes Research (JHPOR) is a peer-reviewed, international scientific journal, covering health policy, pharmacoeconomics and outcomes research in Poland and worldwide. The journal is issued under the auspices of the Polish Society of Pharmacoeconomics.