Sometimes the iterative process of simplifying the consequence table through paired comparisons and deliberating about trade-offs leads to a preferred alternative. In other cases, participants still face a complex decision problem: multiple alternatives, multiple performance measures, and difficult trade-offs. Here, a formal process for eliciting preferences may be useful, and a number of tools and approaches are available.
One of the most transparent and technically defensible approaches is to develop a value model. This method, rooted in Multi-Attribute Utility Theory (MAUT), suggests that we:
- Weight the performance measures (using a reputable method such as swing weighting)
- Calculate normalized scores for each measure (unitless scores between 0 and 1)
- Calculate weighted scores for each alternative
- Calculate the value, or weighted performance score, of each alternative, where:
Value (weighted score) = w1X1 + w2X2 + w3X3 + …
- w1 is the importance weight for PM1, w2 for PM2, and so on
- X1 is the normalized performance score for PM1, X2 for PM2, and so on
The larger the weighted sum, the more the person who performed the weighting ‘prefers’ the alternative (at least in theory).
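The calculation above can be sketched in a few lines of code. The alternatives, measures, weights, and direction-of-preference choices below are invented purely for illustration; a real application would use elicited weights (e.g., from swing weighting) and agreed performance measures.

```python
# Minimal value-model sketch. All numbers are invented for illustration.
# Each performance measure is normalized to a 0-1 score, then combined
# with importance weights into a single weighted value per alternative.

# Raw consequence table: alternative -> raw score on each performance measure.
consequences = {
    "Alt A": {"Flood": 12.0, "Water Quality": 0.80, "Power": 450.0},
    "Alt B": {"Flood": 5.0,  "Water Quality": 0.60, "Power": 520.0},
    "Alt C": {"Flood": 9.0,  "Water Quality": 0.95, "Power": 380.0},
}

# Direction of preference (False = lower raw value is better, e.g. flood damage).
higher_is_better = {"Flood": False, "Water Quality": True, "Power": True}

# Importance weights (e.g., elicited via swing weighting), summing to 1.
weights = {"Flood": 0.2, "Water Quality": 0.5, "Power": 0.3}

def normalized(measure, value):
    """Linear 0-1 score over the range observed across alternatives."""
    vals = [c[measure] for c in consequences.values()]
    lo, hi = min(vals), max(vals)
    score = (value - lo) / (hi - lo)
    return score if higher_is_better[measure] else 1.0 - score

def weighted_value(alt):
    """Value = w1*X1 + w2*X2 + ... over the normalized scores."""
    return sum(weights[m] * normalized(m, x) for m, x in consequences[alt].items())

# Rank alternatives from highest to lowest weighted value.
for alt in sorted(consequences, key=weighted_value, reverse=True):
    print(f"{alt}: {weighted_value(alt):.3f}")
```

With these invented numbers, the alternative with the best normalized score on the most heavily weighted measure (Water Quality) comes out on top, even though it scores worst on Power.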
Various technical steps and caveats are omitted here. For example, we are ignoring the effects of uncertainty and risk attitudes, and making the simplifying assumption that preference changes linearly between the worst (0) and best (1) levels of each measure, which may not be true. Further, the specific methods used to elicit weights can have important effects on the outcomes.
Many decision analysis software tools offer help with crunching these numbers. A critically important question, though, is whose weights to use when doing the calculation.
Some options include:
- Use the weights of a single decision maker (is there really one person’s views that matter?)
- Average the weights of several people (is it helpful to quash the voices of outliers?)
- Negotiate the weights (this might work in a group of people with similar values but good luck in a multiparty group 😊)
- Enable each participant to assign their own weights, effectively building their own value model, and use the results to inform group deliberations.
Since weights are expressions of value judgments, think carefully about what makes sense in your context. Trying to land on a single set of weights, either by pushing for agreement or averaging, is often not helpful. It’s also not necessary, as people with very different values (who therefore assign different weights) can end up preferring the same alternative, but for different reasons!
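A small invented example of the last point: two participants who weight the measures very differently can still arrive at the same preferred alternative, each for their own reasons. All scores and weights below are made up for illustration.

```python
# Invented illustration: different weights, same preferred alternative.

scores = {  # normalized 0-1 scores per alternative and measure
    "Alt A": {"Fish": 0.9, "Power": 0.1, "Cost": 0.5},
    "Alt B": {"Fish": 0.8, "Power": 0.7, "Cost": 0.8},
    "Alt C": {"Fish": 0.1, "Power": 0.9, "Cost": 0.4},
}

# Two participants with very different value judgments (weights sum to 1).
stakeholder_weights = {
    "Conservationist": {"Fish": 0.7, "Power": 0.1, "Cost": 0.2},
    "Utility rep":     {"Fish": 0.1, "Power": 0.6, "Cost": 0.3},
}

def best_alternative(weights):
    """Return the alternative with the highest weighted value for these weights."""
    value = lambda alt: sum(weights[m] * x for m, x in scores[alt].items())
    return max(scores, key=value)

for name, w in stakeholder_weights.items():
    print(name, "prefers", best_alternative(w))
```

Here both participants prefer the balanced alternative: one because it performs well on Fish, the other because it performs well on Power. Their weights never needed to be reconciled.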
In multi-party discussions, some practitioners advocate the use of individual value modeling (in other words, individual weighting) and emphasize its use as a support, rather than a replacement, for deliberation. There are many ways this can provide insights that support collaborative decision making.
As one example, the figure below shows the weights assigned by one participant in a water use planning process in British Columbia, Canada.
The performance measures are shown across the bottom, with the weights assigned by participants on the vertical axis. The markers represent the weights for various measures as assigned by one particular participant (Stakeholder 1), and the vertical lines represent the range of weights assigned by all participants. This chart helped to pinpoint productive areas of dialogue. For example, it became clear that there was a high degree of disagreement about the importance assigned to the Flood, Water Quality and Power measures.

When people were invited to talk about the reasons for their assigned weights, it became clear that several people had misunderstood the Flood measure (thinking it represented a major dam breach rather than a modest periodic inundation of scattered facilities). This led to a revision of weights.

Discussions about water quality were equally productive. It was revealed that some participants believed there were significant human health risks associated with the increased turbidity, despite studies suggesting otherwise. These participants used the water for domestic supply; the cost of being wrong for them was high, and their confidence in the studies was low. This insight ultimately led to the prescription of a monitoring program to test the hypotheses on which the existing analysis was based.
In sum, this exploration of weights helped deliberations by diagnosing areas of agreement and difference and provided a focus for productive discussion. It exposed factual errors, value differences, risk tolerances and key uncertainties, giving participants useful insights that supported an eventual consensus decision.
Keeney, R. (2021). Practical value models. In W. Edwards, R. F. Miles Jr., & D. von Winterfeldt (Eds.), Advances in decision analysis: From foundations to applications (pp. 232-252). Cambridge University Press.
Clemen, R. T. (2004). Making hard decisions: An introduction to decision analysis (4th edn.). Duxbury, Belmont.
von Winterfeldt, D., & Edwards, W. (1993). Decision analysis and behavioral research. Cambridge University Press, Cambridge, UK.
Gregory, R., Failing, L., Harstone, M., Long, G., McDaniels, T., & Ohlson, D. (2012). Structured decision making: A practical guide to environmental management choices. John Wiley & Sons.