In an ideal world, uncertainties are reduced quickly and efficiently with research, monitoring, or modeling, and information is provided in time to aid decision making. However, it is not always possible to conduct new research to address key uncertainties, and it is seldom possible to eliminate them even with new research. In such cases, decision analysis suggests the elicitation of subjective technical judgments. There is well-established literature on the methods that are required for eliciting defensible and transparent judgments in the face of significant uncertainty and on the opportunities and limitations for using such judgments as aids to improved management (Morgan and Henrion, 1990; Keeney and von Winterfeldt, 1991).
The steps associated with best practice in structured expert judgment include:
- Identify multiple experts based on an explicit selection process and criteria, and include experts from different domains and disciplines of knowledge (e.g., science versus local knowledge).
- Clearly define the question for which a judgment will be elicited, making sure that the question separates (as much as possible) technical judgments from value judgments.
- Decompose complex judgments into simpler ones. This will improve both the quality of the judgment and, to the extent it helps to separate a specific technical judgment from the management outcomes of that judgment, its objectivity.
- Document the expert’s conceptual model. Not only will this improve the quality of the judgment and its communication to others, but it will also create a clear and traceable account that facilitates future peer review.
- Use structured elicitation methods to guard against common cognitive biases that have been shown to consistently reduce the quality of judgments (Morgan and Henrion, 1990).
- Express judgments quantitatively where possible. The use and interpretation of qualitative descriptions of magnitude, probability or frequency vary tremendously among individuals. This seems likely to be amplified in a cross-cultural setting.
- Characterize uncertainty in the judgment explicitly, using quantitative expressions of uncertainty wherever possible to avoid ambiguity.
- Document conditionalizing assumptions. Differences in judgments are often explained by differences in the underlying assumptions or conditions for which a judgment is valid.
- Explore competing judgments collaboratively, through workshops involving local and scientific experts, with an emphasis on collaborative learning.
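Once judgments have been elicited from several experts, they must be combined into a single estimate. A minimal sketch of an equal-weight linear opinion pool, one standard aggregation approach; the three probabilities below are hypothetical expert judgments, not values from this text:

```python
def linear_pool(probabilities, weights=None):
    """Combine experts' probabilities for the same event into one estimate.

    With no weights supplied, every expert counts equally (the
    'equal-weight' pool often used as a baseline aggregation).
    """
    if weights is None:
        weights = [1.0 / len(probabilities)] * len(probabilities)
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * p for w, p in zip(weights, probabilities))

# Three hypothetical experts judge P(abundance > 1000):
judgments = [0.20, 0.35, 0.50]
pooled = linear_pool(judgments)  # -> 0.35
```

Performance-based weights (rather than equal ones) can be substituted without changing the pooling code; only the `weights` argument differs.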
Methods for eliciting probabilistic judgments include:
- Fixed value methods. Estimate the probability of being higher or lower than a selected value. “What is the probability that abundance (or price, or nitrate loading) will be greater than 1000?”
- Fixed probability methods. Estimate the value associated with a specific probability. “Tell me the abundance (or price, or nitrate loading) that you think has only a 5% chance of being exceeded.”
- Interval methods. Estimate probabilities associated with intervals, usually focusing on the median and quartiles. First elicit the upper and lower extremes (usually at a fixed probability of 5%). Then choose a value of abundance such that the true value is equally likely to lie above or below it; this is the median. Next divide the lower range into two bins with equal probability that the true value falls in either; do the same for the upper range.
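The interval method yields a handful of (cumulative probability, value) pairs: the 5% and 95% extremes, the quartiles, and the median. A minimal sketch of turning those elicited points into an approximate cumulative distribution by linear interpolation; the abundance values below are hypothetical:

```python
import numpy as np

# Elicited (cumulative probability, abundance) pairs from one expert:
probs  = [0.05, 0.25, 0.50, 0.75, 0.95]
values = [400., 700., 1000., 1400., 2200.]  # hypothetical abundances

def cdf(x):
    """Approximate P(abundance <= x) by linear interpolation."""
    return float(np.interp(x, values, probs))

def quantile(p):
    """Approximate the abundance with cumulative probability p."""
    return float(np.interp(p, probs, values))

cdf(1000.)      # -> 0.5 (the elicited median)
quantile(0.95)  # -> 2200.0 (the elicited upper extreme)
```

A smooth parametric fit (for example, a lognormal matched to the elicited quantiles) could replace the interpolation, but the piecewise-linear form makes no distributional assumption beyond the expert's stated points.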
- Expert judgments are an essential source of information on the consequences of alternatives.
- There are well-established methods for eliciting judgments from experts that minimize a range of psychological biases.