There are three main types of evaluation criteria: natural criteria, constructed scales, and proxy criteria.
Natural Criteria
Natural criteria are those that follow from the nature of the attribute itself. The most obvious examples are dollars (for financial or economic impacts), hectares (for habitat), probability of occurrence (for discrete events), and so on. It is best to use natural criteria wherever possible: they are the most readily understood criteria, as they directly describe the objective they represent. Unfortunately, natural criteria are not always practical to use, either because of limits on our ability to model them or because of the complexity of the objective. For example, we might like to know the number of moose per unit area, but it may only be possible to estimate with any certainty the number of hectares of moose habitat. In some cases natural criteria simply don't exist; there is no natural unit, for example, for hunter satisfaction. In such cases we might prefer to use a constructed scale instead.
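For concreteness, here is a minimal sketch of how consequence estimates might be recorded when each objective has a natural unit. The objectives, alternatives, and numbers are purely illustrative, not drawn from any real assessment:

```python
# Hypothetical consequence estimates for two alternatives, each reported
# in the natural unit of its objective (all values are illustrative only).
consequences = {
    "Alternative A": {
        "net_cost_dollars": 1_200_000,      # dollars (financial impact)
        "moose_habitat_hectares": 4_500,    # hectares (habitat)
        "prob_local_extirpation": 0.05,     # probability of a discrete event
    },
    "Alternative B": {
        "net_cost_dollars": 800_000,
        "moose_habitat_hectares": 3_100,
        "prob_local_extirpation": 0.12,
    },
}

# Reporting each impact in its own natural unit keeps the meaning of a
# difference between alternatives transparent to decision makers.
for alt, impacts in consequences.items():
    print(alt, impacts)
```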
Multi-attribute evaluation vs. monetization
SDM uses a multi-attribute approach to evaluating costs and benefits, in which the impacts of alternative policies are reported in natural units (quantitative or qualitative). Cost-benefit analysis involves a further step of monetizing those effects using a combination of financial costs and people's stated "willingness to pay" to avoid adverse effects. There are advantages and disadvantages to both. Cost-benefit analysis tends to simplify the decision framework because all costs and benefits are reported in commensurate units (dollars); however, it relies on the value judgments of others external to the decision process to value effects (usually based on survey data with varying degrees of relevance to the decision at hand).

Multi-attribute evaluation focuses more directly on trade-offs among incommensurate endpoints. As a result, it is usually easier for decision makers to understand the true nature of the impacts under consideration. Because a multi-attribute approach does not involve controversial monetization methods, it involves fewer and more transparent assumptions, which tends to facilitate more direct scrutiny of the scientific assumptions used in the analysis. In contrast to cost-benefit analysis, a multi-attribute approach relies heavily on the decision-making team or local stakeholders to assess the relative value or importance of effects.

A multi-attribute approach does not preclude a formal cost-benefit analysis; one can be conducted to augment the information from a multi-attribute evaluation. However, a careful multi-attribute evaluation is a necessary first step whether impacts will subsequently be monetized or not.
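To make the contrast concrete, here is a minimal sketch of the same consequence estimates presented two ways: as a multi-attribute summary in natural units, and as a single monetized total. The impacts and the willingness-to-pay figure are entirely hypothetical assumptions for illustration:

```python
# Hypothetical consequences for one alternative, in natural units.
impacts = {
    "cost_dollars": 800_000,         # direct financial cost
    "habitat_lost_hectares": 1_400,  # hectares of moose habitat lost
}

# Multi-attribute presentation: report each impact in its own unit and
# leave the value trade-off to decision makers and stakeholders.
print("Multi-attribute:", impacts)

# Cost-benefit presentation: convert non-monetary impacts to dollars using
# an externally derived willingness-to-pay estimate (assumed here).
wtp_per_hectare = 250  # hypothetical $/ha from a stated-preference survey
monetized_total = (impacts["cost_dollars"]
                   + impacts["habitat_lost_hectares"] * wtp_per_hectare)
print("Monetized total: $", monetized_total)
```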
Constructed Scales
Constructed scales report an impact directly, but do so using a scale that is built for the decision at hand rather than one already in wide use. Well-known examples include:
- Dow Jones Industrial Average
- Richter Scale for earthquakes
- Apgar scale for newborns
- Grade Point Average for students
- Michelin Rating Systems for restaurants
Over time these have become so widely used and commonly interpreted that they function almost like natural criteria. Constructed scales are a practical solution for handling difficult or complex indicators, and they range in quality from simple survey-type scales to sophisticated and highly specific impact descriptors. A common and, in our context, mediocre type is the simple numeric rating scale, in which an expert is asked to select the number that best represents the expected impact of an alternative.
While simple to design and administer, these kinds of scales are of limited value. The main problem is ambiguity about exactly what is meant by a score of two relative to a score of five or seven. If one alternative scores five and another scores seven, how much better is the second alternative than the first? Remember, at some point the decision maker may have to trade off this difference against some other criterion, such as dollars. The more precise we can be in defining the difference between two alternatives, the better.
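One way to reduce that ambiguity is to tie each level of the scale to an explicit, agreed impact description (the defined impact scales listed later in this section). The sketch below contrasts a bare numeric rating with a defined scale; the levels and wording are hypothetical, not taken from any particular assessment:

```python
# A bare numeric rating carries no shared meaning on its own: what does "5" mean,
# and how much worse is it than "3"?
bare_rating = 5

# A defined impact scale ties each level to an explicit description, so a
# difference between levels can be interpreted and traded off against other
# criteria. (Levels and wording below are hypothetical examples.)
defined_impact_scale = {
    1: "No detectable change in the moose population",
    2: "Local declines of less than 10%, recovery within 5 years",
    3: "Population decline of 10-25%, recovery within 10 years",
    4: "Population decline of more than 25%, uncertain recovery",
}

score = 3
print(f"Score {score}: {defined_impact_scale[score]}")
```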
Also, the scale does not provide any opportunity to express the degree of confidence the expert has in the response he or she is giving. The expert might be highly confident in one number but making a wild guess at another. The confidence surrounding an expert's judgment could be a critical factor for a decision maker, for example if the decision maker is risk-averse.
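A simple way to capture this is to record a range and a confidence statement alongside the expert's best estimate. The structure and field names below are a hypothetical sketch, not a prescribed format:

```python
from dataclasses import dataclass

# A hypothetical structure for recording an expert judgment together with
# the expert's confidence, rather than a single bare score.
@dataclass
class ExpertJudgment:
    best_estimate: float   # the expert's central estimate on the scale
    low: float             # plausible lower bound
    high: float            # plausible upper bound
    confidence: str        # e.g. "high", "moderate", "guess"

# One judgment given with confidence, another that is essentially a guess.
sheep_impact = ExpertJudgment(best_estimate=3, low=3, high=4, confidence="high")
hunter_satisfaction = ExpertJudgment(best_estimate=5, low=2, high=7, confidence="guess")

for name, j in [("sheep impact", sheep_impact),
                ("hunter satisfaction", hunter_satisfaction)]:
    print(f"{name}: {j.best_estimate} (range {j.low}-{j.high}, confidence: {j.confidence})")
```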
Risk-based scales are often even less helpful. Here, experts are asked to define the level of risk (low, medium, high, etc.) to some endpoint (say wild sheep) associated with a proposed alternative. It is almost meaningless to know that the risks posed by a given policy alternative to sheep are considered “medium” as opposed to high or low (See Box 2). All we understand is that in some vague way, an ambiguous aspect of sheep well-being is different, and somehow better in one case than another. How much significance should we read into this difference?
There are various kinds of constructed scales:
- Defined Impact Scales
- Quality / Quantity Scales
- Value Models or Calculated Indices (see the sketch after this list)
- Pictures
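As one illustration of the last category, here is a minimal sketch of a calculated index that combines several measurable sub-attributes into a single constructed score for an objective. The sub-attributes, normalization bounds, and weights are invented for illustration; in practice they would be defined and agreed by the decision-making team:

```python
# A minimal sketch of a calculated index (value model): several measurable
# sub-attributes are combined into one constructed score for an objective.
# The sub-attributes, bounds, and weights here are hypothetical.

def habitat_quality_index(forage_ha, cover_ha, road_density_km_per_km2):
    """Combine sub-attributes into a 0-1 habitat quality index."""
    # Normalize each sub-attribute to 0-1 against assumed best/worst bounds.
    forage_score = min(forage_ha / 5_000, 1.0)
    cover_score = min(cover_ha / 2_000, 1.0)
    road_score = max(1.0 - road_density_km_per_km2 / 2.0, 0.0)

    # Weighted combination; the weights reflect assumed relative importance.
    weights = {"forage": 0.4, "cover": 0.3, "roads": 0.3}
    return (weights["forage"] * forage_score
            + weights["cover"] * cover_score
            + weights["roads"] * road_score)

print(round(habitat_quality_index(3_800, 1_500, 0.8), 2))
```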
Key Ideas
- Natural criteria directly describe the objectives and should be used whenever possible
- Multi-attribute evaluation is more transparent and trade-off focused than monetization
- Constructed scales are designed for the decision at hand and report impacts directly, but poorly defined scales can be ambiguous