Elementary effects method

The elementary effects (EE) method is one of the most widely used screening methods in sensitivity analysis. It is applied to identify non-influential inputs of a computationally costly mathematical model, or of a model with a large number of inputs, where the cost of estimating other sensitivity analysis measures such as the variance-based measures would not be affordable. Like all screening methods, the EE method provides qualitative sensitivity measures, i.e. measures which allow non-influential inputs to be identified and the input factors to be ranked in order of importance, but which do not quantify exactly the relative importance of the inputs.

Methodology

To exemplify the EE method, let us consider a mathematical model with  k input factors. Let  Y be the output of interest (a scalar for simplicity):

 Y = f(X_1, X_2, \ldots, X_k).

The original EE method of Morris [1] provides two sensitivity measures for each input factor: the mean  \mu and the standard deviation  \sigma of the distribution of its elementary effects, both defined below.

These two measures are obtained through a design based on the construction of a series of trajectories in the space of the inputs, where inputs are randomly moved One-At-a-Time (OAT). In this design, each model input is assumed to vary across p selected levels in the space of the input factors. The region of experimentation \Omega is thus a k-dimensional p-level grid.

Each trajectory is composed of (k+1) points since the input factors move one at a time by a step  \Delta in \{1/(p-1), 2/(p-1), \ldots, 1-1/(p-1)\} while all the others remain fixed.

Along each trajectory the so-called elementary effect for each input factor is defined as:

 d_i(\mathbf{X}) = \frac{Y(X_1, \ldots, X_{i-1}, X_i + \Delta, X_{i+1}, \ldots, X_k) - Y(\mathbf{X})}{\Delta} ,

where  \mathbf{X} = (X_1, X_2, ... X_k) is any selected value in  \Omega such that the transformed point is still in  \Omega for each index  i=1,\ldots, k.

 r elementary effects are estimated for each input,  d_i\left(X^{(1)} \right), d_i\left( X^{(2)} \right), \ldots, d_i\left( X^{(r)} \right) , by randomly sampling  r points  X^{(1)}, X^{(2)}, \ldots , X^{(r)}. Usually  r is in the range 4–10, depending on the number of input factors, on the computational cost of the model and on the choice of the number of levels  p , since a high number of levels to be explored needs to be balanced by a high number of trajectories in order to obtain an exploratory sample. It can be shown that a convenient choice of the parameters  p and  \Delta is  p even and  \Delta equal to  p/[2(p-1)], as this ensures equal probability of sampling in the input space.
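As an illustration, the following Python sketch builds one such trajectory on the unit hypercube and computes the elementary effects along it. This is a minimal sketch rather than the full design of Morris: it assumes  p even, allows only upward steps of size  \Delta = p/[2(p-1)] from the lower half of the grid levels, and the helper names morris_trajectory and elementary_effects are illustrative.

    import numpy as np

    def morris_trajectory(k, p, rng):
        """Build one random OAT trajectory of k+1 points on a k-dimensional p-level grid."""
        delta = p / (2 * (p - 1))
        levels = np.arange(p // 2) / (p - 1)   # base levels that keep x + delta inside [0, 1]
        base = rng.choice(levels, size=k)      # random starting point of the trajectory
        order = rng.permutation(k)             # random order in which the factors move
        traj, x = [base.copy()], base.copy()
        for i in order:
            x = x.copy()
            x[i] += delta                      # one-at-a-time step of size delta
            traj.append(x)
        return np.array(traj), order, delta

    def elementary_effects(f, traj, order, delta):
        """One elementary effect per input factor from the k+1 points of a trajectory."""
        y = [f(x) for x in traj]               # k+1 model evaluations
        d = np.empty(len(order))
        for step, i in enumerate(order):
            d[i] = (y[step + 1] - y[step]) / delta
        return d

Each trajectory therefore costs (k+1) model evaluations and yields one elementary effect per input factor.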

When the input factors are not uniformly distributed, the best practice is to sample in the space of the quantiles and to obtain the input values using inverse cumulative distribution functions. Note that in this case  \Delta equals the step taken by the inputs in the space of the quantiles.
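A minimal sketch of this transformation, assuming purely for illustration that the second input follows a standard normal distribution while the others are uniform on [0, 1]:

    import numpy as np
    from scipy.stats import norm

    def to_physical_space(u):
        """Map a point from quantile space [0, 1]^k to the physical input space."""
        x = np.asarray(u, dtype=float).copy()
        q = np.clip(x[1], 1e-3, 1 - 1e-3)      # guard against infinite quantiles at the grid ends
        x[1] = norm.ppf(q)                     # inverse CDF of the assumed normal input
        return x

The trajectories are still built on the p-level grid in quantile space; the model is simply evaluated at to_physical_space(x), and  \Delta keeps its meaning as the step taken in quantile space.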

The two measures  \mu and  \sigma are defined as the mean and the standard deviation of the distribution of the elementary effects of each input:

 \mu_i = \frac{1}{r} \sum_{j=1}^r d_i \left( X^{(j)} \right) ,
 \sigma_i = \sqrt{ \frac{1}{(r-1)} \sum_{j=1}^r \left( d_i \left( X^{(j)} \right) - \mu_i  \right)^2} .
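Continuing the sketch above, the two measures can be estimated from the r-by-k matrix of elementary effects; the toy model f and the values of k, p and r below are arbitrary illustrations:

    import numpy as np

    f = lambda x: x[0] + 2 * x[1] ** 2 + x[0] * x[2]   # toy model, for illustration only
    k, p, r = 3, 4, 10
    rng = np.random.default_rng(0)

    # One row of elementary effects per trajectory, one column per input factor
    # (morris_trajectory and elementary_effects are the helpers sketched earlier).
    D = np.array([elementary_effects(f, *morris_trajectory(k, p, rng))
                  for _ in range(r)])

    mu = D.mean(axis=0)                # mu_i
    sigma = D.std(axis=0, ddof=1)      # sigma_i (ddof=1 gives the 1/(r-1) factor)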

These two measures need to be read together (e.g. on a two-dimensional graph) in order to rank the input factors in order of importance and to identify those inputs which do not influence the output variability. Low values of both  \mu and  \sigma correspond to a non-influential input.

An improvement of this method was developed by Campolongo et al.[2] who proposed a revised measure  \mu^* , which on its own is sufficient to provide a reliable ranking of the input factors. The revised measure is the mean of the distribution of the absolute values of the elementary effects of the input factors:

 \mu_i^* = \frac{1}{r} \sum_{j=1}^r \left| d_i \left( X^{(j)} \right) \right| .
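In the sketch above,  \mu^* is obtained from the same matrix D of elementary effects:

    mu_star = np.abs(D).mean(axis=0)   # mu*_i: mean of the absolute elementary effects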

The use of  \mu^* solves the problem of elementary effects of opposite signs, which occur when the model is non-monotonic and which cancel each other out when averaged, thus resulting in a low value of  \mu .

An efficient technical scheme to construct the trajectories used in the EE method is presented in the original paper by Morris, while an improved strategy aimed at better exploring the input space is proposed by Campolongo et al.

References

  1. Morris, M. D. (1991). Factorial sampling plans for preliminary computational experiments. Technometrics, 33, 161–174.
  2. Campolongo, F., J. Cariboni, and A. Saltelli (2007). An effective screening design for sensitivity analysis of large models. Environmental Modelling and Software, 22, 1509–1518.