Family-wise error rate

In statistics, the family-wise error rate (FWER) is the probability of making one or more false discoveries, or type I errors, among all the hypotheses when performing multiple hypothesis tests.

History

Tukey coined the terms "experimentwise error rate" and "per-experiment error rate" for the error rate that the researcher should control when performing multiple hypothesis tests within a single experiment.

Background

Within the statistical framework, there are several reasons to group hypotheses into a "family":

  1. To take into account the selection effect due to data dredging
  2. To ensure simultaneous correctness of a set of inferences as to guarantee a correct overall decision

To summarize, a family could best be defined by the potential selective inference that is being faced: a family is the smallest set of items of inference in an analysis, interchangeable with respect to their meaning for the goal of research, from which selection of results for action, presentation or highlighting could be made (Benjamini).

Classification of multiple hypothesis tests

The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have m null hypotheses, denoted by H1, H2, ..., Hm. Using a statistical test, we reject a null hypothesis if the test is declared significant, and we do not reject it if the test is non-significant. Summing each type of outcome over the Hi gives the following table and related random variables:

                           Null hypothesis is true (H0)   Alternative hypothesis is true (H1)   Total
Declared significant       V                              S                                     R
Declared non-significant   U                              T                                     m - R
Total                      m_0                            m - m_0                               m

Definition

The FWER is the probability of making at least one type I error in the family,

 \mathrm{FWER} = \Pr(V \ge 1),

or equivalently,

 \mathrm{FWER} = 1 - \Pr(V = 0).

Thus, by assuring \mathrm{FWER} \le \alpha, the probability of making even one type I error in the family is controlled at level \alpha.
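For m independent tests each performed at level \alpha, \Pr(V = 0) = (1 - \alpha)^m, so the uncorrected FWER grows rapidly with m. A minimal Python sketch illustrating this (function name is illustrative):

```python
def fwer_independent(alpha: float, m: int) -> float:
    """FWER = 1 - Pr(V = 0) = 1 - (1 - alpha)^m for m independent tests,
    each performed at level alpha with a true null hypothesis."""
    return 1.0 - (1.0 - alpha) ** m

# At alpha = 0.05 the uncorrected FWER exceeds 0.99 by m = 100 tests:
for m in (1, 5, 20, 100):
    print(m, round(fwer_independent(0.05, m), 3))
```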

A procedure controls the FWER in the weak sense if the FWER control at level \alpha is guaranteed only when all null hypotheses are true (i.e., when m_0 = m, so the global null hypothesis is true).

A procedure controls the FWER in the strong sense if the FWER control at level \alpha is guaranteed for any configuration of true and non-true null hypotheses (including the global null hypothesis).

Controlling procedures

For a broader coverage related to this topic, see Multiple testing correction.
Further information: List of post hoc tests

The following is a concise review of some classical solutions that ensure strong FWER control at level \alpha, followed by some newer solutions.

The Bonferroni procedure

Main article: Bonferroni correction
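The Bonferroni procedure rejects H_i whenever p_i ≤ α/m; by the union bound this controls the FWER at level α under arbitrary dependence among the tests. A minimal sketch in Python (names are illustrative):

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Reject H_i iff p_i <= alpha / m; by the union bound this controls
    the FWER at level alpha under arbitrary dependence of the tests."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

# With m = 4 tests at alpha = 0.05, the per-test threshold is 0.0125:
print(bonferroni_reject([0.01, 0.04, 0.03, 0.005]))
```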

The Šidák procedure

Main article: Šidák correction
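The Šidák correction tests each hypothesis at level 1 − (1 − α)^(1/m), which gives exact FWER control for independent tests and is slightly less conservative than the Bonferroni threshold α/m. A minimal sketch in Python (the function name is illustrative):

```python
def sidak_threshold(alpha: float, m: int) -> float:
    """Per-test significance level t solving 1 - (1 - t)^m = alpha;
    exact FWER control when the m tests are independent."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

# Slightly less conservative than Bonferroni's alpha / m:
print(sidak_threshold(0.05, 10))   # ~0.00512, vs 0.005 for Bonferroni
```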

Tukey's procedure

Main article: Tukey's range test

Holm's step-down procedure (1979)

This procedure is uniformly more powerful than the Bonferroni procedure.[2] It controls the family-wise error rate for all m hypotheses at level α in the strong sense because it is a closed testing procedure, in which each intersection hypothesis is tested using the simple Bonferroni test.
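A minimal Python sketch of Holm's step-down procedure (the thresholds α/m, α/(m−1), ... are the standard Holm ones; the names are illustrative):

```python
def holm_reject(p_values, alpha=0.05):
    """Holm (1979) step-down: sort the p-values in ascending order and
    compare the k-th smallest (0-based k) to alpha / (m - k); stop at the
    first failure and reject every hypothesis considered before it."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # step-down: once one comparison fails, stop
    return reject

# Holm rejects all four here, while Bonferroni (threshold 0.0125)
# would reject only the two smallest p-values:
print(holm_reject([0.01, 0.015, 0.03, 0.005]))
```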

Hochberg's step-up procedure (1988)

Hochberg's step-up procedure (1988) is performed using the following steps:[3]

  1. Order the p-values P(1), ..., P(m) from smallest to largest, and let H(1), ..., H(m) be the corresponding hypotheses.
  2. For a given α, find the largest k such that P(k) ≤ α / (m + 1 − k).
  3. Reject the hypotheses H(1), ..., H(k) and retain the rest.

Hochberg's procedure is more powerful than Holm's. Nevertheless, while Holm's is a closed testing procedure (and thus, like Bonferroni, places no restriction on the joint distribution of the test statistics), Hochberg's is based on the Simes test, so it holds only under non-negative dependence.
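A minimal Python sketch of the step-up scan (the thresholds α/(m − k + 1) are the standard Hochberg ones; the names are illustrative):

```python
def hochberg_reject(p_values, alpha=0.05):
    """Hochberg (1988) step-up: scan the sorted p-values from the largest
    down; the first rank k with P(k) <= alpha / (m - k + 1) triggers
    rejection of H(1), ..., H(k)."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ascending ranks
    reject = [False] * m
    for k in range(m, 0, -1):          # 1-based rank, largest p first
        if p_values[order[k - 1]] <= alpha / (m - k + 1):
            for i in order[:k]:        # reject H(1), ..., H(k)
                reject[i] = True
            break
    return reject

# Hochberg rejects all three (0.04 <= 0.05/1 at k = m), whereas
# Holm and Bonferroni would reject none (0.04 > 0.05/3):
print(hochberg_reject([0.04, 0.04, 0.04]))
```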

Dunnett's correction

Main article: Dunnett's test

Charles Dunnett (1955, 1966) described an alternative alpha error adjustment when k groups are compared to the same control group. Now known as Dunnett's test, this method is less conservative than the Bonferroni adjustment.

Scheffé's method

Main article: Scheffé's method

Resampling procedures

The procedures of Bonferroni and Holm control the FWER under any dependence structure of the p-values (or, equivalently, of the individual test statistics). Essentially, this is achieved by accommodating a "worst-case" dependence structure (which is close to independence for most practical purposes). But such an approach is conservative if the dependence is actually positive. To give an extreme example, under perfect positive dependence there is effectively only one test, and thus the FWER is uninflated.

Accounting for the dependence structure of the p-values (or of the individual test statistics) produces more powerful procedures. This can be achieved by applying resampling methods, such as bootstrapping and permutation methods. The procedure of Westfall and Young (1993) requires a certain condition that does not always hold in practice (namely, subset pivotality).[4] The procedures of Romano and Wolf (2005a,b) dispense with this condition and are thus more generally valid.[5][6]
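A minimal single-step permutation sketch in the spirit of Westfall and Young's maxT adjustment (a simplified illustration, not their exact algorithm; the two-sample mean-difference statistic and all names are chosen for the example):

```python
import random

def max_t_adjusted(group_a, group_b, n_perm=1000, seed=0):
    """Single-step maxT permutation adjustment (simplified sketch in the
    spirit of Westfall & Young 1993). group_a / group_b are lists of
    observations; each observation is a tuple of m measurements.
    Returns one FWER-adjusted p-value per measurement (hypothesis)."""
    rng = random.Random(seed)
    m = len(group_a[0])
    n_a = len(group_a)
    pooled = list(group_a) + list(group_b)

    def abs_mean_diffs(sample_a, sample_b):
        # absolute difference of per-measurement group means
        return [abs(sum(x[j] for x in sample_a) / len(sample_a)
                    - sum(x[j] for x in sample_b) / len(sample_b))
                for j in range(m)]

    observed = abs_mean_diffs(group_a, group_b)
    exceed = [0] * m
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # permute the group labels
        max_stat = max(abs_mean_diffs(pooled[:n_a], pooled[n_a:]))
        for j in range(m):
            # comparing every hypothesis against the MAXIMUM statistic is
            # what lets the adjustment capture the joint dependence
            if max_stat >= observed[j]:
                exceed[j] += 1
    return [count / n_perm for count in exceed]
```

Because the permutation distribution of the maximum statistic reflects the actual dependence between the tests, this can be much less conservative than Bonferroni when the test statistics are positively correlated.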

Alternative approaches

Further information: False discovery rate

FWER control exerts a more stringent control over false discoveries than false discovery rate (FDR) procedures do. FWER control limits the probability of at least one false discovery, whereas FDR control limits (in a loose sense) the expected proportion of false discoveries. Thus, FDR procedures have greater power at the cost of an increased rate of type I errors, i.e., of rejecting null hypotheses that are in fact true.[7]

On the other hand, FWER control is less stringent than per-family error rate control, which limits the expected number of errors per family. Because FWER control is concerned only with at least one false discovery, unlike per-family error rate control it does not treat multiple simultaneous false discoveries as any worse than one false discovery. The Bonferroni correction is often regarded as merely controlling the FWER, but it in fact also controls the per-family error rate.[8]
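The last claim follows from linearity of expectation: under the Bonferroni correction, each of the m_0 true null hypotheses is rejected with probability at most α/m, so

```latex
\mathrm{PFER} = \operatorname{E}[V]
  = \sum_{i \,:\, H_i \text{ true}} \Pr(\text{reject } H_i)
  \le m_0 \cdot \frac{\alpha}{m} \le \alpha .
```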

References

  1. Hochberg, Y.; Tamhane, A. C. (1987). Multiple Comparison Procedures. New York: Wiley. ISBN 0-471-82222-1.
  2. Aickin, M; Gensler, H (1996). "Adjusting for multiple testing when reporting research results: the Bonferroni vs Holm methods". American Journal of Public Health 86 (5): 726–728. doi:10.2105/ajph.86.5.726. PMC 1380484. PMID 8629727.
  3. Hochberg, Yosef (1988). "A Sharper Bonferroni Procedure for Multiple Tests of Significance" (PDF). Biometrika 75 (4): 800–802. doi:10.1093/biomet/75.4.800.
  4. Westfall, P. H.; Young, S. S. (1993). Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment. New York: John Wiley. ISBN 0-471-55761-7.
  5. Romano, J.P.; Wolf, M. (2005a). "Exact and approximate stepdown methods for multiple hypothesis testing". Journal of the American Statistical Association 100: 94–108. doi:10.1198/016214504000000539.
  6. Romano, J.P.; Wolf, M. (2005b). "Stepwise multiple testing as formalized data snooping". Econometrica 73: 1237–1282. doi:10.1111/j.1468-0262.2005.00615.x.
  7. Shaffer, J. P. (1995). "Multiple hypothesis testing". Annual Review of Psychology 46: 561–584. doi:10.1146/annurev.ps.46.020195.003021.
  8. Frane, Andrew (2015). "Are per-family Type I error rates relevant in social and behavioral science?". Journal of Modern Applied Statistical Methods 14 (1): 12–23.

This article is issued from Wikipedia - version of the Sunday, May 01, 2016. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.