False discovery rate

The false discovery rate (FDR) is one way of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the expected proportion of rejected null hypotheses that were incorrect rejections ("false discoveries").[1] FDR-controlling procedures provide less stringent control of type I errors compared to familywise error rate (FWER) controlling procedures (such as the Bonferroni correction), which control the probability of at least one type I error. Thus, FDR-controlling procedures have greater power, at the cost of increased rates of type I errors.[2]

History

Technological motivations

The modern widespread use of the FDR is believed to stem from, and be motivated by, the development of technologies that allowed the collection and analysis of a large number of distinct variables in several individuals (e.g., the expression level of each of 10,000 different genes in 100 different persons).[3] By the late 1980s and 1990s, the development of "high-throughput" sciences, such as genomics, allowed for rapid data acquisition. This, coupled with the growth in computing power, made it possible to seamlessly perform hundreds or thousands of statistical tests on a given data set. The technology of microarrays was a prototypical example, as it enabled thousands of genes to be tested simultaneously for differential expression between two biological conditions.[4]

As high-throughput technologies became common, technological and/or financial constraints led researchers to collect datasets with relatively small sample sizes (e.g., few individuals being tested) and large numbers of variables being measured per sample (e.g., thousands of gene expression levels). In these datasets, too few of the measured variables showed statistical significance after classic correction for multiple tests with standard multiple comparison procedures. This created a need within many scientific communities to abandon FWER and unadjusted multiple hypothesis testing in favor of other ways to highlight and rank, in publications, those variables showing marked effects across individuals or treatments that would otherwise be dismissed as non-significant after standard correction for multiple tests. In response, a variety of error rates have been proposed (and have become commonly used in publications) that are less conservative than FWER in flagging possibly noteworthy observations.

The false discovery rate concept was formally described by Yoav Benjamini and Yosef Hochberg in 1995[1] as a less conservative and arguably more appropriate approach for identifying the important few from the trivial many effects tested. The FDR has been particularly influential, as it was the first alternative to the FWER to gain broad acceptance in many scientific fields (especially in the life sciences, from genetics to biochemistry, oncology and plant sciences).[3] In 2005, the Benjamini and Hochberg paper from 1995 was identified as one of the 25 most-cited statistical papers.[5]

Related statistical concepts

Prior to the 1995 introduction of the FDR concept, various precursor ideas had been considered in the statistics literature. In 1979, Holm proposed the Holm procedure,[6] a stepwise algorithm for controlling the FWER that is at least as powerful as the well-known Bonferroni adjustment. This stepwise algorithm sorts the p-values and sequentially rejects the hypotheses starting from the smallest p-value.

Benjamini (2010)[3] said that the false discovery rate, and the paper Benjamini and Hochberg (1995), had their origins in two papers concerned with multiple testing:

- Schweder and Spjøtvoll (1982)[7] suggested plotting the ranked p-values and assessing the number of true null hypotheses (m_0) via an eye-fitted line starting from the largest p-values; the p-values deviating from this straight line should then correspond to the false null hypotheses.
- Branko Soric (1989)[9] introduced the terminology of "discoveries" in the multiple hypothesis testing context, using the expected number of false discoveries divided by the number of discoveries as a warning that "a large part of statistical discoveries may be wrong".

Definitions

Based on the definitions in the table below, we can define Q as the proportion of false discoveries among the discoveries \left ( Q = \frac{V}{R} \right ). The false discovery rate is then given by:[1]

\mathrm{FDR} = Q_e =  \mathrm{E}\!\left [Q \right ] = \mathrm{E}\!\left [\frac{V}{V+S}\right ] = \mathrm{E}\!\left [\frac{V}{R}\right ],

where  \frac{V}{R} is defined to be 0 when  R = 0 .

FDR-controlling procedures aim to keep this value below a pre-specified threshold q.

Classification of multiple hypothesis tests

The following table defines the possible outcomes when testing multiple null hypotheses. Suppose we have a number m of null hypotheses, denoted by: H_1, H_2, ..., H_m. Using a statistical test, we reject the null hypothesis if the test is declared significant, and we do not reject it if the test is non-significant. Summing each type of outcome over the H_i gives the following table and related random variables:

                            Null hypothesis is true (H0)   Alternative hypothesis is true (H1)   Total
Declared significant        V                              S                                     R
Declared non-significant    U                              T                                     m - R
Total                       m_0                            m - m_0                               m

Here:
- m is the total number of hypotheses tested;
- m_0 is the number of true null hypotheses (an unknown parameter);
- m - m_0 is the number of true alternative hypotheses;
- V is the number of false positives (type I errors), also called "false discoveries";
- S is the number of true positives, also called "true discoveries";
- T is the number of false negatives (type II errors);
- U is the number of true negatives;
- R = V + S is the number of rejected null hypotheses, i.e. the "discoveries", whether true or false.
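
For a hypothetical illustration: if m = 100 hypotheses are tested and R = 10 of them are rejected, of which V = 4 are in fact true null hypotheses, then the realized proportion of false discoveries is Q = \frac{V}{R} = \frac{4}{10} = 0.4. The FDR is the expectation of this proportion over replications of the experiment, not its realized value in any single data set.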

Properties

Adaptive and scalable

A multiplicity procedure that controls the FDR criterion is adaptive and scalable: controlling the FDR can be very permissive (if the data justify it) or conservative (acting close to control of the FWER for sparse problems), depending on the number of hypotheses tested and the level of significance.[3]

The FDR criterion adapts so that the same number of false discoveries (V) will have different implications, depending on the total number of discoveries (R). This contrasts with the familywise error rate criterion. For example, if inspecting 100 hypotheses (say, 100 genetic mutations or SNPs for association with some phenotype in some population):

- If we make 4 discoveries (R), having 2 of them be false discoveries (V) is often very costly.
- Whereas, if we make 50 discoveries (R), having 2 of them be false discoveries (V) is often not very costly.

The FDR criterion is scalable in that the same proportion of false discoveries out of the total number of discoveries (Q) remains sensible for different numbers of total discoveries (R). For example:

- If we make 100 discoveries (R), having 5 of them be false discoveries (Q = 5%) may not be very costly.
- Similarly, if we make 1000 discoveries (R), having 50 of them be false discoveries (as before, Q = 5%) may still not be very costly.

The FDR criterion is also scalable in the sense that if a set of hypotheses is corrected as a whole, or split into two sets that are each corrected separately, the discoveries in the combined analysis are (about) the same as in the separate analyses. For this to hold, the sub-studies should be large, with some discoveries in each.

Dependency among the test statistics

Controlling the FDR using the linear step-up BH procedure, at level q, has several properties related to the dependency structure between the test statistics of the m null hypotheses that are being corrected for. If the test statistics are:

- Independent: \mathrm{FDR} \le \frac{m_0}{m} q.[10]
- Independent and continuous: \mathrm{FDR} = \frac{m_0}{m} q.[1]
- Positively dependent (PRDS): \mathrm{FDR} \le \frac{m_0}{m} q.[10]
- Arbitrarily dependent: \mathrm{FDR} \le \frac{m_0}{m} q \left( 1 + \tfrac{1}{2} + \tfrac{1}{3} + \cdots + \tfrac{1}{m} \right); running the procedure at level q / \sum_{i=1}^m \tfrac{1}{i} therefore restores control at level q (the Benjamini–Hochberg–Yekutieli procedure below).[10]

Proportion of true hypotheses

If all of the null hypotheses are true (m_0 = m), then controlling the FDR at level q guarantees control over the FWER (this is also called "weak control of the FWER"): \mathrm{FWER} = P\left( V \ge 1 \right) = E\left( \frac{V}{R} \right) = \mathrm{FDR} \le q, simply because the event of rejecting at least one true null hypothesis \{V \ge 1\} is exactly the event \{V/R = 1\}, and the event \{V = 0\} is exactly the event \{V/R = 0\} (when V = R = 0, V/R = 0 by definition).[1] But if there are some true discoveries to be made (m_0 < m), then \mathrm{FWER} \ge \mathrm{FDR}. In that case there is room for improving detection power. It also means that any procedure that controls the FWER will also control the FDR.
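
This identity can be checked numerically. Below is a minimal Monte Carlo sketch (the simulation settings are our own, hypothetical: independent Uniform(0,1) p-values with all m null hypotheses true), using the linear step-up BH procedure described later in this article; the empirical FWER and FDR coincide, and both stay at or below q:

    import numpy as np

    rng = np.random.default_rng(0)
    m, q, n_sim = 50, 0.05, 20000   # hypothetical simulation settings

    def bh_num_rejections(p, q):
        """Linear step-up (BH): return R, the number of rejected hypotheses."""
        p_sorted = np.sort(p)
        thresholds = q * np.arange(1, p.size + 1) / p.size
        below = np.nonzero(p_sorted <= thresholds)[0]
        return 0 if below.size == 0 else below[-1] + 1

    fwer_events = []
    fdr_values = []
    for _ in range(n_sim):
        p = rng.uniform(size=m)        # all m nulls true: p-values ~ Uniform(0,1)
        R = bh_num_rejections(p, q)    # every rejection is false here, so V = R
        fwer_events.append(R >= 1)     # the event {V >= 1} is the event {R >= 1}
        fdr_values.append(1.0 if R >= 1 else 0.0)   # V/R = 1 if R >= 1, else 0

    # The two averages agree by construction, and both are at or below q.
    print(np.mean(fwer_events), np.mean(fdr_values))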

Bayesian approaches

Connections have been made between the FDR and Bayesian approaches (including empirical Bayes methods),[11][12][13] thresholding wavelet coefficients and model selection,[14][15][16][17] and generalizing the confidence interval into the false coverage statement rate (FCR).[18]

Controlling procedures

For broader coverage of this topic, see Multiple testing correction.

The setting for many procedures is as follows: we have m null hypotheses H_1 \ldots H_m being tested and P_1 \ldots P_m their corresponding p-values. We order these p-values in increasing order and denote them by P_{(1)} \ldots P_{(m)}; a small p-value corresponds to a large test statistic. A procedure that starts from the least significant p-value and steps up through the ordered test statistics is called a step-up procedure; conversely, a step-down procedure moves from the largest test statistic toward smaller ones.

Benjamini–Hochberg procedure

The Benjamini–Hochberg procedure (BH step-up procedure) controls the false discovery rate (at level \alpha).[1] The procedure works as follows:

  1. For a given \alpha, find the largest k such that P_{(k)} \leq \frac{k}{m} \alpha.
  2. Reject the null hypothesis (i.e. declare positive discoveries) for all H_{(i)} for i = 1, \ldots, k.
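
As an illustration, here is a minimal Python sketch of these two steps (the function name and the example p-values are ours, chosen for illustration only):

    import numpy as np

    def benjamini_hochberg(p_values, alpha=0.05):
        """BH step-up procedure: return a boolean mask of rejected hypotheses."""
        p = np.asarray(p_values, dtype=float)
        m = p.size
        order = np.argsort(p)                         # sort p-values ascending
        thresholds = alpha * np.arange(1, m + 1) / m  # k * alpha / m for k = 1..m
        below = np.nonzero(p[order] <= thresholds)[0]
        reject = np.zeros(m, dtype=bool)
        if below.size:                                # largest k with P_(k) <= k*alpha/m
            reject[order[:below[-1] + 1]] = True      # reject the k smallest p-values
        return reject

    # Hypothetical p-values: only the two smallest survive at alpha = 0.05
    p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
    print(benjamini_hochberg(p))   # [ True  True False False False False False False]

Note that even though P_{(3)} = 0.039 is below the conventional 0.05 level, it exceeds its own threshold 3 \cdot 0.05 / 8 = 0.01875, and no larger k satisfies the condition, so the search stops at k = 2.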

The BH procedure is valid when the m tests are independent, and also in various scenarios of dependence.[10] It also satisfies the inequality:

E(Q) \leq \frac{m_0}{m}\alpha \leq \alpha

If an estimator of m_0 is inserted into the BH procedure, it is no longer guaranteed to achieve FDR control at the desired level.[3] Adjustments may be needed in the estimator and several modifications have been proposed.[19][20][21][22]

The BH procedure was proven to control the FDR in 1995 by Benjamini and Hochberg.[1] In 1986, R. J. Simes offered the same procedure as the "Simes procedure", in order to control the FWER in the weak sense (under the intersection null hypothesis) when the statistics are independent.[23] In 1988, G. Hommel showed that it does not control the FWER in the strong sense in general.[24] Based on the Simes procedure, Yosef Hochberg proposed Hochberg's step-up procedure (1988), which does control the FWER in the strong sense under certain assumptions on the dependence of the test statistics.[25]

Note that the average of the m BH significance thresholds \frac{k\alpha}{m}, k = 1, \ldots, m, is \frac{\alpha}{m} \cdot \frac{m+1}{2} = \frac{\alpha(m+1)}{2m}; this "mean FDR \alpha", or MFDR, is \alpha adjusted for m independent (or positively correlated, see below) tests. The MFDR calculation shown here is for a single value and is not part of the Benjamini and Hochberg method; see AFDR below.

Benjamini–Hochberg–Yekutieli procedure

The Benjamini–Hochberg–Yekutieli procedure controls the false discovery rate under arbitrary dependence assumptions.[10] This refinement modifies the threshold and finds the largest k such that:

P_{(k)} \leq \frac{k}{m \cdot c(m)} \alpha

- If the tests are independent or positively correlated (as in the Benjamini–Hochberg procedure): c(m) = 1.
- Under arbitrary dependence (including the case of negative correlation): c(m) = \sum_{i=1}^m \frac{1}{i}, which can be approximated by using the Euler–Mascheroni constant \gamma:

\sum _{i=1} ^m \frac{1}{i} \approx \ln(m) + \gamma.
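
A sketch of the corresponding adjustment (again our own illustration, not code from the source): the BH search is repeated with each threshold deflated by c(m). For m = 8, c(m) \approx 2.72, so the procedure is markedly more conservative than plain BH.

    import numpy as np

    def benjamini_yekutieli(p_values, alpha=0.05):
        """BY procedure: BH thresholds deflated by c(m) = sum_{i=1}^{m} 1/i."""
        p = np.asarray(p_values, dtype=float)
        m = p.size
        c_m = np.sum(1.0 / np.arange(1, m + 1))  # ~ ln(m) + Euler-Mascheroni gamma
        order = np.argsort(p)
        thresholds = alpha * np.arange(1, m + 1) / (m * c_m)
        below = np.nonzero(p[order] <= thresholds)[0]
        reject = np.zeros(m, dtype=bool)
        if below.size:                           # largest k with P_(k) <= k*alpha/(m*c(m))
            reject[order[:below[-1] + 1]] = True
        return reject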

Using the MFDR and the formulas above, an adjusted MFDR (AFDR) is the minimum mean \alpha for m dependent tests: \mathrm{AFDR} = \frac{\mathrm{MFDR}}{c(m)}.

Another way to address dependence is by bootstrapping and rerandomization.[4][26][27]

Estimating the FDR

Let \pi_0 be the proportion of true null hypotheses among the N hypotheses tested, and \pi_1 = 1 - \pi_0 the proportion of true alternative hypotheses.[28] Then N \pi_0 times the average p-value of the rejected hypotheses, divided by the number of rejections, gives an estimate of the FDR.
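
A closely related plug-in estimator, from the direct approach of Storey (2002),[28] fixes a rejection threshold t: since null p-values are roughly Uniform(0,1), about \pi_0 N t true nulls are expected to fall below t, and dividing by the number of actual rejections estimates the FDR. A minimal sketch (the function name is ours; in practice \pi_0 is itself estimated from the data rather than assumed known):

    import numpy as np

    def plugin_fdr(p_values, t, pi0=1.0):
        """Plug-in FDR estimate when rejecting every p-value <= t."""
        p = np.asarray(p_values, dtype=float)
        expected_false = pi0 * p.size * t   # ~ expected true nulls with p <= t
        R = max(int(np.sum(p <= t)), 1)     # rejections; floor at 1 to avoid 0/0
        return expected_false / R

    # Hypothetical example: many small p-values give a low estimated FDR at t = 0.05
    p = np.concatenate([np.full(20, 0.001), np.linspace(0.05, 1.0, 80)])
    print(plugin_fdr(p, t=0.05, pi0=0.8))   # 0.8 * 100 * 0.05 / 21 ~= 0.19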

False coverage rate

Main article: False coverage rate

The false coverage rate (FCR) is the analogue of the FDR for confidence intervals. The FCR indicates the average rate of false coverage, namely, of not covering the true parameters, among the selected intervals. The FCR gives simultaneous coverage at a 1 - \alpha level for all of the parameters considered in the problem; intervals with simultaneous coverage probability 1 - q can control the FCR to be bounded by q. There are many FCR procedures, such as: Bonferroni-Selected–Bonferroni-Adjusted, Adjusted BH-Selected CIs (Benjamini and Yekutieli (2005)),[18] Bayes FCR (Yekutieli (2008)), and other Bayes methods.[29]

Related error rates

The introduction of the FDR was preceded and followed by many other types of error rates. These include the per-comparison error rate (PCER), the familywise error rate (FWER), generalizations of the FWER and of the FDR such as the k-FWER and the k-FDR,[30][31] the positive false discovery rate (pFDR),[11] and the local false discovery rate.[13]


References

  1. Benjamini, Yoav; Hochberg, Yosef (1995). "Controlling the false discovery rate: a practical and powerful approach to multiple testing" (PDF). Journal of the Royal Statistical Society, Series B 57 (1): 289–300. MR 1325392.
  2. Shaffer, J. P. (1995). "Multiple hypothesis testing". Annual Review of Psychology 46: 561–584.
  3. Benjamini, Y. (2010). "Discovering the false discovery rate". Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72 (4): 405–416. doi:10.1111/j.1467-9868.2010.00746.x.
  4. Storey, John D.; Tibshirani, Robert (2003). "Statistical significance for genome-wide studies" (PDF). Proceedings of the National Academy of Sciences 100 (16): 9440–9445. Bibcode:2003PNAS..100.9440S. doi:10.1073/pnas.1530509100. PMC 170937. PMID 12883005.
  5. Ryan, T. P.; Woodall, W. H. (2005). "The most-cited statistical papers". Journal of Applied Statistics 32 (5): 461. doi:10.1080/02664760500079373.
  6. Holm, S. (1979). "A simple sequentially rejective multiple test procedure". Scandinavian Journal of Statistics 6 (2): 65–70. JSTOR 4615733. MR 538597.
  7. Schweder, T.; Spjøtvoll, E. (1982). "Plots of P-values to evaluate many tests simultaneously". Biometrika 69 (3): 493. doi:10.1093/biomet/69.3.493.
  8. Hochberg, Y.; Benjamini, Y. (1990). "More powerful procedures for multiple significance testing". Statistics in Medicine 9 (7): 811–818. doi:10.1002/sim.4780090710. PMID 2218183.
  9. Soric, Branko (June 1989). "Statistical "Discoveries" and Effect-Size Estimation". Journal of the American Statistical Association 84 (406): 608–610. doi:10.1080/01621459.1989.10478811. JSTOR 2289950.
  10. Benjamini, Yoav; Yekutieli, Daniel (2001). "The control of the false discovery rate in multiple testing under dependency" (PDF). Annals of Statistics 29 (4): 1165–1188. doi:10.1214/aos/1013699998. MR 1869245.
  11. Storey, John D. (2003). "The positive false discovery rate: A Bayesian interpretation and the q-value" (PDF). Annals of Statistics 31 (6): 2013–2035. doi:10.1214/aos/1074290335.
  12. Efron, Bradley (2010). Large-Scale Inference. Cambridge University Press. ISBN 978-0-521-19249-1.
  13. Efron, B. (2008). "Microarrays, empirical Bayes and the two groups model". Statistical Science 23: 1–22. doi:10.1214/07-STS236.
  14. Abramovich, F.; Benjamini, Y.; Donoho, D.; Johnstone, I. M. (2006). "Adapting to unknown sparsity by controlling the false discovery rate". Annals of Statistics 34 (2): 584–653. arXiv:math/0505374. Bibcode:2005math......5374A. doi:10.1214/009053606000000074.
  15. Donoho, D.; Jin, J. (2006). "Asymptotic minimaxity of false discovery rate thresholding for sparse exponential data". Annals of Statistics 34 (6): 2980–3018. arXiv:math/0602311. Bibcode:2006math......2311D. doi:10.1214/009053606000000920.
  16. Benjamini, Y.; Gavrilov, Y. (2009). "A simple forward selection procedure based on false discovery rate control". Annals of Applied Statistics 3 (1): 179–198. arXiv:0905.2819. Bibcode:2009arXiv0905.2819B. doi:10.1214/08-AOAS194.
  17. Donoho, D.; Jin, J. (2004). "Higher criticism for detecting sparse heterogeneous mixtures". Annals of Statistics 32 (3): 962–994. arXiv:math/0410072. Bibcode:2004math.....10072D. doi:10.1214/009053604000000265.
  18. Benjamini, Y.; Yekutieli, D. (2005). "False discovery rate controlling confidence intervals for selected parameters". Journal of the American Statistical Association 100 (469): 71–80. doi:10.1198/016214504000001907.
  19. Storey, J. D.; Taylor, J. E.; Siegmund, D. (2004). "Strong control, conservative point estimation and simultaneous conservative consistency of false discovery rates: A unified approach". Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66: 187. doi:10.1111/j.1467-9868.2004.00439.x.
  20. Benjamini, Y.; Krieger, A. M.; Yekutieli, D. (2006). "Adaptive linear step-up procedures that control the false discovery rate". Biometrika 93 (3): 491. doi:10.1093/biomet/93.3.491.
  21. Gavrilov, Y.; Benjamini, Y.; Sarkar, S. K. (2009). "An adaptive step-down procedure with proven FDR control under independence". The Annals of Statistics 37 (2): 619. doi:10.1214/07-AOS586.
  22. Blanchard, G.; Roquain, E. (2008). "Two simple sufficient conditions for FDR control". Electronic Journal of Statistics 2: 963. doi:10.1214/08-EJS180.
  23. Simes, R. J. (1986). "An improved Bonferroni procedure for multiple tests of significance". Biometrika 73 (3): 751–754. doi:10.1093/biomet/73.3.751.
  24. Hommel, G. (1988). "A stagewise rejective multiple test procedure based on a modified Bonferroni test". Biometrika 75 (2): 383. doi:10.1093/biomet/75.2.383.
  25. Hochberg, Yosef (1988). "A Sharper Bonferroni Procedure for Multiple Tests of Significance" (PDF). Biometrika 75 (4): 800–802. doi:10.1093/biomet/75.4.800.
  26. Yekutieli, D.; Benjamini, Y. (1999). "Resampling based False Discovery Rate controlling procedure for dependent test statistics". Journal of Statistical Planning and Inference 82: 171–196. doi:10.1016/S0378-3758(99)00041-5.
  27. van der Laan, M. J. and Dudoit, S. (2007). Multiple Testing Procedures with Applications to Genomics. New York: Springer.
  28. Storey, John D. (2002). "A direct approach to false discovery rates" (PDF). Journal of the Royal Statistical Society, Series B 64 (3): 479–498. doi:10.1111/1467-9868.00346.
  29. Zhao, Z.; Gene Hwang, J. T. (2012). "Empirical Bayes false coverage rate controlling confidence intervals". Journal of the Royal Statistical Society: Series B (Statistical Methodology). doi:10.1111/j.1467-9868.2012.01033.x.
  30. Sarkar, Sanat K. (2007). "Stepup procedures controlling generalized FWER and generalized FDR". The Annals of Statistics: 2405–2420.
  31. Sarkar, Sanat K.; Guo, Wenge (2009). "On a generalized false discovery rate". The Annals of Statistics: 1545–1565.
  32. Benjamini, Y. (2010). "Simultaneous and selective inference: Current successes and future challenges". Biometrical Journal 52 (6): 708–721. doi:10.1002/bimj.200900299. PMID 21154895.
