False positive rate

In statistics, when performing multiple comparisons, the term false positive ratio, also known as the false alarm ratio, usually refers to the probability of falsely rejecting the null hypothesis for a particular test.

The false positive rate (or "false alarm rate") usually refers to the expectancy of the false positive ratio.

Definition

The false positive rate is \mathrm{FPR} = \frac{FP}{FP + TN},

where FP is the number of false positives and TN is the number of true negatives (so FP + TN is the total number of actual negatives).
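The definition above can be sketched directly; the counts used here are hypothetical, chosen only for illustration:

```python
def false_positive_rate(fp, tn):
    """False positive rate: FP / (FP + TN)."""
    return fp / (fp + tn)

# Hypothetical screening result: among 100 actual negatives,
# 8 are flagged (false positives) and 92 are not (true negatives).
print(false_positive_rate(8, 92))  # 0.08
```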

The level of significance that is used to test each hypothesis is set based on the form of inference (simultaneous inference vs. selective inference) and its supporting criteria (for example FWER or FDR), that were pre-determined by the researcher.

When performing multiple comparisons in a statistical framework such as the one above, the false positive ratio (also known as the false alarm ratio, as opposed to the false positive rate / false alarm rate) usually refers to the probability of falsely rejecting the null hypothesis for a particular test. Using the terminology suggested here, it is simply V/m_0.

Since V is a random variable and m_0 is a constant (V \leq m_0), the false positive ratio is also a random variable, ranging between 0 and 1.
The false positive rate (or "false alarm rate") usually refers to the expectancy of the false positive ratio, expressed by E(V/m_0).
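The relationship between the random ratio V/m_0 and its expectation can be checked by simulation. This is a minimal sketch, assuming m_0 independent true null hypotheses, each tested at level alpha, so that under the null each p-value is uniform and a false rejection occurs with probability alpha; the specific values of alpha, m0, and reps are illustrative choices:

```python
import random

random.seed(0)
alpha, m0, reps = 0.05, 1000, 200

ratios = []
for _ in range(reps):
    # Under a true null the p-value is Uniform(0, 1), so each test
    # is falsely rejected with probability alpha.
    v = sum(random.random() < alpha for _ in range(m0))
    ratios.append(v / m0)  # false positive ratio V / m_0 for this replicate

rate = sum(ratios) / reps  # estimates E(V / m_0), the false positive rate
print(round(rate, 3))
```

Each replicate yields a different ratio V/m_0, while the average over replicates settles near alpha, matching E(V/m_0).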

It is worth noting that the two terms ("false positive ratio" / "false positive rate") are somewhat interchangeable in practice. For example, in the referenced article[1] V/m_0 serves as the false positive "rate" rather than as its "ratio".

Classification of multiple hypothesis tests

The following table defines the various errors committed when testing multiple null hypotheses. Suppose we have m null hypotheses, denoted by H_1, H_2, ..., H_m. Using a statistical test, we reject a null hypothesis if the test is declared significant, and do not reject it if the test is non-significant. Summing the test results over the H_i gives the following table and related random variables:

                           Null hypothesis is true (H0)   Alternative hypothesis is true (H1)   Total
Declared significant       V                              S                                     R
Declared non-significant   U                              T                                     m - R
Total                      m_0                            m - m_0                               m
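The cell counts in the table can be tallied from two lists recording, for each hypothesis, whether its null is actually true and whether the test was declared significant. This is a minimal sketch with a hypothetical outcome for m = 6 hypotheses:

```python
def tally(null_is_true, significant):
    """Count the cells of the multiple-testing table."""
    V = sum(t and s for t, s in zip(null_is_true, significant))            # false positives
    S = sum((not t) and s for t, s in zip(null_is_true, significant))      # true positives
    U = sum(t and not s for t, s in zip(null_is_true, significant))        # true negatives
    T = sum((not t) and not s for t, s in zip(null_is_true, significant))  # false negatives
    R = V + S  # total declared significant
    return V, S, U, T, R

# Hypothetical outcome: m_0 = 4 true nulls, R = 3 rejections.
null_is_true = [True, True, True, True, False, False]
significant  = [True, False, False, False, True, True]
print(tally(null_is_true, significant))  # (1, 2, 3, 0, 3)
```

With these counts, the false positive ratio for this outcome is V/m_0 = 1/4.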

The difference between "false positive rate", "type I error rate" and other close terms

While the false positive rate is mathematically equal to the type I error rate, it is viewed as a separate term for the following reasons:

The type I error rate is often associated with the a-priori setting of the significance level by the researcher: the significance level represents an acceptable error rate considering that all null hypotheses are true (the "global null" hypothesis).

As opposed to that, the false positive rate is associated with a post-prior result, which is the expected number of false positives divided by the number of true null hypotheses (m_0) under the real combination of true and non-true null hypotheses (disregarding the "global null" hypothesis). Since the false positive rate is a parameter that is not controlled by the researcher, it cannot be identified with the significance level.

The false positive rate should also not be confused with the familywise error rate, which is defined as \mathrm{FWER} = \Pr(V \ge 1). As the number of tests grows, the familywise error rate usually converges to 1 while the false positive rate remains fixed.
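The divergence between the two quantities is easy to see for independent tests of true nulls, where \Pr(V \ge 1) = 1 - (1 - \alpha)^m. A minimal sketch, with alpha = 0.05 chosen for illustration:

```python
alpha = 0.05
for m in (1, 10, 100, 1000):
    # For m independent tests of true nulls at level alpha:
    fwer = 1 - (1 - alpha) ** m  # Pr(V >= 1), converges to 1 as m grows
    fpr = alpha                  # E(V / m_0) stays fixed at alpha
    print(m, round(fwer, 3), fpr)
```

Already at m = 100 the familywise error rate exceeds 0.99, while the false positive rate is still 0.05.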

Lastly, it is important to note the profound difference between the false positive rate and the false discovery rate: while the former is defined as E(V/m_0), the latter is defined as E(V/R).

References

  1. Burke, Donald; Brundage, John; Redfield, Robert (1988). "Measurement of the False Positive Rate in a Screening Program for Human Immunodeficiency Virus Infections". The New England Journal of Medicine 319: 961–964. doi:10.1056/NEJM198810133191501. PMID 3419477.
This article is issued from Wikipedia - version of Saturday, April 30, 2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.