Likelihood ratios in diagnostic testing

Not to be confused with Likelihood-ratio test.

In evidence-based medicine, likelihood ratios are used for assessing the value of performing a diagnostic test. They use the sensitivity and specificity of the test to determine whether a test result usefully changes the probability that a condition (such as a disease state) exists. The first description of the use of likelihood ratios for decision rules was made at a symposium on information theory in 1954.[1] In medicine, likelihood ratios were introduced between 1975 and 1980.[2][3][4]

Calculation

Two versions of the likelihood ratio exist, one for positive and one for negative test results. Respectively, they are known as the positive likelihood ratio (LR+, likelihood ratio positive, likelihood ratio for positive results) and negative likelihood ratio (LR–, likelihood ratio negative, likelihood ratio for negative results).

The positive likelihood ratio is calculated as

 LR+ = \frac{\text{sensitivity}}{1 - \text{specificity}}

which is equivalent to

 LR+ = \frac{\Pr({T+}|D+)}{\Pr({T+}|D-)}

or "the probability of a person who has the disease testing positive divided by the probability of a person who does not have the disease testing positive." Here "T+" or "T" denote that the result of the test is positive or negative, respectively. Likewise, "D+" or "D" denote that the disease is present or absent, respectively. So "true positives" are those that test positive (T+) and have the disease (D+), and "false positives" are those that test positive (T+) but do not have the disease (D).

The negative likelihood ratio is calculated as[5]

 LR- = \frac{1 - \text{sensitivity}}{\text{specificity}}

which is equivalent to[5]

 LR- = \frac{\Pr({T-}|D+)}{\Pr({T-}|D-)}

or "the probability of a person who has the disease testing negative divided by the probability of a person who does not have the disease testing negative."

The calculation of likelihood ratios for tests with continuous values or more than two outcomes is similar to the calculation for dichotomous outcomes: a separate likelihood ratio is calculated for each level of the test result, giving what are called interval- or stratum-specific likelihood ratios.[6]
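
As an illustration only, with made-up counts for a hypothetical three-level test result, the Python sketch below computes interval-specific likelihood ratios: each stratum's LR is the proportion of diseased patients whose result falls in that stratum divided by the corresponding proportion among non-diseased patients.

```python
# Hypothetical counts for a test reported in three result intervals
# (e.g. low / intermediate / high levels of a marker), split by disease status.
diseased     = {"low": 5,  "intermediate": 25, "high": 70}   # 100 patients with the disease
non_diseased = {"low": 60, "intermediate": 30, "high": 10}   # 100 patients without the disease

total_d  = sum(diseased.values())
total_nd = sum(non_diseased.values())

# Interval-specific LR = P(result in interval | D+) / P(result in interval | D-)
interval_lr = {
    interval: (diseased[interval] / total_d) / (non_diseased[interval] / total_nd)
    for interval in diseased
}
print(interval_lr)  # {'low': ~0.08, 'intermediate': ~0.83, 'high': 7.0}
```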

The pretest odds of a particular diagnosis, multiplied by the likelihood ratio, determines the post-test odds. This calculation is based on Bayes' theorem. (Note that odds can be calculated from, and then converted to, probability.)
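
For example, with purely hypothetical figures: a pre-test probability of 25% and a test with LR+ = 6 give

 \text{pre-test odds} = \frac{0.25}{1 - 0.25} = \frac{1}{3}

 \text{post-test odds} = \frac{1}{3} \times 6 = 2

 \text{post-test probability} = \frac{2}{2 + 1} \approx 67\%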

Application to medicine

A likelihood ratio of greater than 1 indicates the test result is associated with the disease. A likelihood ratio less than 1 indicates that the result is associated with absence of the disease. Tests where the likelihood ratios lie close to 1 have little practical significance as the post-test probability (odds) is little different from the pre-test probability. In summary, the pre-test probability refers to the chance that an individual has a disorder or condition prior to the use of a diagnostic test. It allows the clinician to better interpret the results of the diagnostic test and helps to predict the likelihood of a true positive (T+) result.[7]

Research suggests that physicians rarely make these calculations in practice, however,[8] and when they do, they often make errors.[9] A randomized controlled trial that compared how well physicians interpreted diagnostic tests presented as sensitivity and specificity, a likelihood ratio, or an inexact graphic of the likelihood ratio found no difference between the three modes in interpretation of test results.[10]

Easy Estimation Table

Use this table to estimate how the likelihood ratio changes the probability without needing a calculator.

Likelihood Ratio    Approximate* Change in Probability[11]    Effect on Post-test Probability of Disease[12]

Values between 0 and 1 decrease the probability of disease
0.1                 −45%                                      Large decrease
0.2                 −30%                                      Moderate decrease
0.5                 −15%                                      Slight decrease
1                     0%                                      None

Values greater than 1 increase the probability of disease
1                    +0%                                      None
2                   +15%                                      Slight increase
5                   +30%                                      Moderate increase
10                  +45%                                      Large increase

*These estimates are accurate to within 10% of the calculated answer for all pretest probabilities between 10% and 90%. The average error is only 4%.

An easy way to recall this is by simply remembering that the three specific LRs—2, 5, and 10—correspond with the first three multiples of 15 (i.e., 15, 30, and 45). An LR of 2 increases probability 15%, one of 5 increases it 30%, and one of 10 increases it 45%. For those LRs between 0 and 1, you can simply invert 2, 5, and 10 (i.e., 1/2 = 0.5, 1/5 = 0.2, 1/10 = 0.1). For any LR in between, the percent change can be estimated.

These estimates are independent of the pretest probability and are accurate as long as the pretest probability lies between 10% and 90%. Pre-test probabilities above 90% or below 10% usually indicate diagnostic certainty for most clinical problems, making it unnecessary to order further tests (and apply additional LRs).
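
The accuracy claim can be checked numerically. The following Python sketch (an illustration, not part of the cited work) compares the rule-of-thumb additive change against the exact calculation via Bayes' theorem for pre-test probabilities between 10% and 90%:

```python
# Compare the rule-of-thumb additive change against the exact Bayes calculation
# for pre-test probabilities between 10% and 90% (illustrative grid).
approx_change = {0.1: -0.45, 0.2: -0.30, 0.5: -0.15,
                 2: 0.15, 5: 0.30, 10: 0.45}

def exact_post_test_probability(pre_test_prob: float, lr: float) -> float:
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

worst_error = 0.0
for lr, change in approx_change.items():
    for pre in [p / 100 for p in range(10, 91, 5)]:
        estimate = min(max(pre + change, 0.0), 1.0)   # clip the estimate to [0, 1]
        exact = exact_post_test_probability(pre, lr)
        worst_error = max(worst_error, abs(estimate - exact))

print(f"largest discrepancy: {worst_error:.2f}")  # ~0.08 on this grid, within the 10-point bound
```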

Bedside Estimation Example

  1. Select your patient population, then determine the pretest probability of the condition. For example, if about 2 out of every 5 patients with abdominal distension have ascites, then the pretest probability is 40%.
  2. Select your test and look up its likelihood ratio. The physical exam finding of bulging flanks has a positive likelihood ratio of 2.0 for ascites.
  3. Estimate the change in probability based on the table. A likelihood ratio of 2.0 corresponds to an increase in probability of approximately +15%.
  4. Calculate the probability of the patient having the disease. Bulging flanks therefore increases the probability of ascites from 40% to about 55% (i.e., 40% + 15% = 55%, which is within 2% of the exact probability of 57%; the exact figure is derived below).
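
For comparison, the exact figure quoted in step 4 follows from the odds form of Bayes' theorem:

 \text{pre-test odds} = \frac{0.40}{1 - 0.40} \approx 0.67

 \text{post-test odds} = 0.67 \times 2.0 \approx 1.33

 \text{post-test probability} = \frac{1.33}{1.33 + 1} \approx 57\%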

Calculation Example

A medical example is the likelihood that a given test result would be expected in a patient with a certain disorder compared to the likelihood that the same result would occur in a patient without the target disorder.

Some sources distinguish between LR+ and LR−.[13] A worked example is shown below.

A worked example
A diagnostic test with sensitivity 67% and specificity 91% is applied to 2030 people to look for a disorder with a population prevalence of 1.48% (bowel cancer, as confirmed on endoscopy).

Fecal occult blood screen test outcome:

                           Condition positive           Condition negative
Test outcome positive      True positive (TP) = 20      False positive (FP) = 180
Test outcome negative      False negative (FN) = 10     True negative (TN) = 1820

Positive predictive value = TP / (TP + FP) = 20 / (20 + 180) = 10%
Negative predictive value = TN / (FN + TN) = 1820 / (10 + 1820) ≈ 99.5%
Sensitivity = TP / (TP + FN) = 20 / (20 + 10) ≈ 67%
Specificity = TN / (FP + TN) = 1820 / (180 + 1820) = 91%

Related calculations

With large numbers of false positives and few false negatives, a positive screen test is in itself poor at confirming the disorder (PPV = 10%), and further investigations must be undertaken; the test did, however, correctly identify 66.7% of all cases (the sensitivity). As a screening test, a negative result is very good at reassuring that a patient does not have the disorder (NPV = 99.5%), and this initial screen correctly identifies 91% of those who do not have cancer (the specificity).
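
For reference, the figures above can be reproduced from the four cell counts. The Python sketch below is illustrative only; it also derives the two likelihood ratios, which are used again in the estimation section further down:

```python
# Cell counts from the worked example above.
TP, FP, FN, TN = 20, 180, 10, 1820

sensitivity = TP / (TP + FN)           # 20 / 30     ~ 0.667
specificity = TN / (FP + TN)           # 1820 / 2000 = 0.91
ppv = TP / (TP + FP)                   # 20 / 200    = 0.10
npv = TN / (FN + TN)                   # 1820 / 1830 ~ 0.995

lr_plus = sensitivity / (1 - specificity)    # ~7.4
lr_minus = (1 - sensitivity) / specificity   # ~0.37

print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")
print(f"PPV={ppv:.3f}  NPV={npv:.3f}  LR+={lr_plus:.2f}  LR-={lr_minus:.2f}")
```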

Confidence intervals for all the predictive parameters involved can be calculated, giving the range of values within which the true value lies at a given confidence level (e.g. 95%).[14]
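
The source does not specify a method; one common choice for a likelihood ratio is a normal approximation on the log scale (the delta method). The Python sketch below applies it to LR+ from the worked example and should be read as an illustration of the idea rather than the method used by the cited calculator:

```python
import math

# Approximate 95% confidence interval for LR+ via a normal approximation on the
# log scale (delta method). Cell counts are taken from the worked example above.
TP, FP, FN, TN = 20, 180, 10, 1820
z = 1.96  # two-sided 95%

lr_plus = (TP / (TP + FN)) / (FP / (FP + TN))                  # ~7.4
se_log_lr = math.sqrt(1/TP - 1/(TP + FN) + 1/FP - 1/(FP + TN))

lower = math.exp(math.log(lr_plus) - z * se_log_lr)
upper = math.exp(math.log(lr_plus) + z * se_log_lr)
print(f"LR+ = {lr_plus:.2f}, 95% CI roughly ({lower:.1f}, {upper:.1f})")  # about 5.5 to 9.9
```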

Estimation of pre- and post-test probability

Further information: Pre- and post-test probability

The likelihood ratio of a test provides a way to estimate the pre- and post-test probabilities of having a condition.

With the pre-test probability and the likelihood ratio given, the post-test probability can be calculated in the following three steps:[15]

 \text{pre-test odds} = \frac{\text{pre-test probability}}{1 - \text{pre-test probability}}

 \text{post-test odds} = \text{pre-test odds} \times \text{likelihood ratio}

 \text{post-test probability} = \frac{\text{post-test odds}}{\text{post-test odds} + 1}

In the equations above, the positive post-test probability is calculated using the likelihood ratio positive, and the negative post-test probability is calculated using the likelihood ratio negative.

Alternatively, post-test probability can be calculated directly from the pre-test probability and the likelihood ratio using the equation:

 \text{post-test probability} = \frac{\text{pre-test probability} \times LR}{1 - \text{pre-test probability} + \text{pre-test probability} \times LR}

In fact, post-test probability, as estimated from the likelihood ratio and pre-test probability, is generally more accurate than an estimate based on the positive predictive value of the test whenever the tested individual's pre-test probability differs from the prevalence of the condition in the population used to derive that predictive value.
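
A compact Python sketch of both routes (illustrative; the function names are invented here). The two should agree up to floating-point error:

```python
def post_test_probability_via_odds(pre_test_prob: float, lr: float) -> float:
    """Three-step route: probability -> odds, multiply by LR, odds -> probability."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

def post_test_probability_direct(pre_test_prob: float, lr: float) -> float:
    """Single-formula route: P' = P * LR / (1 - P + P * LR)."""
    return (pre_test_prob * lr) / (1.0 - pre_test_prob + pre_test_prob * lr)

# Pre-test probability of 1.48% (the prevalence in the worked example) combined
# with LR+ ~ 7.4 gives a post-test probability of about 10%, matching the PPV.
p, lr = 0.0148, 7.4
print(post_test_probability_via_odds(p, lr))   # ~0.10
print(post_test_probability_direct(p, lr))     # ~0.10
```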

Example

Taking the medical example from above (20 true positives, 10 false negatives, and 2030 total patients), the positive pre-test probability is calculated as:

 \text{pre-test probability} = \frac{20 + 10}{2030} \approx 0.0148

 \text{pre-test odds} = \frac{0.0148}{1 - 0.0148} \approx 0.015

 LR+ = \frac{0.67}{1 - 0.91} \approx 7.4

 \text{positive post-test odds} = 0.015 \times 7.4 \approx 0.111

 \text{positive post-test probability} = \frac{0.111}{0.111 + 1} \approx 0.10

As demonstrated, the positive post-test probability is numerically equal to the positive predictive value (10%); likewise, the negative post-test probability is numerically equal to (1 − negative predictive value).

References

  1. Swets JA. (1973). "The relative operating characteristic in Psychology". Science 182 (4116): 990–1000. doi:10.1126/science.182.4116.990. PMID 17833780.
  2. Pauker SG, Kassirer JP. (1975). "Therapeutic Decision Making: A Cost-Benefit Analysis". NEJM 293 (5): 229–34. doi:10.1056/NEJM197507312930505. PMID 1143303.
  3. Thornbury JR, Fryback DG, Edwards W. (1975). "Likelihood ratios as a measure of the diagnostic usefulness of excretory urogram information.". Radiology 114 (3): 561–5. doi:10.1148/114.3.561. PMID 1118556.
  4. van der Helm HJ, Hische EA. (1979). "Application of Bayes's theorem to results of quantitative clinical chemical determinations.". Clin Chem 25 (6): 985–8. PMID 445835.
  5. Gardner, M.; Altman, Douglas G. (2000). Statistics with confidence: confidence intervals and statistical guidelines. London: BMJ Books. ISBN 0-7279-1375-1.
  6. Brown MD, Reeves MJ. (2003). "Evidence-based emergency medicine/skills for evidence-based emergency care. Interval likelihood ratios: another advantage for the evidence-based diagnostician". Ann Emerg Med 42 (2): 292–297. doi:10.1067/mem.2003.274. PMID 12883521.
  7. Harrell F, Califf R, Pryor D, Lee K, Rosati R (1982). "Evaluating the Yield of Medical Tests". JAMA 247 (18): 2543–2546. doi:10.1001/jama.247.18.2543. PMID 7069920.
  8. Reid MC, Lane DA, Feinstein AR (1998). "Academic calculations versus clinical judgments: practicing physicians’ use of quantitative measures of test accuracy". Am. J. Med. 104 (4): 374–80. doi:10.1016/S0002-9343(98)00054-0. PMID 9576412.
  9. Steurer J, Fischer JE, Bachmann LM, Koller M, ter Riet G (2002). "Communicating accuracy of tests to general practitioners: a controlled study". BMJ 324 (7341): 824–6. doi:10.1136/bmj.324.7341.824. PMC 100792. PMID 11934776.
  10. Puhan MA, Steurer J, Bachmann LM, ter Riet G (2005). "A randomized trial of ways to describe test accuracy: the effect on physicians' post-test probability estimates". Ann. Intern. Med. 143 (3): 184–9. doi:10.7326/0003-4819-143-3-200508020-00004. PMID 16061916.
  11. McGee, Steven (2002-08-01). "Simplifying likelihood ratios". Journal of General Internal Medicine 17 (8): 647–650. doi:10.1046/j.1525-1497.2002.10750.x. ISSN 0884-8734. PMC 1495095.
  12. Henderson, Mark C.; Tierney, Lawrence M.; Smetana, Gerald W. (2012). The Patient History (2nd ed.). McGraw-Hill. p. 30. ISBN 978-0-07-162494-7.
  13. "Likelihood ratios". Retrieved 2009-04-04.
  14. Online calculator of confidence intervals for predictive parameters
  15. Likelihood Ratios, from CEBM (Centre for Evidence-Based Medicine). Page last edited: 1 February 2009

External links

Medical Likelihood Ratio Repositories