Internal validity

Internal validity is a property of scientific studies that reflects the extent to which a causal conclusion based on a study is warranted. Such warrant is determined by the degree to which a study minimizes systematic error (or 'bias').

Details

Inferences are said to possess internal validity if a causal relation between two variables is properly demonstrated.[1][2] A causal inference may be based on a relation when three criteria are satisfied:

  1. the "cause" precedes the "effect" in time (temporal precedence),
  2. the "cause" and the "effect" are related (covariation), and
  3. there are no plausible alternative explanations for the observed covariation (nonspuriousness).[2]

In scientific experimental settings, researchers often manipulate a variable (the independent variable) to see what effect it has on a second variable (the dependent variable).[3] For example, a researcher might manipulate the dosage of a particular drug across experimental groups to see what effect it has on health. In this example, the researcher wants to make a causal inference, namely, that different doses of the drug are responsible for observed changes or differences. When the researcher can confidently attribute the observed changes or differences in the dependent variable to the independent variable, and can rule out other explanations (or rival hypotheses), the causal inference is said to be internally valid.[4]

In many cases, however, the magnitude of effects found in the dependent variable may not just depend on variations in the independent variable. Rather, a number of variables or circumstances uncontrolled for (or uncontrollable) may lead to additional or alternative explanations (a) for the effects found and/or (b) for the magnitude of the effects found. Internal validity is therefore a matter of degree rather than of either-or, and that is exactly why research designs other than true experiments may also yield results with a high degree of internal validity.

In order to allow for inferences with a high degree of internal validity, precautions may be taken during the design of the scientific study. As a rule of thumb, conclusions based on correlations or associations allow for lesser degrees of internal validity than conclusions drawn on the basis of direct manipulation of the independent variable. And, when viewed only from the perspective of internal validity, highly controlled true experimental designs (i.e. with random selection, random assignment to either the control or experimental groups, reliable instruments, reliable manipulation processes, and safeguards against confounding factors) may be the "gold standard" of scientific research. By contrast, however, the very strategies employed to control these factors may also limit the generalizability (external validity) of the findings.
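
To illustrate the random-assignment step that underpins such designs, here is a minimal Python sketch; the participant identifiers, group sizes, and seed are hypothetical and not taken from any particular study:

    import random

    def randomly_assign(participants, seed=None):
        # Shuffle a copy of the participant list and split it in half.
        # Random assignment balances known and unknown subject-related
        # variables across groups in expectation, which is what gives a
        # true experiment its high internal validity.
        rng = random.Random(seed)
        shuffled = list(participants)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return shuffled[:half], shuffled[half:]

    # Hypothetical usage with placeholder participant IDs 0..39
    experimental_group, control_group = randomly_assign(range(40), seed=1)
    print(len(experimental_group), len(control_group))  # 20 20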

Factors affecting internal validity

Threats to internal validity

Ambiguous temporal precedence

Lack of clarity about which variable occurred first may yield confusion about which variable is the cause and which is the effect.

Confounding

A major threat to the validity of causal inferences is confounding: changes in the dependent variable may instead be attributable to the existence of, or variation in, a third variable that is related to the manipulated variable. Where such spurious relationships cannot be ruled out, rival hypotheses to the researcher's original causal hypothesis may be developed.
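
As a concrete, entirely hypothetical illustration, the following Python simulation generates a third variable Z that drives both X and Y; even though X has no effect on Y, the two covary noticeably, which is exactly the pattern a confounder can produce (all variable names and effect sizes are made up):

    import random

    random.seed(0)

    # A third variable Z drives both X and Y.
    # X has no effect on Y at all, yet X and Y covary strongly.
    n = 10_000
    z = [random.gauss(0, 1) for _ in range(n)]
    x = [zi + random.gauss(0, 1) for zi in z]      # X is caused by Z only
    y = [2 * zi + random.gauss(0, 1) for zi in z]  # Y is caused by Z only

    def correlation(a, b):
        mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
        cov = sum((ai - mean_a) * (bi - mean_b) for ai, bi in zip(a, b))
        var_a = sum((ai - mean_a) ** 2 for ai in a)
        var_b = sum((bi - mean_b) ** 2 for bi in b)
        return cov / (var_a * var_b) ** 0.5

    print(correlation(x, y))  # roughly 0.63 despite there being no X -> Y effect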

Selection bias

Selection bias refers to the problem that, at pre-test, differences between groups exist that may interact with the independent variable and thus be 'responsible' for the observed outcome. Researchers and participants bring to the experiment a myriad of characteristics, some learned and others inherent: for example, sex, weight, hair, eye, and skin color, personality, mental capabilities, and physical abilities, but also attitudes such as motivation or willingness to participate.

A threat to internal validity arises during the selection step of the research study if subject-related variables are unevenly distributed across the groups. For example, a researcher creates two test groups, an experimental and a control group. The groups differ in the independent variable by design, but they also differ systematically in one or more of the subject-related variables, so either could account for an observed outcome.

History

Events outside of the study/experiment or between repeated measures of the dependent variable may affect participants' responses to experimental procedures. Often, these are large-scale events (natural disaster, political change, etc.) that affect participants' attitudes and behaviors such that it becomes impossible to determine whether any change on the dependent measures is due to the independent variable or to the historical event.

Maturation

Subjects change during the course of the experiment or even between measurements. For example, young children might mature, and their ability to concentrate may change as they grow up. Both permanent changes, such as physical growth, and temporary ones, such as fatigue, provide "natural" alternative explanations; thus, they may change the way a subject would react to the independent variable. Upon completion of the study, the researcher may then not be able to determine whether the observed discrepancy is due to the passage of time or to the independent variable.

Repeated testing (also referred to as testing effects)

Repeatedly measuring the participants may lead to bias. Participants may remember the correct answers or may be conditioned to know that they are being tested. Repeatedly taking (the same or similar) intelligence tests usually leads to score gains; rather than showing that the underlying skills have changed for good, this threat to internal validity provides a good rival hypothesis for such gains.

Instrument change (instrumentality)

The instrument used during the testing process can itself change over the course of the experiment and thereby alter the results. This also refers to observers becoming more practiced or primed, or having unconsciously changed the criteria they use to make judgments. It can likewise be an issue with self-report measures given at different times, in which case the impact may be mitigated through the use of retrospective pretesting. If any instrumentation changes occur, the internal validity of the main conclusion is affected, as alternative explanations are readily available.

Regression toward the mean

This type of error occurs when subjects are selected on the basis of extreme scores (scores far from the mean) on a test. For example, when children with the worst reading scores are selected to participate in a reading course, improvements at the end of the course might be due to regression toward the mean rather than the course's effectiveness. Had the children simply been tested again before the course started, they would likely have obtained better scores anyway. Likewise, extreme outliers on individual scores are more likely to be captured in a single instance of testing and tend to fall back toward the group mean with repeated testing.
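
A small, purely illustrative Python simulation can make this visible. Observed scores are modeled as a stable ability plus measurement noise (all numbers are made up), the lowest pretest scorers are selected, and their mean score rises at posttest even though nothing was done to them:

    import random

    random.seed(0)

    # Each child's observed score is a stable "true ability" plus independent
    # measurement noise on each occasion; there is no intervention at all.
    n = 10_000
    ability  = [random.gauss(100, 10) for _ in range(n)]
    pretest  = [a + random.gauss(0, 10) for a in ability]
    posttest = [a + random.gauss(0, 10) for a in ability]

    # Select the children with the worst pretest scores (bottom 10 percent).
    cutoff = sorted(pretest)[n // 10]
    selected = [i for i in range(n) if pretest[i] <= cutoff]

    pre_mean  = sum(pretest[i] for i in selected) / len(selected)
    post_mean = sum(posttest[i] for i in selected) / len(selected)
    print(pre_mean, post_mean)  # roughly 75 at pretest vs roughly 88 at posttest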

Mortality/differential attrition

Main article: Survivorship bias

This error occurs if inferences are made on the basis of only those participants who took part from the start to the end. However, participants may drop out of the study before completion, possibly even because of the study, programme, or experiment itself. For example, the percentage of group members who had quit smoking at post-test was found to be much higher in a group that had received a quit-smoking training program than in the control group; however, in the experimental group only 60% completed the program. If this attrition is systematically related to any feature of the study, the administration of the independent variable, or the instrumentation, or if dropping out leads to relevant bias between groups, a whole class of alternative explanations becomes possible that could account for the observed differences.
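
A rough, hypothetical calculation shows how a completer-only analysis can inflate an apparent effect; the figures below are made up for illustration and only echo the 60% completion rate mentioned above:

    # Back-of-the-envelope illustration of differential attrition, using
    # made-up numbers consistent with the quit-smoking example above.
    enrolled = 100             # participants who started the training program
    completed = 60             # only 60% finished; the 40 dropouts are unobserved at post-test
    quit_among_completers = 30

    # Naive analysis counts only those who stayed until post-test.
    naive_quit_rate = quit_among_completers / completed  # 0.50

    # If dropping out is related to failing to quit (here: assume no dropout quit),
    # the rate among everyone who started is much lower.
    quit_among_dropouts_assumed = 0
    overall_quit_rate = (quit_among_completers + quit_among_dropouts_assumed) / enrolled  # 0.30

    print(naive_quit_rate, overall_quit_rate)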

Selection-maturation interaction

This occurs when subject-related variables (color of hair, skin color, etc.) and time-related variables (age, physical size, etc.) interact. If a discrepancy between the two groups emerges between testing occasions, it may be due to differences in age between the groups rather than to the independent variable.

Diffusion

If treatment effects spread from treatment groups to control groups, a lack of differences between experimental and control groups may be observed. This does not mean, however, that the independent variable has no effect or that there is no relationship between dependent and independent variable.

Compensatory rivalry/resentful demoralization

Behavior in the control groups may alter as a result of the study. For example, control group members may work extra hard to see that the expected superiority of the experimental group is not demonstrated. Again, this does not mean that the independent variable produced no effect or that there is no relationship between the dependent and independent variable. Conversely, changes in the dependent variable may stem solely from a demoralized control group that works less hard or is less motivated, rather than from the independent variable.

Experimenter bias

Experimenter bias occurs when the individuals conducting an experiment inadvertently affect the outcome by non-consciously behaving differently toward members of the control and experimental groups. The possibility of experimenter bias can be eliminated through the use of double-blind study designs, in which the experimenter is not aware of the condition to which a participant belongs.

Eight of these threats can be remembered with the first-letter mnemonic THIS MESS, which stands for Testing (repeated testing), History, Instrument change, Statistical regression toward the mean, Maturation, Experimental mortality, Selection and Selection interaction.[5]

References

  1. Brewer, M. (2000). Research Design and Issues of Validity. In Reis, H. and Judd, C. (eds.), Handbook of Research Methods in Social and Personality Psychology. Cambridge: Cambridge University Press.
  2. Shadish, W., Cook, T., and Campbell, D. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin.
  3. Levine, G. and Parkinson, S. (1994). Experimental Methods in Psychology. Hillsdale, NJ: Lawrence Erlbaum.
  4. Liebert, R. M. and Liebert, L. L. (1995). Science and Behavior: An Introduction to Methods of Psychological Research. Englewood Cliffs, NJ: Prentice Hall.
  5. Wortman, P. M. (1983). "Evaluation research – A methodological perspective". Annual Review of Psychology 34: 223–260. doi:10.1146/annurev.ps.34.020183.001255.
