Validity is one of the most important concepts in survey research, as without validity, you have meaningless results.

Many people's chief concern with validity is whether their survey or scale has it.

According to the American Psychological Association, validity "...refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores." (Standards for Educational and Psychological Testing, 1985, p. 9).

In other words: your findings need to be appropriate, meaningful and useful... they need to be valid.

Face Validity
An item has face validity if it appears to measure the concept that it claims to measure. If items have face validity, respondents are more likely to respond to them seriously. The assessment of face validity is very informal (and therefore, my favorite...)

In some situations, we may deliberately construct items that do not have face validity... if we want to develop a scale that estimates someone's tendency to lie or to give responses s/he thinks the researcher is looking for, we use items that appear to measure something else.

  • not what a test actually measures
  • what a test superficially appears to measure
  • important to appear appropriate, serious
  • e.g., arithmetic test for naval recruits - use naval terminology
  • without this - poor cooperation

Content Validity
Content validity determines if the survey items are representative of the topic being measured. As a first step, you must clearly state what you are interested in measuring, then you must judge whether the items are representative of the topic.

We can assess content validity by listing the learning objectives and making sure that a large number are represented on the test and that some are not over represented. A panel of experts can also be formed to assess content validity.

  • does test cover a representative sample of the behavior domain to be measured
  • e.g., used to validate achievement tests
  • how well an individual has mastered a specific skill
  • e.g., test of multiplication ability should contain multiplication
  • often measure aspects of skill, e.g., multiple-choice spelling test
  • In A Nutshell:
    1. Define what you are interested in measuring, for example 'mood.'
    2. Choose the specific aspects which require feedback, for example, 'Depression Frequency.'
    3. Judge whether your items relate to the definitions you developed and adequately cover all aspects.
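
The three steps above can be sketched as a quick coverage check. This is only an illustration: the items, the aspect labels, and the over-representation threshold are all hypothetical, and in practice the judgment in step 3 is made by the researcher or a panel of experts, not by code.

```python
from collections import Counter

# Hypothetical mapping: each survey item tagged with the aspect it covers (step 3).
item_aspects = {
    "I felt sad most of the day":     "Depression Frequency",
    "I felt down more days than not": "Depression Frequency",
    "My sadness was overwhelming":    "Depression Intensity",
    "I lost interest in activities":  "Anhedonia",
}

# Aspects chosen in step 2 for the construct defined in step 1 ('mood').
required_aspects = {"Depression Frequency", "Depression Intensity", "Anhedonia"}

counts = Counter(item_aspects.values())
missing = required_aspects - counts.keys()
# Arbitrary threshold: flag any aspect taking more than half of all items.
over = [a for a, n in counts.items() if n > len(item_aspects) / 2]

print("missing aspects:", sorted(missing))
print("over-represented:", over)
```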

Example: an arithmetic ability measure that only includes multiplication, or a maths ability measure that includes only arithmetic and no algebra. Note that we must have a theoretical or practical idea of what it is we want to measure (the construct).

Example: Being a good psychologist - knowledge of theories, interpersonal performance, ethical, and so on.

Criterion-Related Validation
Criterion-related validation relies on statistical analyses rather than on the judgments used in content validation. It involves calculating a 'validity coefficient' by correlating the survey items with another measure (the criterion) already known to reflect the attribute.

Criterion-related validity concerns the extent to which the current test or scale is associated with some other measure of the same concept.

Estimate the Pearson product-moment correlation between two continuous measures. To be considered valid, a new scale should have a correlation of at least .6 with an existing scale.
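
A minimal sketch of computing such a validity coefficient. The scores below are made up for illustration; the .6 cutoff is the rule of thumb stated above, not a statistical law.

```python
import numpy as np

# Hypothetical scores: each position is one respondent measured on both scales.
new_scale      = np.array([12, 15, 9, 20, 17, 11, 14, 18])
existing_scale = np.array([30, 34, 25, 44, 40, 28, 33, 41])

# Validity coefficient: Pearson product-moment correlation between the measures.
r = np.corrcoef(new_scale, existing_scale)[0, 1]
print(f"validity coefficient r = {r:.2f}")

# Rule of thumb from the text: a new scale should correlate at least .6.
print("acceptable" if r >= 0.6 else "needs revision")
```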

  • indicate the effectiveness of test in predicting behavior in specified situations
  • performance is checked against a criterion
  • often complex
  • difficult to ensure they cover the scope of the survey
  • often the most difficult part of the process
  • if criterion is not well developed - limited confidence in validity of items.
  • therefore, both the criterion and the items should be developed carefully.
  • ex: test of mechanical aptitude... job performance as machinist
  • ex: SAT checked against college grades
  • ex: test of depression... others' ratings of mood

Concurrent Validation:
After we develop a new scale, we may want to estimate the correlation between the new scale and an accepted existing scale.
  • short time delay between test and criterion
  • hence, "concurrent" time frame
  • often impractical to extend procedures over required time frame
  • e.g., wait until graduation from college
  • so, administer concurrently to group of college grads
  • psychological testing = diagnosis of existing status
  • "Is Rich neurotic?"
Predictive Validation
Predictive validity assesses the extent to which the scale or test predicts future behavior or performance.
  • increased delay between testing and criterion
  • refers to prediction from test to any criterion situation
  • relevant to recruitment of applicants, personnel, student admission
  • utility of training programs
  • screen applicants likely to develop emotional disorders
  • identify those likely to benefit from therapy or technique
  • "Is Deeter likely to become neurotic?"

Example: The Scholastic Aptitude Test (SAT) and Graduate Record Examination (GRE) are examples of tests that are used to predict future performance. The criterion is the degree of success in college or graduate school.
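
A sketch of predictive validation under hypothetical data: test scores collected at admission, paired with a criterion (first-year GPA) observed later. The correlation is the predictive validity coefficient, and a least-squares line shows how the test could be used to predict the criterion for a new applicant.

```python
import numpy as np

# Hypothetical data: admission test scores, and GPA observed a year later.
test_scores = np.array([1100, 1250, 1320, 1000, 1400, 1180, 1270])
later_gpa   = np.array([2.9,  3.3,  3.5,  2.6,  3.7,  3.1,  3.4])

# Predictive validity coefficient: correlation of the test with the later criterion.
r = np.corrcoef(test_scores, later_gpa)[0, 1]

# Least-squares line lets us predict the criterion for a new applicant.
slope, intercept = np.polyfit(test_scores, later_gpa, 1)
predicted = slope * 1200 + intercept
print(f"r = {r:.2f}, predicted GPA for a score of 1200: {predicted:.2f}")
```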

Construct Validation
Construct validation attempts to understand what is being measured by examining the relationship between constructs (an abstract idea used as an explanatory concept--such as mood or happiness).
  • involves understanding why items are related by examining the underlying concepts.
  • hypothesizing a relationship then collecting data to test the hypotheses.
  • or, extent a test may be said to measure a theoretical construct or trait
  • e.g., intelligence, mechanical ability, anxiety
  • derived from established relationships among behavioral measures
  • "age differentiation"... do scores show change with age?
  • correlations with other tests
  • factor analysis
  • internal consistency - identify "key items"
  • Convergent and Discriminant Validity
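
One of the internal-consistency checks listed above can be sketched with Cronbach's alpha, a common internal-consistency statistic (the notes do not name it, so take this as one possible choice). The Likert responses below are hypothetical: rows are respondents, columns are items intended to tap the same construct.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency estimate; rows = respondents, columns = items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses: 6 respondents x 4 items on one construct.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Items that consistently move together across respondents (as here) push alpha toward 1; items measuring unrelated things pull it toward 0.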

Is success in college accurately measured by GPA? This claim is contestable, as there are arguably other ways to be a success in college -- excelling in extracurricular activities or in sports -- or you may have taken very hard classes and as a result have a lower GPA than someone who took easy classes. For there to be construct validity, the researcher must carefully examine assumptions about the concept being measured, and the fitness of the variables chosen to measure it.

Threats to Validity