A criterion-referenced test is one that provides for translating the test score into a statement about the behavior to be expected of a person with that score, or about that person's standing with respect to a specified subject matter. Most tests and quizzes written by school teachers are criterion-referenced tests: the objective is simply to see whether or not the student has learned the material. By contrast, with a norm-referenced test, the translated score tells whether the test-taker did better or worse than other people who took the test. Robert Glaser originally coined both terms.[1]
For example, if the criterion is "Students should be able to correctly add two single-digit numbers," then reasonable test questions might look like "2+3 = ?" or "9+5 = ?" A criterion-referenced test would report the student's performance strictly according to whether or not the individual student correctly answered these questions. A norm-referenced test would report primarily whether this student correctly answered more questions compared to other students in the group.
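As an illustration of the two reporting styles described above, here is a minimal sketch in Python; the student names, scores, and function names are hypothetical and not drawn from any testing standard:

```python
def criterion_referenced_report(score, total_items):
    """Report performance strictly against the criterion: proportion of items correct."""
    return score / total_items

def norm_referenced_report(score, peer_scores):
    """Report performance relative to the group: percent of peers scoring lower."""
    below = sum(1 for s in peer_scores if s < score)
    return 100 * below / len(peer_scores)

# Hypothetical class of five students answering ten single-digit-addition items each.
scores = {"Ana": 9, "Ben": 7, "Cai": 9, "Dee": 4, "Eli": 10}

for name, score in scores.items():
    crit = criterion_referenced_report(score, 10)
    norm = norm_referenced_report(score, list(scores.values()))
    print(f"{name}: answered {crit:.0%} of items correctly; "
          f"scored higher than {norm:.0f}% of the class")
```

The same raw score feeds both reports; only the frame of reference changes.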
Even when testing similar topics, a test which is designed to accurately assess mastery may use different questions than one which is intended to show relative ranking. This is because some questions are better at reflecting actual achievement of students, and some test questions are better at differentiating between the best students and the worst students. (Many questions will do both.) A criterion-referenced test will use questions which were correctly answered by students who know the specific material. A norm-referenced test will use questions which were correctly answered by the "best" students and not correctly answered by the "worst" students.
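To make the difference in item selection concrete, here is a sketch of two classical item statistics, assuming a tiny hypothetical response matrix (rows are students, columns are items, 1 means correct). Neither statistic is prescribed by the text above; they are simply one common way of quantifying how hard an item is and how well it separates high and low scorers:

```python
# Hypothetical response matrix: 4 students x 4 items (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 1, 0],  # strongest student
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 0],  # weakest student
]

n_students = len(responses)
n_items = len(responses[0])
totals = [sum(row) for row in responses]  # each student's total score

for item in range(n_items):
    correct = [row[item] for row in responses]
    difficulty = sum(correct) / n_students  # proportion answering this item correctly

    # Simple upper-lower discrimination: proportion correct in the top half
    # of scorers minus the proportion correct in the bottom half.
    order = sorted(range(n_students), key=lambda i: totals[i], reverse=True)
    half = n_students // 2
    upper = sum(correct[i] for i in order[:half]) / half
    lower = sum(correct[i] for i in order[-half:]) / half
    discrimination = upper - lower

    print(f"item {item + 1}: difficulty={difficulty:.2f}, discrimination={discrimination:+.2f}")
```

A norm-referenced test builder would tend to favor items with high discrimination, while a criterion-referenced test builder would favor items that examinees who know the specific material answer correctly, even if those items discriminate poorly.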
Some tests can provide useful information about both actual achievement and relative ranking. The ACT, for example, provides both a ranking and an indication of the level of performance considered necessary for likely success in college.[2] Some argue that the term "criterion-referenced test" is a misnomer, since the term can describe the interpretation of a score as well as the test itself.[3] In this example, the same score on the ACT can be interpreted in either a norm-referenced or a criterion-referenced manner.
Criterion-referenced testing was a major focus of psychometric research in the 1970s.[4]
Definition of criterion
A common misunderstanding regarding the term is the meaning of criterion. Many, if not most, criterion-referenced tests involve a cutscore, where the examinee passes if their score exceeds the cutscore and fails if it does not (often called a mastery test). The criterion is not the cutscore; the criterion is the domain of subject matter that the test is designed to assess. For example, the criterion may be "Students should be able to correctly add two single-digit numbers," and the cutscore may be that students should correctly answer a minimum of 80% of the questions to pass.
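A minimal sketch of this distinction, assuming the hypothetical 80% cutscore above: the criterion itself (adding two single-digit numbers) lives in the test content and its interpretation, while the cutscore is only the decision rule applied to the score.

```python
CUTSCORE = 0.80  # assumed passing threshold; this is the decision rule, not the criterion

def mastery_decision(correct, total):
    """Return a pass/fail decision plus the proportion of the domain sample answered correctly."""
    proportion = correct / total
    return ("pass" if proportion >= CUTSCORE else "fail"), proportion

decision, proportion = mastery_decision(correct=8, total=10)
print(f"Answered {proportion:.0%} of single-digit addition items correctly: {decision}")
```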
The criterion-referenced interpretation of a test score identifies its relationship to the subject matter. In the case of a mastery test, this means identifying whether the examinee has "mastered" a specified level of the subject matter by comparing their score to the cutscore. However, not all criterion-referenced tests have a cutscore; the score can simply describe a person's standing on the subject domain.[5] Again, the ACT is an example of this: there is no cutscore, it is simply an assessment of the student's knowledge of high-school-level subject matter.
Because of this common misunderstanding, criterion-referenced tests have also been called standards-based assessments by some education agencies,[6] as students are assessed with regard to standards that define what they "should" know, as defined by the state.[7]
Validating criterion-referenced tests
To validate a norm-referenced test, the standard procedure is to examine correlations with external scales of similar content or to compare the performance of different groups. With criterion-referenced tests, it is more important to know how accurate the criterion hit rates are, what Retzlaff and Gibertini (1994) termed the operating characteristics of the test. They argued that this type of test should be validated against five criteria (a sketch of how these can be computed follows the list):
- Prevalence
- Sensitivity
- Specificity
- Positive predictive power
- Negative predictive power
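The article does not spell out how these five quantities are computed, but they are standard classification statistics. As a minimal sketch, assuming each examinee's pass/fail decision can be compared against an external indication of "true" mastery (for instance, a longer gold-standard assessment), they could be calculated as follows; the sample data and variable names are hypothetical:

```python
def operating_characteristics(decisions):
    """decisions: list of (passed_test: bool, truly_master: bool) pairs."""
    tp = sum(1 for passed, master in decisions if passed and master)
    fp = sum(1 for passed, master in decisions if passed and not master)
    fn = sum(1 for passed, master in decisions if not passed and master)
    tn = sum(1 for passed, master in decisions if not passed and not master)
    n = tp + fp + fn + tn
    return {
        "prevalence": (tp + fn) / n,                  # proportion of true masters in the sample
        "sensitivity": tp / (tp + fn),                # true masters the test passes
        "specificity": tn / (tn + fp),                # true non-masters the test fails
        "positive predictive power": tp / (tp + fp),  # passers who are true masters
        "negative predictive power": tn / (tn + fn),  # failers who are true non-masters
    }

# Hypothetical validation sample of 100 examinees.
sample = ([(True, True)] * 40 + [(True, False)] * 5 +
          [(False, True)] * 10 + [(False, False)] * 45)

for name, value in operating_characteristics(sample).items():
    print(f"{name}: {value:.2f}")
```

In this hypothetical sample the test passes 80% of true masters (sensitivity) and fails 90% of non-masters (specificity), which is the kind of evidence Retzlaff and Gibertini argued should anchor validation of a mastery test.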
Alternative views
Many criterion-referenced tests are high-stakes tests, where the results have important implications for the individual examinee. This can also be described as "you lose a lot if you fail to pass."[8] Examples include high school graduation examinations, the Certificate of Initial Mastery, and licensure tests that must be passed in order to work in a profession.
Criterion-referenced tests have been referred to as standards-based assessments by some education agencies,[9] where students are assessed with regard to standards that define what they "should" know, as defined by the state.[10] Some tests have set standards that 50 to 80 percent of students failed at the outset,[11] a higher failure rate than the 50 percent who, by definition, fall below average under norm-referenced grading.
Notes and references
- ↑ Glaser, R. (1963). Instructional technology and the measurement of learning outcomes. American Psychologist, 18, 510-522.
- ↑ Cronbach, L. J. (1970). Essentials of psychological testing (3rd ed.). New York: Harper & Row.
- ↑ Haertel, E. (1985). Construct validity and criterion-referenced testing. Review of Educational Research, 55(1), 23-46.
- ↑ Weiss, D. J., & Davison, M. L. (1981). Test theory and methods. Annual Review of Psychology, 32, 1.
- ↑ QuestionMark Glossary.
- ↑ Assessing the Assessment of Outcomes Based Education by Dr Malcolm Venter. Cape Town, South Africa. "OBE advocates a criterion-based system, which means getting rid of the bell curve, phasing out grade point averages and comparative grading".
- ↑ Homeschool World: "The Education Standards Movement Spells Trouble for Private and Home Schools"
- ↑ Homeschool World: "The Education Standards Movement Spells Trouble for Private and Home Schools"
- ↑ Assessing the Assessment of Outcomes Based Education by Dr Malcolm Venter. Cape Town, South Africa. "OBE advocates a criterion-based system, which means getting rid of the bell curve, phasing out grade point averages and comparative grading".
- ↑ Homeschool World: "The Education Standards Movement Spells Trouble for Private and Home Schools"
- ↑ http://www.goldwaterinstitute.org/article.php?/696.html AIMS 2005: Everyone's Passing, but Is Anyone Learning? by Vicki Murray, Goldwater Institute Today's News, July 14, 2005. "Every year since 1999, the state has lowered AIMS passing scores or made content easier. Despite those efforts, about 60 percent of high school students taking AIMS for the first time failed in 2002, 2003, and 2004."
External links
- A webpage about instruction that discusses assessment
This page uses Creative Commons Licensed content from Wikipedia (view authors).