
Scoring open-ended questions


If you have a question about this talk, please contact Professor John Rust.

This talk has been canceled/deleted


Scoring open-ended questions almost inevitably requires one or more human raters to grade the responses. Several methods can be found in the literature for measuring ability from responses to open-ended questions, but they either oversimplify (ignoring the effect of human raters) or overspecify (modelling the behaviour of individual raters). In this presentation, a method is proposed that does neither. It is derived from three principles: raters are randomly assigned to person-item combinations; if raters agree, there is no reason to doubt their judgment; and if a first rater gives a positive (negative) rating, the probability that the second rating is also positive (negative) is a monotone increasing (decreasing) function of ability. We discuss and motivate these principles at length and show, informally, how a simple measurement model can be derived from them. The model is illustrated with real data from examinations of Dutch as a foreign language.
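The talk does not publish its equations, so the following is only an illustrative sketch of the three principles, under assumed functional forms: a logistic link for the rating probability and a hypothetical agreement shift `c` that makes the second rater tend to confirm the first. None of these choices should be attributed to the actual model presented in the talk.

```python
import math
import random

# Toy model only. The logistic form and the agreement shift `c` are
# assumptions chosen to satisfy the three stated principles; the
# talk's actual measurement model is not specified in the abstract.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def p_second_positive(theta: float, first_positive: bool, c: float = 1.0) -> float:
    """P(second rating is positive | first rating), for ability theta.

    The shift c > 0 pulls the second rating toward the first, so
    agreement is the expected outcome (principle 2). Because sigmoid
    is increasing, P(second positive | first positive) is monotone
    increasing in theta, and P(second negative | first negative)
    = 1 - sigmoid(theta - c) is monotone decreasing (principle 3).
    """
    shift = c if first_positive else -c
    return sigmoid(theta + shift)

def simulate_pair(theta: float, rng: random.Random, c: float = 1.0):
    """Under random assignment of raters to person-item combinations
    (principle 1), raters are exchangeable, so a scored pair reduces
    to two dependent Bernoulli draws with no rater identities."""
    first = rng.random() < sigmoid(theta)
    second = rng.random() < p_second_positive(theta, first, c)
    return int(first), int(second)

# Monotonicity check across increasing abilities.
for lo, hi in [(-2.0, 0.0), (0.0, 2.0)]:
    # Confirmation of a positive first rating rises with ability...
    assert p_second_positive(lo, True) < p_second_positive(hi, True)
    # ...while confirmation of a negative first rating falls.
    assert (1 - p_second_positive(lo, False)) > (1 - p_second_positive(hi, False))
```

The exchangeability point is what lets a model like this avoid both pitfalls the abstract names: no individual rater parameters are estimated (no overspecification), yet the dependence between the two ratings is still modelled (no oversimplification).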

This talk is part of The Psychometrics Centre series.




© 2006-2022, University of Cambridge.