When competence tests are administered, subjects frequently omit items. These missing responses threaten accurate estimation of proficiency. Newer model-based approaches aim to account for nonignorable missing-data processes by incorporating a latent missing propensity into the measurement model. Two assumptions are typically made when using these models: (1) the missing propensity is unidimensional, and (2) the missing propensity and the ability are bivariate normally distributed. These assumptions may, however, be violated in real data sets and could thus threaten the validity of this approach. The present study focuses on modeling competencies in various domains, using data from a school sample (N = 15,396)...
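The model class described in this abstract can be sketched as a small simulation: ability (theta) and a latent missing propensity (xi) are drawn bivariate normal, item responses follow a Rasch model on theta, and each item's omission indicator follows a Rasch-type model on xi. All parameter values (correlation, difficulties) are illustrative assumptions, not figures from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption (2) above: ability and missing propensity are bivariate normal.
n_persons, n_items = 1000, 20
rho = 0.4                                  # assumed theta-xi correlation
cov = [[1.0, rho], [rho, 1.0]]
theta, xi = rng.multivariate_normal([0.0, 0.0], cov, size=n_persons).T

b = rng.normal(0.0, 1.0, n_items)          # item difficulties (illustrative)
beta = rng.normal(1.0, 0.5, n_items)       # item "omission difficulties"

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Omission process: item i is omitted with probability sigmoid(xi - beta_i),
# i.e. a unidimensional missing propensity drives all omissions (assumption 1).
omitted = rng.random((n_persons, n_items)) < sigmoid(xi[:, None] - beta)
# Response process: correct with probability sigmoid(theta - b_i) (Rasch model).
correct = rng.random((n_persons, n_items)) < sigmoid(theta[:, None] - b)

# Observed data matrix: omitted cells are coded as missing (NaN).
data = np.where(omitted, np.nan, correct.astype(float))
print(f"omission rate: {np.isnan(data).mean():.2f}")
```

Because theta and xi are correlated, omissions generated this way are nonignorable: treating omitted cells as missing at random (or scoring them as wrong) would bias the ability estimates, which is exactly the problem the latent-propensity models target.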
This study explores the relationship between students’ missing responses on a large-scale assessment...
This paper presents a new two-dimensional Multiple-Choice Model accounting for Omissions (MCMO). Bas...
Examinees differ in how they interact with assessments. In low-stakes large-scale assessments (LSAs)...
Estimation of examinee ability under Item Response Theory is affected by how omitted test items are ...
Item nonresponse in competence tests poses a threat to valid and reliable competence measurement, e...
A central purpose of the field of educational assessment is to estimate the ability of test takers. ...
Large-scale assessments (LSAs), such as the National Assessment of Educational Progress (NAEP), are l...
Unplanned missing responses are common in surveys and tests, including large-scale assessments. There...
Tests administered in studies of student achievement often have a certain amount of not-reached item...
This study investigated the effect on examinees' ability estimates under item response theory (...
Because of response disturbances such as guessing, cheating, or carelessness, item response models o...
Using data from a pilot test of science and math, item difficulties were estimated with a one-parame...
Assessing competencies of students with special educational needs in learning (SEN-L) poses a challe...
This thesis centers around practical applications of psychological, educational, and health assessme...