Results 1 - 5 of 5
1.
Educ Psychol Meas ; 83(6): 1139-1159, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37974658

ABSTRACT

The purpose of this study is to introduce a functional approach for modeling unfolding response data. Functional data analysis (FDA) has been used for examining cumulative item response data, but a functional approach has not been systematically used with unfolding response processes. A brief overview of FDA is presented and illustrated within the context of unfolding data. Seven decision parameters are described that can provide a guide to conducting FDA in this context. These decision parameters are illustrated with real data using two scales that are designed to measure attitude toward capital punishment and attitude toward censorship. The analyses suggest that FDA offers a useful set of tools for examining unfolding response processes.
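To give a flavor of the functional perspective the abstract describes, the sketch below estimates a smooth item response function nonparametrically from person locations and item responses using a Gaussian kernel. The function name, parameters, and smoother choice are illustrative assumptions, not the article's actual procedure.

```python
import numpy as np

def smooth_response_function(theta, responses, grid, bandwidth=0.5):
    """Nadaraya-Watson kernel estimate of an item response function.

    For each evaluation point in `grid`, responses of persons whose
    latent locations `theta` lie nearby receive the most weight,
    yielding a smooth functional estimate of P(response | theta) with
    no parametric (cumulative or unfolding) shape imposed in advance.
    """
    theta = np.asarray(theta, dtype=float)
    responses = np.asarray(responses, dtype=float)
    estimates = []
    for point in grid:
        weights = np.exp(-0.5 * ((theta - point) / bandwidth) ** 2)
        estimates.append(np.sum(weights * responses) / np.sum(weights))
    return np.array(estimates)
```

For an unfolding item, such an estimate would peak near the item's location and fall off on both sides, which is one empirical signature a functional analysis of unfolding data can reveal.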

2.
Appl Psychol Meas ; 43(3): 195-210, 2019 May.
Article in English | MEDLINE | ID: mdl-31019356

ABSTRACT

It is commonly known that respondents exhibit different response styles when responding to Likert-type items. For example, some respondents tend to select the extreme categories (e.g., strongly disagree and strongly agree), whereas others tend to select the middle categories (e.g., disagree, neutral, and agree). Furthermore, some respondents tend to disagree with every item (e.g., strongly disagree and disagree), whereas others tend to agree with every item (e.g., agree and strongly agree). In such cases, fitting standard unfolding item response theory (IRT) models that assume no response style will yield a poor fit and biased parameter estimates. Although there have been attempts to develop dominance IRT models to accommodate various response styles, such models are usually restricted to a specific response style and cannot be used for unfolding data. In this study, a general unfolding IRT model is proposed that can be combined with a softmax function to accommodate various response styles via scoring functions. The parameters of the new model can be estimated using Bayesian Markov chain Monte Carlo algorithms. An empirical data set is used for demonstration purposes, followed by simulation studies that assess the parameter recovery of the new model as well as the consequences of ignoring response styles by fitting standard unfolding IRT models. The results suggest that the new model exhibits good parameter recovery and that parameter estimates are seriously biased when response styles are ignored.
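The general mechanism of combining an unfolding distance term with a softmax over response categories can be sketched as follows. The parameterization here (squared person-item distance scaled per category, plus a content-free style bonus) is an illustrative assumption, not the article's exact model; all names are hypothetical.

```python
import math

def category_probs(theta, delta, scores, style):
    """Illustrative softmax sketch of an unfolding model with response styles.

    scores[k] controls how strongly category k's utility falls off with
    the squared person-item distance (the unfolding part), while style[k]
    is a content-free bonus capturing a preference for category k
    regardless of attitude (e.g., an extreme or middle response style).
    """
    dist2 = (theta - delta) ** 2
    utilities = [-s * dist2 + w for s, w in zip(scores, style)]
    peak = max(utilities)                       # stabilize the softmax
    exps = [math.exp(u - peak) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]
```

Raising a category's style weight inflates its probability at every person-item distance, which is exactly the kind of content-irrelevant variation that a standard unfolding model would instead absorb into biased person and item estimates.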

3.
Educ Psychol Meas ; 79(4): 773-795, 2019 Aug.
Article in English | MEDLINE | ID: mdl-32655183

ABSTRACT

The purpose of this study is to explore the use of unfolding models for evaluating the quality of ratings obtained in rater-mediated assessments. Two different judgmental processes can be used to conceptualize ratings: impersonal judgments and personal preferences. Impersonal judgments are typically expected in rater-mediated assessments, and these ratings reflect a cumulative response process. However, raters may also be influenced by their personal preferences in providing ratings, and these ratings may reflect a noncumulative or unfolding response process. The goal of rater training in rater-mediated assessments is to stress impersonal judgments represented by scoring rubrics and to minimize the personal preferences that may represent construct-irrelevant variance in the assessment system. In this study, we explore the use of unfolding models as a framework for evaluating the quality of ratings in rater-mediated assessments. Data from a large-scale assessment of writing in the United States are used to illustrate our approach. The results suggest that unfolding models offer a useful way to evaluate rater-mediated assessments in order to initially explore the judgmental processes underlying the ratings. The data also indicate that there are significant relationships between some essay features (e.g., word count, syntactic simplicity, word concreteness, and verb cohesion) and essay orderings based on the personal preferences of raters. The implications of unfolding models for theory and practice in rater-mediated assessments are discussed.

4.
Educ Psychol Meas ; 76(6): 1005-1025, 2016 Dec.
Article in English | MEDLINE | ID: mdl-29795898

ABSTRACT

The number of performance assessments continues to increase around the world, and it is important to explore new methods for evaluating the quality of ratings obtained from raters. This study describes an unfolding model for examining rater accuracy. Accuracy is defined as the difference between observed and expert ratings. Dichotomous accuracy ratings (0 = inaccurate, 1 = accurate) are unfolded into three latent categories: inaccurate ratings below the expert ratings, accurate ratings, and inaccurate ratings above the expert ratings. The hyperbolic cosine model (HCM) is used to examine dichotomous accuracy ratings from a statewide writing assessment. This study suggests that the HCM is a promising approach for examining rater accuracy and can provide a useful interpretive framework for evaluating the quality of ratings obtained within the context of rater-mediated assessments.
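The simple hyperbolic cosine model has a compact closed form, sketched below; the exact parameterization used in the study may differ, so treat this as an illustration of the model family rather than the article's implementation.

```python
import math

def hcm_prob(theta, delta, gamma=1.0):
    """Agreement probability under the simple hyperbolic cosine model
    (Andrich & Luo). The probability is largest when the rater location
    theta coincides with the item location delta and is symmetric on
    either side -- the single-peaked shape that lets dichotomous
    accuracy ratings be unfolded into below-expert, accurate, and
    above-expert regions. gamma is the unit parameter that governs the
    maximum probability."""
    return math.exp(gamma) / (math.exp(gamma) + 2.0 * math.cosh(theta - delta))
```

Because cosh is symmetric and minimized at zero, an "accurate" response is most probable at theta = delta and decays identically in both directions, unlike a cumulative model whose probability rises monotonically with theta.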

5.
Appl Psychol Meas ; 40(7): 517-533, 2016 Oct.
Article in English | MEDLINE | ID: mdl-29881067

ABSTRACT

Attitude surveys are widely used in the social sciences. It has been argued that the underlying response process to attitude items may be more aligned with the ideal-point (unfolding) process than with the cumulative (dominance) process, and therefore, unfolding item response theory (IRT) models are more appropriate than dominance IRT models for these surveys. Missing data and don't know (DK) responses are common in attitude surveys, and they may not be ignorable in the likelihood for parameter estimation. Existing unfolding IRT models often treat missing data or DK responses as missing at random. In this study, a new class of unfolding IRT models for nonignorable missing data and DK responses was developed, in which missingness and DK responses are assumed to measure a hierarchy of latent traits that may be correlated with the latent attitude the test is intended to measure. The Bayesian approach with Markov chain Monte Carlo methods was used to estimate the parameters of the new models. Simulation studies demonstrated that the parameters were recovered fairly well and that ignoring nonignorable missingness or DK responses resulted in poor parameter estimates. An empirical example involving a scale measuring religious beliefs about health is given.
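The core idea behind nonignorable missingness can be demonstrated with a small simulation: when the latent propensity to respond is correlated with the latent attitude, the observed responses are no longer a representative sample, so treating omits as missing at random biases the attitude estimate. Everything below (function names, the logistic selection model, the chosen correlation) is an illustrative assumption, not the article's model.

```python
import math
import random

def observe_prob(propensity, item_effort=0.0):
    # Logistic probability that a respondent answers the item at all;
    # `propensity` is a latent response-propensity trait (illustrative).
    return 1.0 / (1.0 + math.exp(-(propensity - item_effort)))

def simulate_person(attitude_mean=0.0, corr=0.8):
    """Draw an (attitude, propensity) pair with correlation `corr`.

    When corr != 0, the chance of responding shares a source of
    variation with the attitude itself, so the missingness mechanism is
    nonignorable: discarding the missing cases selects on the attitude.
    """
    attitude = random.gauss(attitude_mean, 1.0)
    noise = random.gauss(0.0, math.sqrt(1.0 - corr ** 2))
    propensity = corr * attitude + noise
    return attitude, propensity
```

Averaging attitudes only over the persons who happen to respond overstates the population mean here, which is the kind of distortion the hierarchical models in the abstract are designed to correct.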
