Results 1 - 20 of 29
1.
Behav Res Methods ; 56(2): 750-764, 2024 Feb.
Article in English | MEDLINE | ID: mdl-36814007

ABSTRACT

Mediation analysis in repeated measures studies can shed light on the mechanisms through which experimental manipulations change the outcome variable. However, the literature on interval estimation for the indirect effect in the 1-1-1 single mediator model is sparse. Most simulation studies to date evaluating mediation analysis in multilevel data considered scenarios that do not match the expected numbers of level 1 and level 2 units typically encountered in experimental studies, and no study to date has compared resampling and Bayesian methods for constructing intervals for the indirect effect in this context. We conducted a simulation study to compare statistical properties of interval estimates of the indirect effect obtained using four bootstrap and two Bayesian methods in the 1-1-1 mediation model with and without random effects. Bayesian credibility intervals had coverage closest to the nominal value and no instances of excessive Type I error rates, but lower power than resampling methods. Findings indicated that the pattern of performance for resampling methods often depended on the presence of random effects. We provide suggestions for selecting an interval estimator for the indirect effect depending on the most important statistical property for a given study, as well as code in R for implementing all methods evaluated in the simulation study. Findings and code from this project will hopefully support the use of mediation analysis in experimental research with repeated measures.
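As a rough illustration of the percentile bootstrap idea the abstract evaluates, the sketch below estimates the indirect effect a*b in a single-level mediation model and builds a percentile interval. This is a simplified single-level example, not the 1-1-1 multilevel model from the study; the simulated data, sample size, and path values are made up.

```python
import random
import statistics as st

def slope(x, y):
    """OLS slope of y regressed on x (single predictor)."""
    mx, my = st.fmean(x), st.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def indirect_effect(x, m, y):
    """Estimate a*b: a is the x->m path, b the m->y path controlling for x."""
    a = slope(x, m)                      # path a
    c = slope(x, y)                      # total effect, used to residualize y
    # Frisch-Waugh: the partial slope of y on m given x equals the slope of
    # the x-residualized y on the x-residualized m
    m_res = [mi - a * xi for mi, xi in zip(m, x)]
    y_res = [yi - c * xi for yi, xi in zip(y, x)]
    return a * slope(m_res, y_res)

def percentile_boot_ci(x, m, y, reps=1000, alpha=0.05, seed=1):
    """Percentile bootstrap CI: resample cases, re-estimate, take quantiles."""
    rng = random.Random(seed)
    n = len(x)
    ests = []
    for _ in range(reps):
        s = [rng.randrange(n) for _ in range(n)]
        ests.append(indirect_effect([x[i] for i in s],
                                    [m[i] for i in s],
                                    [y[i] for i in s]))
    ests.sort()
    return ests[int(reps * alpha / 2)], ests[int(reps * (1 - alpha / 2)) - 1]

# toy data with true a = b = 0.5, so the true indirect effect is 0.25
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]
y = [0.5 * mi + 0.3 * xi + rng.gauss(0, 1) for mi, xi in zip(m, x)]
est = indirect_effect(x, m, y)
lo, hi = percentile_boot_ci(x, m, y)
```

With multilevel data, the resampling methods compared in the article would instead resample level-2 units and refit a mixed model each time.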


Subject(s)
Mediation Analysis , Models, Statistical , Humans , Bayes Theorem , Computer Simulation , Multilevel Analysis
2.
Qual Life Res ; 32(11): 3247-3255, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37420022

ABSTRACT

PURPOSE: Much research is still needed to compare traditional latent variable models such as confirmatory factor analysis (CFA) to emerging psychometric models such as the Gaussian graphical model (GGM). Previous comparisons of GGM centrality indices with factor loadings from CFA have discovered redundancies, and investigations into how well a GGM-based alternative to exploratory factor analysis (i.e., exploratory graph analysis, or EGA) is able to recover the hypothesized factor structure show mixed results. Importantly, such comparisons have not typically been examined in real mental and physical health symptom data, despite such data being an excellent candidate for the GGM. Our goal was to extend previous work by comparing the GGM and CFA using data from Wave 1 of the Patient Reported Outcomes Measurement Information System (PROMIS). METHODS: Models were fit to PROMIS data based on 16 test forms designed to measure 9 mental and physical health domains. Our analyses borrowed a two-stage approach for handling missing data from the structural equation modeling literature. RESULTS: We found weaker correspondence between centrality indices and factor loadings than found by previous research, but in a similar pattern of correspondence. EGA recommended a factor structure discrepant with PROMIS domains in most cases yet may be taken to provide substantive insight into the dimensionality of PROMIS domains. CONCLUSION: In real mental and physical health data, the GGM and EGA may provide complementary information to traditional CFA metrics.


Subject(s)
Motivation , Quality of Life , Humans , Quality of Life/psychology , Psychometrics/methods , Factor Analysis, Statistical , Surveys and Questionnaires
3.
Behav Res Methods ; 2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37985637

ABSTRACT

To detect bots in online survey data, there is a wealth of literature on statistical detection using only responses to Likert-type items. There are two traditions in the literature. One tradition requires labeled data, forgoing strong model assumptions. The other tradition requires a measurement model, forgoing collection of labeled data. In the present article, we consider the problem where neither requirement is available, for an inventory that has the same number of Likert-type categories for all items. We propose a bot detection algorithm that is both model-agnostic and unsupervised. Our proposed algorithm involves a permutation test with leave-one-out calculations of outlier statistics. For each respondent, it outputs a p value for the null hypothesis that the respondent is a bot. Such an algorithm offers nominal sensitivity calibration that is robust to the bot response distribution. In a simulation study, we found our proposed algorithm to improve upon naive alternatives in terms of 95% sensitivity calibration and, in many scenarios, in terms of classification accuracy.

4.
Qual Life Res ; 31(1): 37-47, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34043132

ABSTRACT

PURPOSE: In developing item banks for patient reported outcomes (PROs), nonparametric techniques are often used for investigating empirical item response curves, whereas final banks usually use parsimonious parametric models. A flexible approach based on monotonic polynomials (MP) provides a compromise by modeling items with both complex and simpler response curves. This paper investigates the suitability of MPs to PRO data. METHOD: Using PROMIS Wave 1 data (N = 15,725) for Physical Function, we fitted an MP model and the graded response model (GRM). We compared both models in terms of overall model fit, latent trait estimates, and item/test information. We quantified possible GRM item misfit using approaches that compute discrepancies with the MP. Through simulations, we investigated the ability of the MP to perform well versus the GRM under identical data collection conditions. RESULTS: A likelihood ratio test (p < 0.001) and AIC (but not BIC) indicated better fit for the MP. Latent trait estimates and expected test scores were comparable between models, but we observed higher information for the MP in the lower range of physical functioning. Many items were flagged as possibly misfitting and simulations supported the performance of the MP. Yet discrepancies between the MP and GRM were small. CONCLUSION: The MP approach allows inclusion of items with complex response curves into PRO item banks. Information for the physical functioning item bank may be greater than originally thought for low levels of physical functioning. This may translate into small improvements if an MP approach is used.


Subject(s)
Patient Reported Outcome Measures , Quality of Life , Algorithms , Data Collection , Humans , Quality of Life/psychology , Translating
5.
Multivariate Behav Res ; 56(4): 687-702, 2021.
Article in English | MEDLINE | ID: mdl-33103932

ABSTRACT

An increased use of models for measuring response styles is apparent in recent years with the multidimensional nominal response model (MNRM) as one prominent example. Inclusion of latent constructs representing extreme (ERS) or midpoint response style (MRS) often improves model fit according to information criteria. However, a test of absolute model fit is often not reported even though it could comprise an important piece of validity evidence. Limited information test statistics are candidates for this task, including the full (M2), ordinal (M2*), and mixed (C2) statistics, which differ in whether additional collapsing of univariate or bivariate contingency tables is conducted. Such collapsing makes sense when item categories are ordinal, which may not hold under the MNRM. More generally, limited information test statistics have gone unevaluated under nominal data and non-ordinal latent trait models. We present a simulation study evaluating the performance of M2, M2*, and C2 with the MNRM. Manipulated conditions included sample size, presence and type of response style, and strength of item slopes on substantive and style dimensions. We found that M2 sometimes had inflated Type I error rates, M2* always had little power, and C2 lacked power under some conditions. M2 and C2 may provide complementary and valuable information regarding model fit.


Subject(s)
Sample Size , Computer Simulation
6.
Pers Soc Psychol Rev ; 19(2): 177-98, 2015 May.
Article in English | MEDLINE | ID: mdl-25063044

ABSTRACT

Implicit self-esteem (ISE), which is often defined as automatic self-evaluations, fuses research on unconscious processes with that on self-esteem. As ISE is viewed as immune to explicit control, it affords the testing of theoretical questions such as whether cultures vary in self-enhancement motivations. We provide a critical review and integration of the work on (a) the operationalization of ISE and (b) possible cultural variation in self-enhancement motivations. Although ISE measures do not often vary across cultures, recent meta-analyses and empirical studies question the validity of the most common way of defining ISE. We revive an alternative conceptualization that defines ISE in terms of how positively people evaluate objects that reflect upon themselves. This conceptualization suggests that ISE research should target alternative phenomena (e.g., minimal group effect, similarity-attraction effect, endowment effect) and it allows for a host of previous cross-cultural findings to bear on the question of cultural variability in ISE.


Subject(s)
Culture , Self Concept , Self-Assessment , Cross-Cultural Comparison , Ethnopsychology , Humans
7.
J Pers ; 83(1): 56-68, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24299075

ABSTRACT

OBJECTIVE: Our research utilized two popular theoretical conceptualizations of implicit self-esteem: 1) implicit self-esteem as a global automatic reaction to the self; and 2) implicit self-esteem as a context/domain specific construct. Under this framework, we present an extensive search for implicit self-esteem measure validity among different cultural groups (Study 1) and under several experimental manipulations (Study 2). METHOD: In Study 1, Euro-Canadians (N = 107), Asian-Canadians (N = 187), and Japanese (N = 112) completed a battery of implicit self-esteem, explicit self-esteem, and criterion measures. Included implicit self-esteem measures were either popular or provided methodological improvements upon older methods. Criterion measures were sampled from previous research on implicit self-esteem and included self-report and independent ratings. In Study 2, Americans (N = 582) completed a shorter battery of these same types of measures under either a control condition, an explicit prime meant to activate the self-concept in a particular context, or a prime meant to activate self-competence-related implicit attitudes. RESULTS: Across both studies, explicit self-esteem measures far outperformed implicit self-esteem measures in all cultural groups and under all experimental manipulations. CONCLUSION: Implicit self-esteem measures are not valid for individual or cross-cultural comparisons. We speculate that individuals may not form implicit associations with the self as an attitudinal object.


Subject(s)
Cross-Cultural Comparison , Personality Tests/standards , Racial Groups/psychology , Self Concept , Adolescent , Adult , Asian People/psychology , British Columbia , Female , Humans , Japan , Male , Middle Aged , Peer Group , Reproducibility of Results , Self Report , Students , United States , Universities , White People/psychology , Young Adult
8.
Multivariate Behav Res ; 49(5): 407-24, 2014.
Article in English | MEDLINE | ID: mdl-26732356

ABSTRACT

Researchers are often advised to write balanced scales (containing an equal number of positively and negatively worded items) when measuring psychological attributes. This practice is recommended to control for acquiescence bias (ACQ). However, little advice has been given on what to do with such data if the researcher subsequently wants to evaluate a 1-factor model for the scale. This article compares 3 approaches for dealing with the presence of ACQ bias, which make different assumptions: an ipsatization approach based on the work of Chan and Bentler (CB; 1993), a confirmatory factor analysis (CFA) approach that includes an ACQ factor with equal loadings (Billiet & McClendon, 2000; Mirowsky & Ross, 1991), and an exploratory factor analysis (EFA) approach with a target rotation (Ferrando, Lorenzo-Seva, & Chico, 2003). We also examine the "do nothing" approach which fits the 1-factor model to the data ignoring the presence of ACQ bias. Our main findings are that the CFA method performs best overall and that it is robust to the violation of its assumptions, the EFA and the CB approaches work well when their assumptions are strictly met, and the "do nothing" approach can be surprisingly robust when the ACQ factor is not very strong.

9.
Educ Psychol Meas ; 84(2): 217-244, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38898878

ABSTRACT

Item response theory (IRT) models are often compared with respect to predictive performance to determine the dimensionality of rating scale data. However, such model comparisons could be biased toward nested-dimensionality IRT models (e.g., the bifactor model) when comparing those models with non-nested-dimensionality IRT models (e.g., a unidimensional or a between-item-dimensionality model). The reason is that, compared with non-nested-dimensionality models, nested-dimensionality models could have a greater propensity to fit data that do not represent a specific dimensional structure. However, it is unclear as to what degree model comparison results are biased toward nested-dimensionality IRT models when the data represent specific dimensional structures and when Bayesian estimation and model comparison indices are used. We conducted a simulation study to add clarity to this issue. We examined the accuracy of four Bayesian predictive performance indices at differentiating among non-nested- and nested-dimensionality IRT models. The deviance information criterion (DIC), a commonly used index to compare Bayesian models, was extremely biased toward nested-dimensionality IRT models, favoring them even when non-nested-dimensionality models were the correct models. The Pareto-smoothed importance sampling approximation of the leave-one-out cross-validation was the least biased, with the Watanabe information criterion and the log-predicted marginal likelihood closely following. The findings demonstrate that nested-dimensionality IRT models are not automatically favored when the data represent specific dimensional structures as long as an appropriate predictive performance index is used.

10.
Article in English | MEDLINE | ID: mdl-39311826

ABSTRACT

Borderline personality disorder (BPD) is highly comorbid with eating disorders (EDs), and comorbid ED-BPD is associated with a worse clinical presentation and treatment outcomes. Understanding how BPD symptoms manifest in the daily lives of those with EDs and predict momentary ED symptoms has important treatment implications. This study: (a) compared the nine BPD symptoms, assessed across 14 days, in individuals with comorbid ED-BPD, only an ED, and no ED; and (b) examined average and momentary relationships between BPD symptoms and specific ED symptoms (i.e., binge eating, purging, restriction, and maladaptive exercise) in women with EDs. Individuals with comorbid ED-BPD (n = 60), only an ED (n = 114), and controls (n = 47) completed 14 days of ecological momentary assessment. All BPD symptoms except affective instability were more common in individuals with comorbid ED-BPD than those with only an ED. Affective instability and paranoia/dissociation had the largest effect sizes, indicating the greatest differences across groups. Individuals with more frequent abandonment avoidance, anger, identity disturbance, paranoia/dissociation, and self-harm over the 14 days engaged in more frequent binge eating, while those with greater emptiness engaged in more frequent restriction and maladaptive exercise. Momentary affective instability predicted an increased likelihood of binge eating, while momentary interpersonal difficulties predicted a decreased likelihood of binge eating, at the next prompt. This study highlights the importance of considering BPD symptoms in the treatment of individuals with EDs to improve their clinical outcomes and quality of life. (PsycInfo Database Record (c) 2024 APA, all rights reserved).

11.
Psychol Methods ; 28(1): 123-136, 2023 Feb.
Article in English | MEDLINE | ID: mdl-34647757

ABSTRACT

Theories can be represented as statistical models for empirical testing. There is a vast literature on model selection and multimodel inference that focuses on how to assess which statistical model, and therefore which theory, best fits the available data. For example, given some data, one can compare models using various information criteria or other fit statistics. However, what these indices fail to capture is the full range of counterfactuals. That is, some models may fit the given data better not because they represent a more correct theory, but simply because these models have more fit propensity: a tendency to fit a wider range of data, even nonsensical data, better. Current approaches fall short in considering the principle of parsimony (Occam's Razor), often equating it with the number of model parameters. Here we offer a toolkit for researchers to better study and understand parsimony through the fit propensity of structural equation models. We provide an R package (ockhamSEM) built on the popular lavaan package. To illustrate the importance of evaluating fit propensity, we use ockhamSEM to investigate the factor structure of the Rosenberg Self-Esteem Scale. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Models, Statistical , Models, Theoretical , Humans
12.
Educ Psychol Meas ; 83(2): 217-239, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36866070

ABSTRACT

Administering Likert-type questionnaires to online samples risks contamination of the data by malicious computer-generated random responses, also known as bots. Although nonresponsivity indices (NRIs) such as person-total correlations or Mahalanobis distance have shown great promise to detect bots, universal cutoff values are elusive. An initial calibration sample constructed via stratified sampling of bots and humans (real or simulated under a measurement model) has been used to empirically choose cutoffs with a high nominal specificity. However, a high-specificity cutoff is less accurate when the target sample has a high contamination rate. In the present article, we propose the supervised classes, unsupervised mixing proportions (SCUMP) algorithm that chooses a cutoff to maximize accuracy. SCUMP uses a Gaussian mixture model to estimate, unsupervised, the contamination rate in the sample of interest. A simulation study found that, in the absence of model misspecification on the bots, our cutoffs maintained accuracy across varying contamination rates.
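The unsupervised mixing-proportion step can be sketched with a hand-rolled EM fit of a two-component one-dimensional Gaussian mixture to NRI scores, where the weight of the higher-mean component estimates the contamination rate. This is only the general idea, not SCUMP itself; the score distributions, mixture settings, and variable names are invented.

```python
import math
import random

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_two_gaussians(xs, iters=200):
    """EM for a two-component 1-D Gaussian mixture.

    Returns (pi, mu1, sd1, mu2, sd2), where pi is the estimated weight of
    the first (lower-mean) component.
    """
    xs = sorted(xs)
    n = len(xs)
    half = n // 2
    mu1 = sum(xs[:half]) / half               # crude start: split at median
    mu2 = sum(xs[half:]) / (n - half)
    sd1 = sd2 = max(1e-6, (xs[-1] - xs[0]) / 4)
    pi = 0.5
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in xs:
            p1 = pi * norm_pdf(x, mu1, sd1)
            p2 = (1 - pi) * norm_pdf(x, mu2, sd2)
            tot = p1 + p2
            r.append(p1 / tot if tot > 0 else 0.5)
        # M-step: update weight, means, and standard deviations
        n1 = sum(r)
        pi = n1 / n
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (n - n1)
        sd1 = max(1e-6, math.sqrt(sum(ri * (x - mu1) ** 2
                                      for ri, x in zip(r, xs)) / n1))
        sd2 = max(1e-6, math.sqrt(sum((1 - ri) * (x - mu2) ** 2
                                      for ri, x in zip(r, xs)) / (n - n1)))
    return pi, mu1, sd1, mu2, sd2

# toy NRI scores: 70% humans scoring near 0, 30% bots scoring near 3
rng = random.Random(7)
scores = ([rng.gauss(0, 1) for _ in range(420)]
          + [rng.gauss(3, 1) for _ in range(180)])
pi_humans, mu1, sd1, mu2, sd2 = em_two_gaussians(scores)
contamination = 1 - pi_humans              # estimated proportion of bots
```

A cutoff could then be chosen on the fitted mixture, for example at the score where the two weighted component densities cross.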

13.
Pain ; 164(12): 2845-2851, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37390365

ABSTRACT

Perceived pain can be viewed as the outcome of a competition between nociceptive inputs and other competing goals, such as performing a demanding cognitive task. Task performance, however, suffers when one is cognitively fatigued. We therefore predicted that cognitive fatigue would weaken the pain-reducing effects of performing a concurrent cognitive task, which would indicate a causal link between fatigue and heightened pain sensitivity. In this study, 2 groups of pain-free adults performed cognitive tasks while receiving painful heat stimuli. In 1 group, we induced cognitive fatigue before the tasks began. We found that fatigue led to more pain and worse performance when the task was demanding, suggesting that fatigue weakens one's ability to distract from pain. These findings show that cognitive fatigue can impair performance on subsequent tasks and that this impairment can lower a person's ability to distract from and reduce their pain.


Subject(s)
Pain , Task Performance and Analysis , Adult , Humans , Pain/etiology , Fatigue/complications , Cognition
14.
Psychol Assess ; 35(3): 257-268, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36455031

ABSTRACT

The International Classification of Diseases (ICD-11) features a new classification of personality disorders (PD), focusing on the severity of PD. Although there are numerous self-report measures that assess PD severity, to date only the Personality Disorder Severity-ICD-11 (PDS-ICD-11) is based on ICD-11's operationalization of PD. Initial results indicated that the PDS-ICD-11 measures a unidimensional construct, but the assumptions made for scoring its bipolar items had not been fully examined. The aim of this study is to fill this gap and investigate the latent structure of the German version of the PDS-ICD-11 using nominal response models (NRM), which allow for testing these assumptions. We applied the PDS-ICD-11 together with other self-report measures in a sample of 1,228 individuals from the general population. NRM indicated an acceptable fit of a unidimensional model, with only a few deviations from the theoretically imposed scoring scheme. The total score was sufficiently reliable and correlated meaningfully with other self-report measures of PD severity. Regarding Diagnostic and Statistical Manual of Mental Disorders (DSM-5) and ICD-11 maladaptive trait domains, the total score was found to be most strongly associated with negative affectivity, whereas associations with antagonism and anankastia were small or nonsignificant. We conclude that the proposed scoring scheme of the PDS-ICD-11 items is acceptable, and the examined psychometric properties of the German version largely correspond to the results from the English-language development study. The total score, however, depicts more internalizing than externalizing personality pathology. Future studies should investigate the diagnostic efficiency of the PDS-ICD-11 scale using multiple methods and time points as well as clinical and forensic samples. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
International Classification of Diseases , Problem Behavior , Humans , Personality Disorders/diagnosis , Personality , Diagnostic and Statistical Manual of Mental Disorders , Personality Inventory
15.
Appl Psychol Meas ; 46(4): 321-337, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35601261

ABSTRACT

Recent work on reliability coefficients has largely focused on continuous items, including critiques of Cronbach's alpha. Although two new model-based reliability coefficients have been proposed for dichotomous items (Dimitrov, 2003a,b; Green & Yang, 2009a), these approaches have yet to be compared to each other or to other popular estimates of reliability such as omega, alpha, and the greatest lower bound. We seek computational improvements to one of these model-based reliability coefficients and, in addition, conduct initial Monte Carlo simulations to compare coefficients using dichotomous data. Our results suggest that such improvements to the model-based approach are warranted, while model-based approaches were generally superior.

16.
Educ Psychol Meas ; 82(1): 57-75, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34987268

ABSTRACT

Large-scale assessments often use a computer adaptive test (CAT) for selection of items and for scoring respondents. Such tests often assume a parametric form for the relationship between item responses and the underlying construct. Although semi- and nonparametric response functions could be used, there is scant research on their performance in a CAT. In this work, we compare parametric response functions versus those estimated using kernel smoothing and a logistic function of a monotonic polynomial. Monotonic polynomial items can be used with traditional CAT item selection algorithms that use analytical derivatives. We compared these approaches in CAT simulations with a variety of item selection algorithms. Our simulations also varied the features of the calibration and item pool: sample size, the presence of missing data, and the percentage of nonstandard items. In general, the results support the use of semi- and nonparametric item response functions in a CAT.
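The maximum-information item selection step common to these CAT algorithms can be sketched for the parametric (2PL) case; a monotonic polynomial or kernel-smoothed item would simply swap in a different information function. The item pool and parameter values below are made up.

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a keyed response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, pool, administered):
    """Index of the unadministered item with maximum information at theta."""
    return max((i for i in range(len(pool)) if i not in administered),
               key=lambda i: info_2pl(theta, *pool[i]))

# toy pool of (a, b) item parameters: easy, on-target, and hard items
pool = [(1.0, -2.0), (2.0, 0.0), (1.0, 2.0)]
first = next_item(0.0, pool, set())   # picks the discriminating on-target item
```

In a full CAT loop, theta would be re-estimated after each response and the selection repeated over the remaining pool.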

17.
Pain Rep ; 7(6): e1041, 2022.
Article in English | MEDLINE | ID: mdl-36313962

ABSTRACT

Introduction: Pain captures attention automatically, yet we can inhibit pain when we are motivated to perform other tasks. Previous studies show that engaging in a cognitively demanding task reduces pain compared with a task that is minimally demanding, yet the effects of motivation on this pain-reducing effect remain largely unexplored. Objectives: In this study, we hypothesized that motivating people to engage in a task with high demands would lead to more cognitive resources directed toward the task, thereby amplifying its pain-reducing effects. Methods: On different trials, participants performed an easy (left-right arrow discrimination) or demanding (2-back) cognitive task while receiving nonpainful or painful heat stimuli. In half of the trials, monetary rewards were offered to motivate participants to engage and perform well in the task. Results: Results showed an interaction between task demands and rewards, whereby offering rewards strengthened the pain-reducing effect of a distracting task when demands were high. This effect was reinforced by increased 2-back performance when rewards were offered, indicating that both task demands and motivation are necessary to inhibit pain. Conclusions: When task demands are low, motivation to engage in the task will have little impact on pain because performance cannot further increase. When motivation is low, participants will spend minimal effort to perform well in the task, thus hindering the pain-reducing effects of higher task demands. These findings suggest that the pain-reducing properties of distraction can be optimized by carefully calibrating the demands and motivational value of the task.

18.
J Pers Assess ; 93(5): 445-53, 2011.
Article in English | MEDLINE | ID: mdl-21859284

ABSTRACT

Popular computer programs print 2 versions of Cronbach's alpha: unstandardized alpha, α(Σ), based on the covariance matrix, and standardized alpha, α(R), based on the correlation matrix. Sources that accurately describe the theoretical distinction between the 2 coefficients are lacking, which can lead to the misconception that the differences between α(R) and α(Σ) are unimportant and to the temptation to report the larger coefficient. We explore the relationship between α(R) and α(Σ) and the reliability of the standardized and unstandardized composite under 3 popular measurement models; we clarify the theoretical meaning of each coefficient and conclude that researchers should choose an appropriate reliability coefficient based on theoretical considerations. We also illustrate that α(R) and α(Σ) estimate the reliability of different composite scores, and in most cases cannot be substituted for one another.
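The distinction between the two coefficients can be made concrete: unstandardized alpha works on the covariance matrix, while standardized alpha applies the same formula to the correlation matrix and so describes a composite of z-scored items. The 3-item covariance matrix below is a hypothetical example chosen to have unequal item variances, where the two coefficients differ.

```python
import math

def alpha_unstandardized(cov):
    """Cronbach's alpha computed from a covariance matrix (k x k)."""
    k = len(cov)
    total_var = sum(sum(row) for row in cov)    # variance of the sum score
    item_var = sum(cov[i][i] for i in range(k))
    return (k / (k - 1)) * (1 - item_var / total_var)

def alpha_standardized(cov):
    """Standardized alpha: the same formula applied to the correlation
    matrix, i.e., the reliability of a composite of z-scored items."""
    k = len(cov)
    sd = [math.sqrt(cov[i][i]) for i in range(k)]
    corr = [[cov[i][j] / (sd[i] * sd[j]) for j in range(k)] for i in range(k)]
    return alpha_unstandardized(corr)

# hypothetical 3-item covariance matrix with unequal item variances
cov = [[1.0, 0.4, 0.4],
       [0.4, 2.0, 0.8],
       [0.4, 0.8, 1.5]]
a_sigma = alpha_unstandardized(cov)   # alpha(Sigma), raw-score composite
a_r = alpha_standardized(cov)         # alpha(R), standardized composite
```

The two values disagree here because the items have unequal variances, so the raw and standardized composites are genuinely different scores.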


Subject(s)
Research Design , Statistics as Topic , Models, Statistical
19.
Psychol Methods ; 26(3): 273-294, 2021 Jun.
Article in English | MEDLINE | ID: mdl-32673042

ABSTRACT

In this article, we propose integrated generalized structured component analysis (IGSCA), which is a general statistical approach for analyzing data with both components and factors in the same model, simultaneously. This approach combines generalized structured component analysis (GSCA) and generalized structured component analysis with measurement errors incorporated (GSCAM) in a unified manner and can estimate both factor- and component-model parameters, including component and factor loadings, component and factor path coefficients, and path coefficients connecting factors and components. We conduct 2 simulation studies to investigate the performance of IGSCA under models with both factors and components. The first simulation study assesses how existing approaches for structural equation modeling and IGSCA recover parameters. This study shows that only consistent partial least squares (PLSc) and IGSCA yield unbiased estimates of all parameters, whereas the other approaches always provided biased estimates of several parameters. As such, we conduct a second, extensive simulation study to evaluate the relative performance of the 2 competitors (PLSc and IGSCA), considering a variety of experimental factors (model specification, sample size, the number of indicators per factor/component, and exogenous factor/component correlation). IGSCA exhibits better performance than PLSc under most conditions. We also present a real data application of IGSCA to the study of genes and their influence on depression. Finally, we discuss the implications and limitations of this approach, and recommendations for future research. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Latent Class Analysis , Computer Simulation , Humans , Least-Squares Analysis , Sample Size
20.
Appl Psychol Meas ; 44(6): 465-481, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32788817

ABSTRACT

We present a monotonic polynomial graded response (GRMP) model that subsumes the unidimensional graded response model for ordered categorical responses and results in flexible category response functions. We suggest improvements in the parameterization of the polynomial underlying similar models, expand upon an underlying response variable derivation of the model, and in lieu of an overall discrimination parameter we propose an index to aid in interpreting the strength of relationship between the latent variable and underlying item responses. In applications, the GRMP is compared to two approaches: (a) a previously developed monotonic polynomial generalized partial credit (GPCMP) model; and (b) logistic and probit variants of the heteroscedastic graded response (HGR) model that we estimate using maximum marginal likelihood with the expectation-maximization algorithm. Results suggest that the GRMP can fit real data better than the GPCMP and the probit variant of the HGR, but is slightly outperformed by the logistic HGR. Two simulation studies compared the ability of the GRMP and logistic HGR to recover category response functions. While the GRMP showed some ability to recover HGR response functions and those based on kernel smoothing, the HGR was more specific in the types of response functions it could recover. In general, the GRMP and HGR make different assumptions regarding the underlying response variables, and can result in different category response function shapes.
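The core construction behind monotonic polynomial models can be illustrated as follows: a polynomial is guaranteed nondecreasing if it is defined as the integral of a squared polynomial. This sketch shows only that idea; the GRMP's actual parameterization (and the improvements the article proposes to it) differs, and the coefficients below are arbitrary.

```python
def monotone_poly_coeffs(c, xi=0.0):
    """Ascending coefficients of m(x) = xi + integral_0^x p(t)^2 dt, for p
    with coefficients c. Because the integrand is a square, m(x) is
    nondecreasing for any real c: the core monotonic-polynomial idea."""
    sq = [0.0] * (2 * len(c) - 1)
    for i, ci in enumerate(c):
        for j, cj in enumerate(c):
            sq[i + j] += ci * cj                 # p(t)^2 via convolution
    return [xi] + [s / (k + 1) for k, s in enumerate(sq)]  # term-wise integral

def poly_eval(coeffs, x):
    return sum(a * x ** k for k, a in enumerate(coeffs))

# arbitrary coefficients for p(t); m is monotone regardless of their values
m = monotone_poly_coeffs([1.0, -0.8, 0.3])
xs = [i / 10 - 2 for i in range(41)]             # grid on [-2, 2]
vals = [poly_eval(m, x) for x in xs]             # nondecreasing sequence
```

A response function is then obtained by passing such a monotone polynomial through a logistic link, as in the models compared above.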
