Results 1 - 14 of 14
1.
Multivariate Behav Res; 56(1): 157-158, 2021.
Article in English | MEDLINE | ID: mdl-33289600
2.
Psychol Assess; 36(6-7): 395-406, 2024.
Article in English | MEDLINE | ID: mdl-38829349

ABSTRACT

This article illustrates novel quantitative methods to estimate classification consistency in machine learning models used for screening measures. Screening measures are used in psychology and medicine to classify individuals into diagnostic classifications. In addition to achieving high accuracy, it is ideal for the screening process to have high classification consistency, which means that respondents would be classified into the same group every time if the assessment was repeated. Although machine learning models are increasingly being used to predict a screening classification based on individual item responses, methods to describe the classification consistency of machine learning models have not yet been developed. This article addresses this gap by describing methods to estimate classification inconsistency in machine learning models arising from two different sources: sampling error during model fitting and measurement error in the item responses. These methods use data resampling techniques such as the bootstrap and Monte Carlo sampling. These methods are illustrated using three empirical examples predicting a health condition/diagnosis from item responses. R code is provided to facilitate the implementation of the methods. This article highlights the importance of considering classification consistency alongside accuracy when studying screening measures and provides the tools and guidance necessary for applied researchers to obtain classification consistency indices in their machine learning research on diagnostic assessments. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Machine Learning, Humans, Statistical Models, Mass Screening
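
Entry 2 describes estimating classification consistency of a machine learning screening model with resampling. The article itself supplies R code; the sketch below is only an illustrative Python analogue of the bootstrap (sampling-error) piece, using a hypothetical logistic screening model and simulated item responses: refit the model on resampled data and record how often each respondent receives the same classification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: ordinal item responses X (n x p) and a binary screening target y.
n, p = 500, 10
X = rng.integers(0, 4, size=(n, p)).astype(float)
true_logit = 0.5 * (X[:, :3].sum(axis=1) - 4.5)          # only the first 3 items matter
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

B = 200                                                   # bootstrap resamples
preds = np.empty((B, n), dtype=int)
for b in range(B):
    idx = rng.integers(0, n, size=n)                      # resample respondents with replacement
    model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    preds[b] = model.predict(X)                           # classify the full sample each time

# Per-respondent consistency: agreement with the modal classification across refits.
modal = (preds.mean(axis=0) >= 0.5).astype(int)
consistency = (preds == modal).mean(axis=0)
print("mean classification consistency:", round(float(consistency.mean()), 3))
```

The same loop could be repeated with Monte Carlo perturbations of the item responses to capture the measurement-error source of inconsistency described in the abstract.
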
3.
Psychol Addict Behav; 38(5): 578-590, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38661658

ABSTRACT

OBJECTIVE: The theory of aversive transmission posits that children of parents who have an alcohol use disorder (AUD) may abstain or limit their own alcohol use because they believe themselves to be at risk of developing problems with alcohol. The present study examined relationships among parental AUD, perceived parental AUD, perceived risk for AUD, addiction avoidance reasons for limiting alcohol use, and alcohol use using a random intercept cross-lagged panel model. METHOD: Participants (N = 805; 48% female; 28% Latinx) were from a longitudinal study investigating intergenerational transmission of AUD. Parental AUD, perceived parental AUD, perceived risk for AUD, addiction avoidance reasons for limiting alcohol use, and alcohol use (quantity, frequency, and frequency of heavy drinking) were measured every 5 years from late adolescence (mean age = 20) to adulthood (mean age = 32). Random intercept cross-lagged panel models tested whether there were stable between-person relations or time-varying within-person relations among these variables. RESULTS: At the between-person level, perceived parental AUD predicted greater addiction avoidance reasons for limiting alcohol use and greater perceived risk. Those with greater addiction avoidance reasons for limiting alcohol use were less likely to use any alcohol and drank less frequently. Parental AUD was associated with higher levels of alcohol use as well as perceived risk. No consistent cross-lagged paths were found at the within-person level. CONCLUSIONS: Study findings were at the between-person level rather than the within-person level. Future work on aversive transmission is needed to better understand this subgroup of children of parents with AUD. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Alcohol Drinking, Alcoholism, Humans, Female, Male, Longitudinal Studies, Adult, Young Adult, Adolescent, Child of Impaired Parents/psychology
4.
Assessment; 30(5): 1640-1650, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35950321

ABSTRACT

When scales or tests are used to make decisions about individuals (e.g., to identify which adults should be assessed for psychiatric disorders), it is crucial that these decisions be accurate and consistent. However, it is not obvious how to assess accuracy and consistency when the scale was administered only once to a given sample and the true condition based on the latent variable is unknown. This article describes a method based on the linear factor model for evaluating the accuracy and consistency of scale-based decisions using data from a single administration of the scale. We illustrate the procedure and provide R code that investigators can use to apply the method in their own data. Finally, in a simulation study, we evaluate how the method performs when applied to discrete (vs. continuous) items, a practice that is common in published literature. The results suggest that the method is generally robust when applied to discrete items.


Subject(s)
Clinical Decision-Making, Linear Models, Mental Disorders, Psychiatric Status Rating Scales, Adult, Humans, Clinical Decision-Making/methods, Mental Disorders/diagnosis, Mental Disorders/psychology, Monte Carlo Method, Programming Languages, Psychiatric Status Rating Scales/standards, Reference Values, Sensitivity and Specificity, Reproducibility of Results
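
Entry 4 derives decision accuracy and consistency from the linear factor model, under which the sum score and the latent variable are jointly normal. The sketch below is not the article's method or its R code; it is a Python illustration under assumed loadings and a same-percentile cutoff on both metrics, reducing both indices to bivariate normal agreement probabilities.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Hypothetical standardized loadings for a 5-item scale (illustrative, not from the article).
loadings = np.array([0.70, 0.80, 0.60, 0.75, 0.65])
unique_var = 1 - loadings**2                 # unique (error) variances
lam = loadings.sum()                         # loading of the sum score on eta
var_s = lam**2 + unique_var.sum()            # variance of the sum score
rho = lam / np.sqrt(var_s)                   # corr(sum score, latent variable)

cut = norm.ppf(0.80)                         # flag the top 20 percent on both metrics

def agree_prob(c1, c2, corr):
    """P(both below) + P(both at/above) their cutoffs for a standardized bivariate normal."""
    both_below = multivariate_normal.cdf([c1, c2], mean=[0.0, 0.0],
                                         cov=[[1.0, corr], [corr, 1.0]])
    both_above = 1.0 - norm.cdf(c1) - norm.cdf(c2) + both_below
    return both_below + both_above

accuracy = agree_prob(cut, cut, rho)         # observed decision vs. latent status
consistency = agree_prob(cut, cut, rho**2)   # two parallel administrations of the scale
print("decision accuracy:   ", round(float(accuracy), 3))
print("decision consistency:", round(float(consistency), 3))
```
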
5.
Psychol Methods; 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38127571

ABSTRACT

For over three decades, methodologists have cautioned against the use of cross-sectional mediation analyses because they yield biased parameter estimates. Yet, cross-sectional mediation models persist in practice and sometimes represent the only analytic option. We propose a sensitivity analysis procedure to encourage a more principled use of cross-sectional mediation analysis, drawing inspiration from Gollob and Reichardt (1987, 1991). The procedure is based on the two-wave longitudinal mediation model and uses phantom variables for the baseline data. After a researcher provides ranges of possible values for cross-lagged, autoregressive, and baseline Y and M correlations among the phantom and observed variables, they can use the sensitivity analysis to identify longitudinal conditions in which conclusions from a cross-sectional model would differ most from a longitudinal model. To support the procedure, we first show that differences in sign and effect size of the b-path occur most often when the cross-sectional effect size of the b-path is small and the cross-lagged and the autoregressive correlations are equal or similar in magnitude. We then apply the procedure to cross-sectional analyses from real studies and compare the sensitivity analysis results to actual results from a longitudinal mediation analysis. While no statistical procedure can replace longitudinal data, these examples demonstrate that the sensitivity analysis can recover the effect that was actually observed in the longitudinal data if provided with the correct input information. Implications of the routine application of sensitivity analysis to temporal bias are discussed. R code for the procedure is provided in the online supplementary materials. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
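
Entry 5's sensitivity analysis treats the unmeasured baseline M and Y as phantom variables whose correlations the researcher supplies. The Python sketch below illustrates that idea under purely assumed correlation values (the article's own procedure and R code are in its supplementary materials): for each assumed set of autoregressive and cross-lagged correlations, the implied longitudinal b-path is recomputed from the full correlation matrix.

```python
import itertools
import numpy as np

# Observed cross-sectional correlations among X, M2, Y2 (illustrative values only).
r_xm2, r_xy2, r_m2y2 = 0.30, 0.20, 0.40

def longitudinal_b(auto, cross, r_m1y1=0.40):
    """Standardized b-path (Y2 on M2, controlling X, M1, Y1) implied by assumed
    phantom-baseline correlations. Variable order: X, M1, Y1, M2, Y2."""
    R = np.array([
        [1.0,   0.0,    0.0,    r_xm2,  r_xy2 ],
        [0.0,   1.0,    r_m1y1, auto,   cross ],
        [0.0,   r_m1y1, 1.0,    cross,  auto  ],
        [r_xm2, auto,   cross,  1.0,    r_m2y2],
        [r_xy2, cross,  auto,   r_m2y2, 1.0   ],
    ])
    if np.min(np.linalg.eigvalsh(R)) <= 0:       # skip inadmissible combinations
        return float("nan")
    pred = [0, 1, 2, 3]                          # X, M1, Y1, M2 predict Y2
    beta = np.linalg.solve(R[np.ix_(pred, pred)], R[pred, 4])
    return beta[3]                               # coefficient for M2

# Sweep a grid of assumed autoregressive and cross-lagged correlations.
for auto, cross in itertools.product([0.3, 0.5, 0.7], [0.1, 0.2, 0.3]):
    print(f"auto={auto:.1f} cross={cross:.1f}  b = {longitudinal_b(auto, cross):+.3f}")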

6.
Struct Equ Modeling; 30(6): 914-925, 2023.
Article in English | MEDLINE | ID: mdl-39027682

ABSTRACT

Statistical mediation analysis is used to uncover intermediate variables, known as mediators [M], that explain how a treatment [X] changes an outcome [Y]. Often, researchers examine whether baseline levels of M and Y moderate the effect of X on posttest M or Y. However, there is limited guidance on how to estimate baseline-by-treatment interaction (BTI) effects when M and Y are latent variables, which entails the estimation of latent interaction effects. In this paper, we discuss two general approaches for estimating latent BTI effects in mediation analysis: using structural models or scoring latent variables prior to estimating observed BTIs and correcting for unreliability. We present simulation results describing bias, power, Type I error rates, and interval coverage of the latent BTIs and mediated effects estimated using these approaches. These methods are also illustrated with an applied example. R and Mplus syntax are provided to facilitate the implementation of these approaches.

7.
Struct Equ Modeling; 29(6): 908-919, 2022.
Article in English | MEDLINE | ID: mdl-37041863

ABSTRACT

The two-wave mediation model is the most suitable model for examining mediation effects in a randomized intervention and includes measures taken at pretest and posttest. When using self-report measures, the meaning of responses may change for the treatment group over the course of the intervention and result in noninvariance across groups at posttest, a phenomenon referred to as response shift. We investigate how the mediated effect would be impacted by noninvariance when using sum scores (i.e., assuming invariance). In a Monte Carlo simulation study, the magnitude and proportion of items that had noninvariant intercepts, the direction of noninvariance, number of items, effect size of the mediated effect and sample size were varied. Results showed increased Type I and Type II errors due to a biased estimate of the intervention effect on the mediator resulting from noninvariance. Thus, measurement noninvariance could lead to erroneous conclusions about the process underlying the intervention.
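
Entry 7 asks how noninvariant item intercepts at posttest distort the mediated effect when the mediator is scored as a sum. The Monte Carlo sketch below covers only the a-path (intervention-to-mediator) part of that question, with illustrative loadings, shift size, and sample size that are not the article's simulation conditions: the intercept shift in the treatment group is absorbed into the apparent intervention effect on the sum-scored mediator.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_items, reps = 400, 6, 500
a_true, shift = 0.4, 0.3                     # true X -> M path; size of the intercept shift
loadings = np.full(n_items, 0.7)             # illustrative, equal loadings
est_a = []

for _ in range(reps):
    x = rng.binomial(1, 0.5, n)                              # random assignment
    eta = a_true * x + rng.normal(size=n)                    # latent mediator at posttest
    items = eta[:, None] * loadings + rng.normal(scale=0.7, size=(n, n_items))
    items[:, :2] += shift * x[:, None]                       # 2 of 6 intercepts noninvariant
    m_sum = items.sum(axis=1)                                # sum score assumes invariance
    est_a.append(np.polyfit(x, m_sum, 1)[0])                 # OLS slope: apparent a-path

print("a-path in the sum-score metric:", round(a_true * loadings.sum(), 3))
print("mean estimated a-path:         ", round(float(np.mean(est_a)), 3))  # inflated by the shift
```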

8.
Front Psychol; 12: 709198, 2021.
Article in English | MEDLINE | ID: mdl-34552531

ABSTRACT

Statistical mediation analysis is used to investigate mechanisms through which a randomized intervention causally affects an outcome variable. Mediation analysis is often carried out in a pretest-posttest control group design because it is a common choice for evaluating experimental manipulations in the behavioral and social sciences. There are four different two-wave (i.e., pretest-posttest) mediation models that can be estimated using either linear regression or a Latent Change Score (LCS) specification in Structural Equation Modeling: Analysis of Covariance, difference and residualized change scores, and a cross-sectional model. Linear regression modeling and the LCS specification of the two-wave mediation models provide identical mediated effect estimates but the two modeling approaches differ in their assumptions of model fit. Linear regression modeling assumes each of the four two-wave mediation models fit the data perfectly whereas the LCS specification allows researchers to evaluate the model constraints implied by the difference score, residualized change score, and cross-sectional models via model fit indices. Therefore, the purpose of this paper is to provide a conceptual and statistical comparison of two-wave mediation models. Models were compared on the assumptions they make about time-lags and cross-lagged effects as well as statistically using both standard measures of model fit (χ2, RMSEA, and CFI) and newly proposed T-size measures of model fit for the two-wave mediation models. Overall, the LCS specification makes clear the assumptions that are often implicitly made when fitting two-wave mediation models with regression. In a Monte Carlo simulation, the standard model fit indices and newly proposed T-size measures of model fit generally correctly identified the best fitting two-wave mediation model.
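
Entry 8 contrasts two-wave mediation models that can be fit either as regressions or as latent change score models. The sketch below fits three of the specifications named in the abstract (ANCOVA, difference scores, cross-sectional) as plain regressions on one simulated pretest-posttest data set; the generating values are illustrative assumptions, and the LCS specification and fit indices discussed in the article are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
n, a, b = 5000, 0.5, 0.4                       # illustrative generating values
x = rng.binomial(1, 0.5, n).astype(float)      # randomized treatment
m1 = rng.normal(size=n)                        # pretest mediator
y1 = 0.5 * m1 + rng.normal(size=n)             # pretest outcome
m2 = 0.6 * m1 + a * x + rng.normal(size=n)     # posttest mediator
y2 = 0.6 * y1 + 0.3 * m1 + b * m2 + rng.normal(size=n)   # posttest outcome

def ols(y, *preds):
    """Return OLS slopes (intercept dropped), in the order the predictors are given."""
    X = np.column_stack([np.ones(len(y)), *preds])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

# ANCOVA: posttest scores adjusted for pretest scores
ab_ancova = ols(m2, x, m1)[0] * ols(y2, m2, x, m1, y1)[0]

# Difference scores: change in M and change in Y
ab_diff = ols(m2 - m1, x)[0] * ols(y2 - y1, m2 - m1, x)[0]

# Cross-sectional: posttest only, pretest ignored
ab_cross = ols(m2, x)[0] * ols(y2, m2, x)[0]

print(f"true a*b = {a * b:.3f}")
print(f"ANCOVA {ab_ancova:.3f} | difference {ab_diff:.3f} | cross-sectional {ab_cross:.3f}")
```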

9.
Article in English | MEDLINE | ID: mdl-35600065

ABSTRACT

Researchers and prevention scientists often develop interventions to target intermediate variables (known as mediators) that are thought to be related to an outcome. When researchers target a mediating construct measured by self-report, the meaning of the self-report measure could change from pretest to posttest for the individuals who received the intervention, a phenomenon referred to as response shift. As a result, any observed changes on the mediator measure across groups or across time might reflect a combination of true change on the construct and response shift. Although previous studies have focused on identifying the source and type of response shift in measures after an intervention, there has been limited research on how using sum scores in the presence of response shift affects the estimation of mediated effects via statistical mediation analysis, which is critical for explaining how the intervention worked. In this paper, we focus on recalibration response shift, a change in internal standards of measurement that affects how respondents interpret the response scale. We provide background on the theory of response shift and the methodology used to detect response shift (i.e., tests of measurement invariance). Additionally, we use simulated datasets to illustrate how recalibration in the mediator can bias estimates of the mediated effect and also impact Type I error and power.

10.
Psychol Assess; 33(7): 596-609, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33998821

ABSTRACT

Screening measures are used in psychology and medicine to identify respondents who are high or low on a construct. Based on the screening, the evaluator assigns respondents to classes corresponding to different courses of action: Make a diagnosis versus reject a diagnosis; provide services versus withhold services; or conduct further assessment versus conclude the assessment process. When measures are used to classify individuals, it is important that the decisions be consistent and equitable across groups. Ideally, if respondents completed the screening measure repeatedly in quick succession, they would be consistently assigned into the same class each time. In addition, the consistency of the classification should be unrelated to the respondents' background characteristics, such as sex, race, or ethnicity (i.e., the measure is free of measurement bias). Reporting estimates of classification consistency is a common practice in educational testing, but there has been limited application of these estimates to screening in psychology and medicine. In this article, we present two procedures based on item response theory that are used (a) to estimate the classification consistency of a screening measure and (b) to evaluate how classification consistency is impacted by measurement bias across respondent groups. We provide R functions to conduct the procedures, illustrate the procedures with real data, and use Monte Carlo simulations to guide their appropriate use. Finally, we discuss how estimates of classification consistency can help assessment specialists make more informed decisions on the use of a screening measure with protected groups (e.g., groups defined by gender, race, or ethnicity). (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Classification, Healthcare Disparities, Mass Screening/standards, Adult, Bias, Classification/methods, Clinical Decision-Making/methods, Computer Simulation, Female, Humans, Male, Mass Screening/methods, Psychological Models, Monte Carlo Method, Observer Variation, Reproducibility of Results
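
Entry 10 estimates classification consistency from an item response theory model. The article provides R functions for its two procedures; the fragment below is only a conceptual Python sketch of the first idea under made-up 2PL item parameters: simulate two independent administrations for the same latent trait values and tabulate how often the screening decision agrees. The measurement-bias comparison across groups is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.array([1.2, 0.9, 1.5, 1.1, 0.8, 1.3])     # hypothetical 2PL discriminations
b = np.array([-1.0, -0.5, 0.0, 0.3, 0.8, 1.2])   # hypothetical 2PL difficulties
cutoff = 4                                        # flag respondents with sum score >= 4

n = 100_000
theta = rng.normal(size=n)                        # latent trait values
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))   # 2PL response probabilities (n x items)

flag1 = rng.binomial(1, p).sum(axis=1) >= cutoff  # simulated administration 1
flag2 = rng.binomial(1, p).sum(axis=1) >= cutoff  # simulated administration 2
print("classification consistency:", round(float((flag1 == flag2).mean()), 3))
```
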
11.
Int J Behav Dev; 45(1): 40-50, 2021 Jan 01.
Article in English | MEDLINE | ID: mdl-33758447

ABSTRACT

Conducting valid and reliable empirical research in the prevention sciences is an inherently difficult and challenging task. Chief among these challenges is the need to obtain numerical scores of underlying theoretical constructs for use in subsequent analysis. This challenge is further exacerbated by the increasingly common need to consider multiple reporter assessments, particularly when using integrative data analysis to fit models to data that have been pooled across two or more independent samples. The current paper uses both simulated and real data to examine the utility of a recently proposed psychometric model for multiple reporter data, the trifactor model (TFM), in settings that might be commonly found in prevention research. Results suggest that numerical scores obtained using the TFM are superior to those obtained from more traditional methods, particularly when pooling samples that contribute different reporter perspectives.

12.
Psychol Assess; 33(9): 803-815, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33900099

ABSTRACT

Parenting is a critical mechanism contributing to child and adolescent development and outcomes. The Multidimensional Assessment of Parenting Scale (MAPS) is a new measure that aims to address gaps in the literature on existing self-report parenting measures. Research to date on the MAPS includes essential steps of scale development and validation; however, replicating scale dimensionality and examining differential item functioning (DIF) based on child age and parent or child gender is a critical next step. The current study included 1,790 mothers and fathers of sons and daughters, spanning childhood to adolescence in the United States. Item response theory (IRT) confirmed initial factor-analytic work revealing positive and negative dimensions; however, the best-fitting multidimensional model included six nested dimensions from the original seven. A few notable items displayed DIF based on child age and parent gender; however, DIF based on child gender had minimal impact on the overall score. Future directions, clinical implications, and recommendations are discussed. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Parenting, Parents, Surveys and Questionnaires, Adolescent, Child, Female, Humans, Male, Parenting/psychology, Parents/psychology, Psychological Theory, Reproducibility of Results, United States
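
Entry 12 evaluates DIF on the MAPS within an IRT framework. The sketch below is not that analysis; it swaps in a generic logistic regression DIF screen (likelihood-ratio test of group and group-by-score terms) on simulated binary items, with uniform DIF planted in one item so the flag has something to detect. It assumes numpy, scipy, and statsmodels are available.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(4)
n, n_items = 1000, 8
group = rng.binomial(1, 0.5, n)                   # e.g., parent gender
theta = rng.normal(size=n)
b = np.linspace(-1, 1, n_items)                   # hypothetical item difficulties
items = rng.binomial(1, 1 / (1 + np.exp(-(theta[:, None] - b))))
# Plant uniform DIF in item 0: harder for group == 1 at the same trait level.
items[:, 0] = rng.binomial(1, 1 / (1 + np.exp(-(theta - b[0] - 0.6 * group))))

for j in range(n_items):
    rest = items.sum(axis=1) - items[:, j]        # matching score excluding the studied item
    base = sm.Logit(items[:, j], sm.add_constant(rest)).fit(disp=0)
    full = sm.Logit(items[:, j],
                    sm.add_constant(np.column_stack([rest, group, rest * group]))).fit(disp=0)
    lr = 2 * (full.llf - base.llf)                # likelihood-ratio test, df = 2
    print(f"item {j}: LR = {lr:6.2f}, p = {chi2.sf(lr, df=2):.3f}")
```
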
13.
Clin Psychol Rev; 78: 101858, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32413679

ABSTRACT

Treatment engagement is a primary challenge to the effectiveness of evidence-based treatments for children and adolescents. One solution to this challenge is technology, which has been proposed as an enhancement to or replacement for standard clinic-based, therapist-delivered services. This review summarizes the current state of the field regarding technology's promise to enhance engagement. A review of this literature suggests that although technology has been the focus of much theoretical consideration, as well as of funding priorities, relatively little empirical research has been published on its role as a vehicle to enhance engagement in particular. Moreover, a lack of consistency in constructs, designs, and measures makes it difficult to draw useful comparisons across studies and, in turn, to determine if and what progress has been made toward more definitive conclusions. At this point in the literature, we can say only that we do not yet definitively know whether technology does (or does not) enhance engagement in evidence-based treatments for children and adolescents. Recommendations are provided with the hope of more definitively assessing technology's capacity to improve engagement, including more studies explicitly designed to assess this research question, as well as greater consistency across studies in the measures and designs used to test engagement.


Subject(s)
Evidence-Based Practice, Internet-Based Intervention, Mental Disorders/therapy, Mental Health Services, Patient Acceptance of Health Care, Telemedicine, Computer-Assisted Therapy, Adolescent, Child, Humans
14.
Eval Health Prof; 41(2): 216-245, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29254369

ABSTRACT

A wealth of information is currently known about the epidemiology, etiology, and evaluation of drug and alcohol use across the life span. Despite this corpus of knowledge, much has yet to be learned. Many factors conspire to slow the pace of future advances in the field of substance use, including the need for long-term longitudinal studies of often hard-to-reach subjects who are reporting rare and episodic behaviors. One promising option that might help move the field forward is integrative data analysis (IDA). IDA is a principled set of methodologies and statistical techniques that allow for the fitting of statistical models to data that have been pooled across multiple, independent samples. IDA offers a myriad of potential advantages, including increased power, greater coverage of rare behaviors, more rigorous psychometric assessment of theoretical constructs, an accelerated developmental time period under study, and enhanced reproducibility. However, IDA is not without limitations and may not be useful in a given application for a variety of reasons. The goal of this article is to describe the advantages and limitations of IDA in the study of individual development over time, particularly as it relates to trajectories of substance use. An empirical example of the measurement of polysubstance use is presented, and this article concludes with recommendations for practice.


Subject(s)
Data Analysis, Statistical Models, Research Design, Substance-Related Disorders/epidemiology, Substance-Related Disorders/psychology, Adolescent, Adolescent Behavior, Age Factors, Alcoholism/epidemiology, Alcoholism/psychology, Factor Analysis, Humans, Longitudinal Studies, Medical History Taking, Psychometrics, Reproducibility of Results, Sex Factors, United States