ABSTRACT
Mobile applications offer a wide range of opportunities for psychological data collection, such as increased ecological validity and greater acceptance by participants compared to traditional laboratory studies. However, app-based psychological data also pose data-analytic challenges because of the complexities introduced by missingness and interdependence of observations. Consequently, researchers must weigh the advantages and disadvantages of app-based data collection to decide on the scientific utility of their proposed app study. For instance, some studies might only be worthwhile if they provide adequate statistical power. However, the complexity of app data forestalls the use of simple analytic formulas to estimate properties such as power. In this paper, we demonstrate how Monte Carlo simulations can be used to investigate the impact of app usage behavior on the utility of app-based psychological data. We introduce a set of questions to guide simulation implementation and showcase how we answered them in the context of the guessing-game app Who Knows (Rau et al., 2023). Finally, we give a brief overview of the simulation results and the conclusions we have drawn from them for real-world data generation. Our results can serve as an example of how to use a simulation approach for planning real-world app-based data collection.
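The power question raised in this abstract can be made concrete with a small Monte Carlo sketch. The following is an illustrative toy, not the authors' Who Knows simulation: the correlation effect size, sample size, and the completely-at-random dropout mechanism are all assumptions chosen for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def estimate_power(n_users=200, effect=0.3, p_dropout=0.4,
                   alpha=0.05, n_sims=2000):
    """Fraction of simulated app studies in which a true correlation
    of size `effect` is detected after users drop out at rate
    `p_dropout` (here: completely at random)."""
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n_users)
        # y correlates with x at exactly `effect` in the population
        y = effect * x + np.sqrt(1 - effect**2) * rng.normal(size=n_users)
        kept = rng.random(n_users) > p_dropout  # simulated dropout
        _, p = stats.pearsonr(x[kept], y[kept])
        hits += p < alpha
    return hits / n_sims
```

Replacing the random dropout with usage-dependent missingness (e.g., heavier app users contributing more observations) is exactly the kind of complexity that makes simulation preferable to closed-form power formulas.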
Subjects
Computer Simulation, Mobile Applications, Monte Carlo Method, Humans, Mobile Applications/statistics & numerical data, Computer Simulation/statistics & numerical data, Data Collection/methods
ABSTRACT
Longitudinal panel studies are widely used in developmental science to address important research questions on human development across the lifespan. These studies, however, are often challenging to implement. They can be costly, time-consuming, and vulnerable to test-retest effects or high attrition over time. Planned missingness designs (PMDs), in which partial data are intentionally collected from all or some of the participants, are viable solutions to these challenges. This article provides an overview of several PMDs with potential utility in longitudinal studies, including multi-form designs, multi-method designs, varying lag designs, accelerated longitudinal designs, and efficient designs for analysis of change. For each of the designs, the basic rationale, design considerations, data analysis, advantages, and limitations are discussed. The article concludes with some general recommendations to developmental researchers and promising directions for future research.
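A multi-form (three-form) design, the first PMD listed in this abstract, can be sketched in a few lines. The item blocks and form layout below are hypothetical; the key property is that the common block X goes to everyone and each pair of rotated blocks co-occurs on at least one form, so all item covariances remain estimable.

```python
import numpy as np

rng = np.random.default_rng(7)

# Block X is administered to everyone; blocks A, B, C rotate across forms.
blocks = {"X": ["x1", "x2"], "A": ["a1", "a2"],
          "B": ["b1", "b2"], "C": ["c1", "c2"]}
forms = {1: ["X", "A", "B"], 2: ["X", "A", "C"], 3: ["X", "B", "C"]}

def assign_items(n_participants):
    """Randomly assign each participant one form and return the
    items that participant actually answers."""
    chosen = rng.integers(1, 4, size=n_participants)
    return [[item for blk in forms[f] for item in blocks[blk]]
            for f in chosen]
```

Each participant answers 6 of the 8 items, cutting testing time by 25%, while every pair of items is still observed jointly in some subsample.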
Subjects
Research Design, Humans, Longitudinal Studies
ABSTRACT
Multimethod assessment is recommended as "best practice" in clinical assessment and is often implemented through the combined use of symptom rating scales and structured interviews. While this approach increases confidence in the validity of assessment, it also increases burden and expense and leads to the accumulation of redundant information. To address this problem, we evaluate the use of a planned missingness design within the framework of adult Attention Deficit/Hyperactivity Disorder (ADHD) assessment. In a sample of 169 young adults, we fit a two-method measurement (TMM) model using ADHD symptoms obtained from rating scales and a structured diagnostic interview. Based on an estimated 8:1 differential between the cost of conducting an in-person diagnostic interview vs. completing questionnaires online, we conducted a series of Monte Carlo simulations to determine the utility of combining TMM with a planned missingness design. We find that even when costs are kept constant, statistical power of the TMM/planned missingness design was equal to the power that would have been obtained had nearly twice the number of participants with complete data been recruited. Conversely, costs could be decreased by 20-25% while maintaining statistical power equivalent to a design with complete data. Our results suggest the TMM design is a promising technique for reducing the cost and burden of diagnostic assessment within research settings.
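The budget logic behind the reported trade-off can be illustrated with the abstract's 8:1 interview-to-questionnaire cost ratio. The budget figure and the 50% interview fraction below are invented for the example.

```python
def participants_for_budget(budget, interview_cost=8.0,
                            questionnaire_cost=1.0, p_interviewed=1.0):
    """Number of participants a fixed budget affords when only a
    fraction `p_interviewed` completes the costly interview."""
    cost_per_person = questionnaire_cost + p_interviewed * interview_cost
    return int(budget // cost_per_person)

complete = participants_for_budget(900)                    # everyone interviewed
planned = participants_for_budget(900, p_interviewed=0.5)  # half interviewed
```

At constant cost, the planned missingness design here recruits 180 rather than 100 participants; whether that larger sample yields equal or greater power for the latent ADHD factor is what the TMM Monte Carlo simulations assess.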
ABSTRACT
The Cattell-Horn-Carroll (CHC) taxonomy has been used to classify and describe human cognitive abilities. The ability factors derived from the CHC taxonomy are often assumed to be invariant across multiple populations and intelligence batteries, which is an important assumption for research and assessment. In this study, data from five different test batteries that were collected during separate Kaufman Assessment Battery for Children-Second Edition (KABC-II; Kaufman & Kaufman, 2004) concurrent validity studies were factor-analyzed jointly. Because the KABC-II was administered to everyone in the validity studies, it was used as a reference battery to link the separate test batteries in a "cross-battery" confirmatory factor analysis. Some findings from this analysis were that CHC-based test classifications based on theory and prior research were straightforward and accurate, a first-order Fluid/Novel Reasoning (Gf) factor was equivalent to a second-order g factor, and sample heterogeneity related to SES and sex influenced factor loadings. It was also shown that a reference variable approach, used in studies that incorporate planned missingness into data collection, may be used successfully to analyze data from several test batteries and studies. One implication from these findings is that CHC theory should continue to serve as a useful guide that can be used for intelligence research, assessment, and test development.
Subjects
Cognition/classification, Intelligence/classification, Neuropsychological Tests/standards, Psychometrics/standards, Adolescent, Child, Factor Analysis, Female, Humans, Male, Neuropsychological Tests/statistics & numerical data, Psychometrics/instrumentation, Psychometrics/statistics & numerical data, Reproducibility of Results, Wechsler Scales/standards, Wechsler Scales/statistics & numerical data
ABSTRACT
Most social and educational data sets contain missing observations due to attrition or nonresponse. Missing data methodology has improved dramatically in recent years, and popular statistical software now offers a variety of sophisticated options. Despite the widespread availability of theoretically justified methods, many researchers still rely on old imputation techniques that can produce biased analyses. This article provides a conceptual introduction to missing data patterns and mechanisms, and then introduces how to handle and analyze missing data with the modern methods of full-information maximum likelihood (FIML) and multiple imputation (MI). An introduction to planned missingness designs is also included, and new computational tools such as the Quark function and the semTools package are mentioned. The authors hope that this paper encourages researchers to implement modern methods for analyzing missing data.
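The abstract's core claim, that old techniques such as listwise deletion can bias analyses while model-based approaches handle missingness correctly under MAR, can be demonstrated with a small simulation. The block below uses a single stochastic regression imputation, shown only as a sketch of the imputation step that MI repeats across multiple data sets; the data-generating model is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)   # population mean of y is 0

# MAR mechanism: y is more likely to be missing when x is low
p_miss = 1 / (1 + np.exp(2 * x))
missing = rng.random(n) < p_miss
y_obs = np.where(missing, np.nan, y)

# Listwise deletion: biased, because observed cases over-represent high x
mean_listwise = np.nanmean(y_obs)

# Stochastic regression imputation: predict y from x, add residual noise
observed = ~missing
coefs = np.polyfit(x[observed], y[observed], 1)
resid_sd = np.std(y[observed] - np.polyval(coefs, x[observed]))
y_filled = y_obs.copy()
y_filled[missing] = (np.polyval(coefs, x[missing])
                     + rng.normal(scale=resid_sd, size=missing.sum()))
mean_imputed = y_filled.mean()
```

With this MAR mechanism the listwise mean is noticeably positive (about 0.35 in this setup), while the imputation-based mean stays near the true value of 0. Repeating the imputation step M times with fresh residual draws and pooling the estimates turns this sketch into proper MI.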