ABSTRACT
Polygenic scores (PGSs) have emerged as a standard approach to predict phenotypes from genotype data in a wide array of applications from socio-genomics to personalized medicine. Traditional PGSs assume genotype data to be error-free, ignoring possible errors and uncertainties introduced from genotyping, sequencing, and/or imputation. In this work, we investigate the effects of genotyping error due to low-coverage sequencing on PGS estimation. We leverage SNP array and low-coverage whole-genome sequencing data (lcWGS, median coverage 0.04×) of 802 individuals from the Dana-Farber PROFILE cohort to show that PGS error correlates with sequencing depth (p = 1.2 × 10⁻⁷). We develop a probabilistic approach that incorporates genotype error in PGS estimation to produce well-calibrated PGS credible intervals and show that the probabilistic approach increases classification accuracy by up to 6% as compared to traditional PGSs that ignore genotyping error. Finally, we use simulations to explore the combined effect of genotyping and effect-size errors and their implication for PGS-based risk stratification. Our results illustrate the importance of considering genotyping error as a source of PGS error, especially for cohorts with varying genotyping technologies and/or low-coverage sequencing.
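The probabilistic idea can be sketched in a few lines: instead of scoring hard genotype calls, propagate each SNP's genotype posterior (as produced by low-coverage sequencing or imputation) into a mean and variance for the score. The function below is an illustrative simplification (independent SNPs, normal-approximation interval, invented toy values), not the authors' estimator:

```python
import math

def pgs_with_uncertainty(betas, genotype_posteriors):
    """Mean and variance of a polygenic score under per-SNP genotype
    posteriors [p(g=0), p(g=1), p(g=2)], assuming independence across SNPs."""
    mean, var = 0.0, 0.0
    for beta, probs in zip(betas, genotype_posteriors):
        e_g = sum(g * p for g, p in enumerate(probs))       # E[g]
        e_g2 = sum(g * g * p for g, p in enumerate(probs))  # E[g^2]
        mean += beta * e_g
        var += beta ** 2 * (e_g2 - e_g ** 2)
    return mean, var

betas = [0.2, -0.1, 0.05]                            # toy per-SNP effect sizes
hard_calls = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]       # certain genotypes
low_coverage = [[0.1, 0.8, 0.1], [0.7, 0.3, 0.0], [0.0, 0.2, 0.8]]
m_c, v_c = pgs_with_uncertainty(betas, hard_calls)   # hard calls: no variance
m_u, v_u = pgs_with_uncertainty(betas, low_coverage)
ci95 = (m_u - 1.96 * math.sqrt(v_u), m_u + 1.96 * math.sqrt(v_u))
```

A hard call contributes no score variance; uncertain calls widen the credible interval, which is what allows interval coverage to stay calibrated as sequencing depth varies.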
Subject(s)
Genomics, Single Nucleotide Polymorphism, Uncertainty, Genotype, Genomics/methods, Whole Genome Sequencing, Single Nucleotide Polymorphism/genetics
ABSTRACT
Genome-wide association studies (GWASs) have identified thousands of variants for disease risk. These studies have predominantly been conducted in individuals of European ancestries, which raises questions about their transferability to individuals of other ancestries. Of particular interest are admixed populations, usually defined as populations with recent ancestry from two or more continental sources. Admixed genomes contain segments of distinct ancestries that vary in composition across individuals in the population, allowing for the same allele to induce risk for disease on different ancestral backgrounds. This mosaicism raises unique challenges for GWASs in admixed populations, such as the need to correctly adjust for population stratification. In this work we quantify the impact of differences in estimated allelic effect sizes for risk variants between ancestry backgrounds on association statistics. Specifically, while the possibility of estimated allelic effect-size heterogeneity by ancestry (HetLanc) can be modeled when performing a GWAS in admixed populations, the extent of HetLanc needed to overcome the penalty from an additional degree of freedom in the association statistic has not been thoroughly quantified. Using extensive simulations of admixed genotypes and phenotypes, we find that controlling for and conditioning effect sizes on local ancestry can reduce statistical power by up to 72%. This finding is especially pronounced in the presence of allele frequency differentiation. We replicate simulation results using 4,327 African-European admixed genomes from the UK Biobank for 12 traits to find that for most significant SNPs, HetLanc is not large enough for GWASs to benefit from modeling heterogeneity in this way.
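The degree-of-freedom penalty described here can be reproduced with a small Monte Carlo sketch (illustrative, not the paper's simulation design): the same total association signal tested with a pooled 1-df statistic versus an ancestry-stratified 2-df statistic loses power when there is no heterogeneity.

```python
import random

random.seed(7)

CRIT_1DF, CRIT_2DF = 3.841, 5.991  # chi-square 95th percentiles, 1 and 2 df

def power(ncps, crit, n_sim=200_000):
    """Monte Carlo power of a chi-square test whose statistic is a sum of
    squared unit-variance normals with means sqrt(ncp_i), one per df."""
    hits = 0
    for _ in range(n_sim):
        stat = sum(random.gauss(ncp ** 0.5, 1.0) ** 2 for ncp in ncps)
        hits += stat > crit
    return hits / n_sim

# The same total signal (noncentrality 8) tested as one pooled effect (1 df)
# versus two ancestry-specific effects with no heterogeneity (2 df):
p_pooled = power([8.0], CRIT_1DF)          # ~0.81
p_stratified = power([4.0, 4.0], CRIT_2DF) # ~0.72: the extra df costs power
```

When HetLanc is absent, the stratified test pays the extra-degree-of-freedom penalty with nothing in return, which is the trade-off the abstract quantifies.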
Subject(s)
Population Genetics, Genome-Wide Association Study, Humans, Genome-Wide Association Study/methods, Gene Frequency/genetics, Genotype, Phenotype, Single Nucleotide Polymorphism/genetics
ABSTRACT
Plant health is increasingly threatened by abiotic and biotic stressors linked to anthropogenic global change. These stressors are frequently studied in isolation. However, they might have non-additive (antagonistic or synergistic) interactive effects that affect plant communities in unexpected ways. We conducted a global meta-analysis to summarize existing evidence on the joint effects of climate change (drought and warming) and biotic attack (pathogens) on plant performance. We also investigated the effect of drought and warming on pathogen performance, as this information is crucial for a mechanistic interpretation of potential indirect effects of climate change on plant performance mediated by pathogens. The final databases included 1230 pairwise cases extracted from 117 scientific articles published since 2006, on a global scale. We found that the combined negative effects of drought and pathogens on plant growth were lower than expected based on their main effects, supporting the existence of antagonistic interactions. Thus, the larger the magnitude of the drought, the lower the pathogen capacity to limit plant growth. On the other hand, the combination of warming and pathogens caused larger plant damage than expected, supporting the existence of synergistic interactions. Our results on the effects of drought and warming on pathogens revealed a limitation of their growth rates and abundance in vitro but an improvement under natural conditions, where multiple factors operate across the microbiome. Further research on the impact of climate change on traits explicitly defining the infective ability of pathogens would enhance the assessment of its indirect effects on plants. The evaluated plant and pathogen responses were conditioned by the intensity of drought or warming and by moderator categorical variables defining the pathosystems.
Overall, our findings reveal the need to incorporate the joint effect of climatic and biotic components of global change into predictive models of plant performance to identify non-additive interactions.
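The antagonism/synergism logic can be made concrete with log response ratios, a common effect size in factorial meta-analysis (the biomass numbers below are invented for illustration): an interaction is the deviation of the combined effect from the additive expectation.

```python
import math

def log_response_ratio(mean_treated, mean_control):
    """Effect of a stressor on plant growth as a log response ratio (lnRR)."""
    return math.log(mean_treated / mean_control)

control = 100.0                                    # invented biomass values
lrr_drought = log_response_ratio(70.0, control)    # drought alone
lrr_pathogen = log_response_ratio(80.0, control)   # pathogen alone
lrr_both = log_response_ratio(65.0, control)       # both stressors combined

additive_expectation = lrr_drought + lrr_pathogen  # the "no interaction" null
interaction = lrr_both - additive_expectation
antagonistic = interaction > 0   # less combined damage than additively predicted
synergistic = interaction < 0    # more combined damage than additively predicted
```

In this toy case the combined loss (35%) is smaller than the additive prediction (~44%), so the interaction term is positive: antagonism, as the meta-analysis found for drought plus pathogens.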
Subject(s)
Climate Change, Droughts, Plants, Host-Pathogen Interactions, Plant Development, Plant Diseases/microbiology, Plants/microbiology
ABSTRACT
PURPOSE: The objective of this systematic review was to describe the prevalence and magnitude of response shift effects for different response shift methods, populations, study designs, and patient-reported outcome measures (PROMs). METHODS: A literature search was performed in MEDLINE, PsycINFO, CINAHL, EMBASE, Social Science Citation Index, and Dissertations & Theses Global to identify longitudinal quantitative studies that examined response shift using PROMs, published before 2021. The magnitude of each response shift effect (effect sizes, R-squared, or percentage of respondents with response shift) was ascertained based on reported statistical information or as stated in the manuscript. Prevalence and magnitudes of response shift effects were summarized at two levels of analysis (study and effect levels), for recalibration and reprioritization/reconceptualization separately, and for different response shift methods and population, study design, and PROM characteristics. Analyses were conducted twice: (a) including all studies and samples, and (b) including only unrelated studies and independent samples. RESULTS: Of the 150 included studies, 130 (86.7%) detected response shift effects. Of the 4868 effects investigated, 793 (16.3%) revealed response shift. Effect sizes could be determined for 105 (70.0%) of the studies, for a total of 1130 effects, of which 537 (47.5%) resulted in detection of response shift. Whereas effect sizes varied widely, most median recalibration effect sizes (Cohen's d) were between 0.20 and 0.30, and median reprioritization/reconceptualization effect sizes rarely exceeded 0.15, across the characteristics. Similar results were obtained from unrelated studies. CONCLUSION: The results draw attention to the need to focus on understanding variability in response shift results: Who experiences response shifts, to what extent, and under which circumstances?
Subject(s)
Quality of Life, Research Design, Humans, Quality of Life/psychology, Patient-Reported Outcome Measures
ABSTRACT
The replication crisis has taught us to expect small-to-medium effects in psychological research. But this expectation is based on effect sizes calculated over single variables. Mahalanobis D, the multivariate equivalent of Cohen's d, can enable very large group differences to emerge from a collection of small-to-medium effects (here, reanalysing multivariate datasets from synaesthetes and controls). The use of multivariate effect sizes is not a sleight of hand but may instead be a truer reflection of the degree of psychological differences between people, one that has been largely underappreciated.
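The arithmetic behind this claim is simple for standardized variables: when the variables are uncorrelated, D is the root sum of squares of the univariate d values, so ten d = 0.3 effects already combine into D ≈ 0.95; correlated (redundant) variables add much less. A minimal sketch:

```python
def mahalanobis_d_uncorrelated(ds):
    """Mahalanobis D for uncorrelated standardized variables:
    D = sqrt(sum of squared univariate d values)."""
    return sum(d * d for d in ds) ** 0.5

def mahalanobis_d_two(d1, d2, r):
    """D for two standardized variables with correlation r, from the
    2x2 inverse covariance: D^2 = (d1^2 + d2^2 - 2*r*d1*d2) / (1 - r^2)."""
    return ((d1 * d1 + d2 * d2 - 2 * r * d1 * d2) / (1 - r * r)) ** 0.5

small_effects = [0.3] * 10                     # ten small-to-medium effects
D = mahalanobis_d_uncorrelated(small_effects)  # ~0.95: a large difference
```

Correlation between measures shrinks the multivariate gain: two d = 0.3 variables at r = 0.5 yield a smaller D than the same two at r = 0, because the second variable is partly redundant.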
Subject(s)
Cognition, Color Perception, Humans, Synesthesia
ABSTRACT
The paper makes a case that the current discussions on replicability and the abuse of significance testing have overlooked a more general contributor to the untrustworthiness of published empirical evidence: the uninformed and recipe-like implementation of statistical modeling and inference. It is argued that this contributes to the untrustworthiness problem in several different ways, including [a] statistical misspecification, [b] unwarranted evidential interpretations of frequentist inference results, and [c] questionable modeling strategies that rely on curve-fitting. What is more, the alternative proposals to replace or modify frequentist testing, including [i] replacing p-values with observed confidence intervals and effect sizes, and [ii] redefining statistical significance, will not address the untrustworthiness-of-evidence problem since they are equally vulnerable to [a]-[c]. The paper calls for distinguishing unduly data-dependent 'statistical results', such as a point estimate, a p-value, and accept/reject H0, from 'evidence for or against inferential claims'. The post-data severity (SEV) evaluation of the accept/reject H0 results converts them into evidence for or against germane inferential claims. These claims can be used to address/elucidate several foundational issues, including (i) statistical vs. substantive significance, (ii) the large-n problem, and (iii) the replicability of evidence. Also, the SEV perspective sheds light on the impertinence of the proposed alternatives [i]-[ii], and oppugns the alleged arbitrariness of framing H0 and H1, which is often exploited to undermine the credibility of frequentist testing.
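For a one-sided z-test the post-data severity evaluation has a closed form, and it also illustrates the large-n problem: with a huge sample, a statistically significant result may pass severely only for trivially small discrepancies from the null. A sketch (simple normal model with known sigma; the numbers are illustrative, not from the paper):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def severity_greater(xbar, mu1, sigma, n):
    """Post-data severity of the claim mu > mu1 after H0: mu <= mu0 was
    rejected in a one-sided z-test: the probability of a result as small as
    the observed xbar if mu were only mu1."""
    return norm_cdf((xbar - mu1) * math.sqrt(n) / sigma)

# Large-n illustration: n = 10,000, sigma = 1, observed mean 0.025 gives
# z = 2.5, so H0: mu <= 0 is rejected at the 5% level. But:
sev_tiny = severity_greater(0.025, 0.001, 1.0, 10_000)        # ~0.99
sev_substantive = severity_greater(0.025, 0.02, 1.0, 10_000)  # ~0.69
```

The rejection provides severe evidence for mu > 0.001 but only weak evidence for the substantively interesting claim mu > 0.02, separating statistical from substantive significance as the paper advocates.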
ABSTRACT
BACKGROUND: An appropriate sample size is essential for obtaining a precise and reliable outcome of a study. In machine learning (ML), studies with inadequate samples suffer from overfitting and have a lower probability of producing true effects, while increasing the sample size improves prediction accuracy but may yield no meaningful change beyond a certain point. Existing statistical approaches using the standardized mean difference, effect size, and statistical power for determining sample size are potentially biased due to miscalculations or a lack of experimental detail. This study aims to design criteria for evaluating sample size in ML studies. We examined the average and grand effect sizes and the performance of five ML methods using simulated datasets and three real datasets to derive the criteria for sample size. We systematically increased the sample size, starting from 16, by random sampling, and examined the impact of sample size on classifier performance and both effect sizes. Tenfold cross-validation was used to quantify accuracy. RESULTS: The results demonstrate that the effect sizes and classification accuracies increase, while the variances in effect sizes shrink, as samples are added when the dataset has good discriminative power between the two classes. By contrast, indeterminate datasets had poor effect sizes and classification accuracies, which did not improve with increasing sample size in either the simulated or the real datasets. A good dataset exhibited a significant difference between average and grand effect sizes. We derived two criteria based on these findings to assess a chosen sample size by combining the effect size and the ML accuracy. A sample size is considered suitable when it yields an adequate effect size (≥ 0.5) and ML accuracy (≥ 80%). Beyond an appropriate sample size, adding samples provides little benefit, as it does not significantly change the effect size or accuracy, and a good cost-benefit ratio is therefore achieved. CONCLUSION: We believe these practical criteria can serve as a reference for both authors and editors to evaluate whether a selected sample size is adequate for a study.
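The two criteria can be wired into a simple learning-curve check. Below is a toy sketch in which a one-dimensional midpoint classifier stands in for the five ML methods (the class separation and seed are invented; the 0.5 / 80% thresholds are the ones proposed above):

```python
import random

random.seed(1)

def cohens_d(a, b):
    """Absolute standardized mean difference with pooled SD."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    na, nb = len(a), len(b)
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = ((va * (na - 1) + vb * (nb - 1)) / (na + nb - 2)) ** 0.5
    return abs(mb - ma) / sp

def simulate(n_per_class, sep=2.2):
    """Two Gaussian classes `sep` SDs apart: effect size plus the accuracy of
    a midpoint classifier (resubstitution accuracy, for illustration only)."""
    a = [random.gauss(0.0, 1.0) for _ in range(n_per_class)]
    b = [random.gauss(sep, 1.0) for _ in range(n_per_class)]
    mid = (sum(a) / len(a) + sum(b) / len(b)) / 2.0
    acc = (sum(x < mid for x in a) + sum(x >= mid for x in b)) / (2 * n_per_class)
    return cohens_d(a, b), acc

def adequate(d, acc, d_min=0.5, acc_min=0.80):
    """The two proposed criteria: effect size >= 0.5 and accuracy >= 80%."""
    return d >= d_min and acc >= acc_min

d16, acc16 = simulate(16)     # small sample: noisy estimates
d256, acc256 = simulate(256)  # larger sample: stable estimates
```

Repeating `simulate` over a grid of sample sizes traces out where both criteria first hold, which is the paper's notion of an adequate sample size.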
Subject(s)
Machine Learning, Research Design, Sample Size, Probability
ABSTRACT
Meta-analysis is a powerful tool in sport and exercise psychology. However, it has a number of pitfalls, and some lead to ill-advised comparisons and overestimation of effects. The impetus for this research note is provided by a recent systematic review of meta-analyses that examined the correlates of sport performance and has fallen foul of some of the pitfalls. Although the systematic review potentially has great value for researchers and practitioners alike, it treats effects from correlational and intervention studies as yielding equivalent information, double-counts multiple studies, and uses an effect size for correlational studies (Cohen's d) that provides an extreme contrast of unclear practical relevance. These issues impact interpretability, bias, and usefulness of the findings. This methodological note explains each pitfall and illustrates use of an appropriate equivalent effect size for correlational studies (Mathur and VanderWeele's d) to help researchers avoid similar issues in future work.
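The contrast at issue can be made explicit. Under a linear relation between X and Y with correlation r, two groups delta SDs apart on X differ by delta·r·SD(Y) on Y, and the residual SD is SD(Y)·sqrt(1−r²); the conventional r-to-d conversion fixes delta = 2, an extreme contrast, whereas a 1-SD contrast halves the implied d. The sketch below captures this in the spirit of Mathur and VanderWeele's proposal (consult the original for the exact estimator and its variance):

```python
def d_from_r(r, delta=1.0):
    """d implied by correlation r for two groups `delta` SDs apart on the
    correlate: mean difference delta*r*SD(Y) over residual SD SD(Y)*sqrt(1-r^2)."""
    return delta * r / (1.0 - r * r) ** 0.5

def d_extreme_groups(r):
    """Conventional r-to-d conversion, equivalent to a 2-SD contrast of
    unclear practical relevance."""
    return 2.0 * r / (1.0 - r * r) ** 0.5
```

For r = 0.3, the conventional conversion reports d ≈ 0.63 while the 1-SD contrast gives d ≈ 0.31, which is why mixing the two inflates apparent correlates of performance.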
ABSTRACT
Researchers can generate bootstrap confidence intervals for some statistics in SPSS using the BOOTSTRAP command. However, this command can only be applied to selected procedures, and only to selected statistics in these procedures. We developed an extension command and prepared some sample syntax files based on existing approaches from the Internet to illustrate how researchers can (a) generate a large number of nonparametric bootstrap samples, (b) do desired analysis on all these samples, and (c) form the bootstrap confidence intervals for selected statistics using the OMS commands. We developed these tools to help researchers apply nonparametric bootstrapping to any statistics for which this method is appropriate, including statistics derived from other statistics, such as standardized effect size measures computed from the t test results. We also discussed how researchers can extend the tools for other statistics and scenarios they encounter.
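The three steps (resample, re-analyze, take quantiles) are language-agnostic; here is a Python sketch of a percentile bootstrap for a standardized effect size, with hypothetical data (the paper's SPSS workflow achieves the same via the OMS commands):

```python
import random

random.seed(42)

def cohens_d(a, b):
    """Standardized mean difference (group b minus group a, pooled SD)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    sp = ((va * (len(a) - 1) + vb * (len(b) - 1)) / (len(a) + len(b) - 2)) ** 0.5
    return (mb - ma) / sp

def bootstrap_ci(a, b, stat, n_boot=5000, alpha=0.05):
    """Percentile bootstrap: resample each group with replacement, recompute
    the statistic, and read off the empirical quantiles."""
    reps = sorted(
        stat([random.choice(a) for _ in a], [random.choice(b) for _ in b])
        for _ in range(n_boot)
    )
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

group1 = [4.1, 5.0, 5.5, 4.7, 5.2, 6.1, 4.9, 5.4, 5.8, 4.4]  # hypothetical
group2 = [5.6, 6.2, 6.8, 5.9, 7.0, 6.4, 5.7, 6.6, 7.2, 6.0]
lo, hi = bootstrap_ci(group1, group2, cohens_d)
```

Because `stat` is passed in as a function, the same loop serves any derived statistic, which is exactly the flexibility the extension command aims to provide.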
Subject(s)
Confidence Intervals, Statistics as Topic
ABSTRACT
The a priori calculation of statistical power has become common practice in behavioral and social sciences to calculate the necessary sample size for detecting an expected effect size with a certain probability (i.e., power). In multi-factorial repeated measures ANOVA, these calculations can sometimes be cumbersome, especially for higher-order interactions. For designs that only involve factors with two levels each, the paired t test can be used for power calculations, but some pitfalls need to be avoided. In this tutorial, we provide practical advice on how to express main and interaction effects in repeated measures ANOVA as single difference variables. In particular, we demonstrate how to calculate the effect size Cohen's d of this difference variable either based on means, variances, and covariances of conditions or by transforming [Formula: see text] or [Formula: see text] from the ANOVA framework into d. With the effect size correctly specified, we then show how to use the t test for sample size considerations by means of an empirical example. The relevant R code is provided in an online repository for all example calculations covered in this article.
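Expressing an ANOVA effect as a single difference variable amounts to applying contrast weights to the condition means and covariance matrix; the paired t test then operates on d = (w·m)/sqrt(wᵀΣw). A sketch for a 2×2 repeated-measures design (toy means; equal SDs and a common correlation are assumed for simplicity):

```python
def contrast_d(means, cov, w):
    """Cohen's d of the difference variable defined by contrast weights w:
    d = (w . m) / sqrt(w' Sigma w), ready for paired-t power calculations."""
    k = len(means)
    num = sum(w[i] * means[i] for i in range(k))
    var = sum(w[i] * w[j] * cov[i][j] for i in range(k) for j in range(k))
    return num / var ** 0.5

# 2x2 repeated measures, conditions ordered A1B1, A1B2, A2B1, A2B2
# (toy condition means; SD 2.0 and correlation 0.5 everywhere).
means = [10.0, 8.0, 9.0, 9.5]
sd, rho = 2.0, 0.5
cov = [[sd * sd if i == j else rho * sd * sd for j in range(4)] for i in range(4)]

d_main_A = contrast_d(means, cov, [0.5, 0.5, -0.5, -0.5])  # main effect of A
d_inter = contrast_d(means, cov, [1.0, -1.0, -1.0, 1.0])   # A x B interaction
```

The resulting d is exactly the d_z a power program expects for a one-sample (paired) t test, so higher-order effects need no special treatment beyond choosing the weights.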
Subject(s)
Research Design, Humans, Sample Size, Probability, Analysis of Variance
ABSTRACT
Multilevel models are used ubiquitously in the social and behavioral sciences and effect sizes are critical for contextualizing results. A general framework of R-squared effect size measures for multilevel models has only recently been developed. Rights and Sterba (2019) distinguished each source of explained variance for each possible kind of outcome variance. Though researchers have long desired a comprehensive and coherent approach to computing R-squared measures for multilevel models, the use of this framework has a steep learning curve. The purpose of this tutorial is to introduce and demonstrate using a new R package - r2mlm - that automates the intensive computations involved in implementing the framework and provides accompanying graphics to visualize all multilevel R-squared measures together. We use accessible illustrations with open data and code to demonstrate how to use and interpret the R package output.
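As a minimal illustration of the kind of decomposition r2mlm automates: for a random-intercept model, the total outcome variance splits into a fixed-effects share, an intercept-variance share, and a residual share, and each R-squared measure is one of these proportions. This is a simplified slice of the Rights and Sterba framework (which also handles random slopes and within/between splits), with invented variance components:

```python
def r2_random_intercept(var_fixed, tau2, sigma2):
    """Share of total outcome variance attributable to fixed effects (f),
    random intercept variation (m), and level-1 residual, for a
    random-intercept model (a simplified slice of the full framework)."""
    total = var_fixed + tau2 + sigma2
    return {"f": var_fixed / total, "m": tau2 / total, "resid": sigma2 / total}

# Hypothetical fitted quantities: variance of fixed-effect predictions,
# intercept variance tau^2, residual variance sigma^2.
parts = r2_random_intercept(var_fixed=2.0, tau2=1.0, sigma2=2.0)
```

The shares sum to one by construction, which is the property that lets the package display all measures together in a single stacked graphic.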
Subject(s)
Behavioral Sciences, Humans, Multilevel Analysis
ABSTRACT
The probability of superiority (PS) has been recommended as a simple-to-interpret effect size for comparing two independent samples, and several methods exist for computing the PS for this particular study design. However, educational and psychological interventions increasingly occur in clustered data contexts, and a review of the literature returned only one method for computing the PS in such contexts. In this paper, we propose a method for estimating the PS in clustered data contexts. Specifically, the proposal addresses study designs that compare two groups where group membership is determined at the cluster level. A cluster may be: (i) a group of cases with each case measured once, or (ii) a single case measured multiple times, resulting in longitudinal data. The proposal relies on nonparametric point estimates of the PS coupled with cluster-robust variance estimation, such that the proposed approach should remain adequate regardless of the distribution of the response data. Using Monte Carlo simulation, we show the approach to be unbiased for continuous and binary outcomes, while maintaining adequate frequentist properties. Moreover, our proposal performs better than the single extant method we found in the literature. The proposal is simple to implement in commonplace statistical software, and we provide accompanying R code. Hence, it is our hope that the method we present helps applied researchers better estimate group differences when comparing two groups whose membership is determined at the cluster level.
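The nonparametric point estimate at the heart of such a proposal is a cross-group pairwise comparison. The sketch below computes it for a toy clustered design (cluster structure shown for context; the cluster-robust variance step described in the abstract is omitted here):

```python
def prob_superiority(xs, ys):
    """Nonparametric PS: P(X > Y) + 0.5 * P(X = Y) over all cross-group pairs."""
    wins = ties = 0
    for x in xs:
        for y in ys:
            wins += x > y
            ties += x == y
    n_pairs = len(xs) * len(ys)
    return (wins + 0.5 * ties) / n_pairs

# Clustered design: group membership is assigned at the cluster level.
# Cases are pooled across clusters for the point estimate; the cluster
# structure matters for the standard error (handled via cluster-robust SEs).
treat_clusters = [[5, 6, 7], [4, 6], [8, 7, 6]]
ctrl_clusters = [[3, 4], [5, 4, 3], [2, 5]]
treat = [v for c in treat_clusters for v in c]
ctrl = [v for c in ctrl_clusters for v in c]
ps = prob_superiority(treat, ctrl)  # > 0.5: treated cases tend to score higher
```

Because the estimate uses only ranks of cross-group pairs, it needs no distributional assumption on the responses, which is what makes the approach robust for both continuous and binary outcomes.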
Subject(s)
Research Design, Software, Humans, Probability, Computer Simulation, Educational Status, Cluster Analysis, Monte Carlo Method
ABSTRACT
Despite its central role in revealing the neurobiological mechanisms of behavior, neuroimaging research faces the challenge of producing reliable biomarkers for cognitive processes and clinical outcomes. Statistically significant brain regions, identified by mass univariate statistical models commonly used in neuroimaging studies, explain minimal phenotypic variation, limiting the translational utility of neuroimaging phenotypes. This is potentially due to the observation that behavioral traits are influenced by variations in neuroimaging phenotypes that are globally distributed across the cortex and are therefore not captured by thresholded, statistical parametric maps commonly reported in neuroimaging studies. Here, we developed a novel multivariate prediction method, the Bayesian polyvertex score, that turns an unthresholded statistical parametric map into a summary score that aggregates the many but small effects across the cortex for behavioral prediction. By explicitly assuming a globally distributed effect size pattern and operating on the mass univariate summary statistics, it was able to achieve higher out-of-sample variance explained than mass univariate and popular multivariate methods while still preserving the interpretability of a generative model. Our findings suggest that similar to the polygenicity observed in the field of genetics, the neural basis of complex behaviors may rest in the global patterning of effect size variation of neuroimaging phenotypes, rather than in localized, candidate brain regions and networks.
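The core move, aggregating unthresholded mass-univariate summary statistics into one predictive score with noise-dependent shrinkage, can be sketched generically. This is an illustrative analogue, not the Bayesian polyvertex score estimator itself, and the shrinkage weight used is a hypothetical choice:

```python
def polyvertex_style_score(betas, ses, features, lam=1.0):
    """Aggregate vertexwise mass-univariate betas into one score per subject,
    shrinking each beta toward zero in proportion to its noise (the weight
    b^2 / (b^2 + lam * se^2) is a hypothetical shrinkage choice, not the
    published estimator)."""
    shrunk = [b * (b * b / (b * b + lam * se * se)) for b, se in zip(betas, ses)]
    return [sum(w * x for w, x in zip(shrunk, row)) for row in features]

# Two vertices: one precisely estimated effect, one noisy near-null effect.
scores = polyvertex_style_score(
    betas=[1.0, 0.1], ses=[0.1, 1.0],
    features=[[1.0, 1.0], [0.0, 0.0]],  # two subjects' vertexwise data
)
```

No vertex is thresholded away; noisy vertices simply contribute less, mirroring how polygenic scores in genetics retain sub-significant variants with down-weighted effects.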
Subject(s)
Brain Mapping/methods, Cerebral Cortex/physiology, Cognition/physiology, Neurological Models, Bayes Theorem, Humans, Individuality
ABSTRACT
In prevention science and related fields, large meta-analyses are common, and these analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single meta-regression model, even when the exact form of the dependence is unknown. RVE uses a working model of the dependence structure, but the two currently available working models are limited to each describing a single type of dependence. Drawing on flexible tools from multilevel and multivariate meta-analysis, this paper describes an expanded range of working models, along with accompanying estimation methods, which offer potential benefits in terms of better capturing the types of data structures that occur in practice and, under some circumstances, improving the efficiency of meta-regression estimates. We describe how the methods can be implemented using existing software (the "metafor" and "clubSandwich" packages for R), illustrate the proposed approach in a meta-analysis of randomized trials on the effects of brief alcohol interventions for adolescents and young adults, and report findings from a simulation study evaluating the performance of the new methods.
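One of the expanded working models, combining correlated and hierarchical effects, can be written down directly as a within-study working covariance matrix. The sketch below follows the commonly described CHE structure (between-study variance tau², within-study variance omega², assumed sampling correlation rho); the parameter values are invented:

```python
def che_working_cov(sampling_vars, tau2, omega2, rho):
    """Working covariance matrix for one study's dependent effect sizes under
    a correlated-and-hierarchical-effects (CHE) working model:
    diagonal: tau^2 + omega^2 + v_k; off-diagonal: tau^2 + rho*sqrt(v_k*v_l)."""
    k = len(sampling_vars)
    V = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            if i == j:
                V[i][j] = tau2 + omega2 + sampling_vars[i]
            else:
                V[i][j] = tau2 + rho * (sampling_vars[i] * sampling_vars[j]) ** 0.5
    return V

# One study reporting two effect sizes with sampling variances 0.04 and 0.09.
V = che_working_cov([0.04, 0.09], tau2=0.05, omega2=0.02, rho=0.6)
```

In practice this matrix is supplied to the multivariate meta-regression fit (e.g. via metafor's rma.mv), and cluster-robust standard errors then protect inferences if the working model is wrong, which is the division of labor the paper exploits.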
Subject(s)
Multivariate Analysis, Adolescent, Computer Simulation, Data Collection, Humans
ABSTRACT
Current statistical inference methods for task-fMRI suffer from two fundamental limitations. First, the focus is solely on detection of non-zero signal or signal change, a problem that is exacerbated for large scale studies (e.g. UK Biobank, N=40,000+) where the 'null hypothesis fallacy' causes even trivial effects to be determined as significant. Second, for any sample size, widely used cluster inference methods only indicate regions where a null hypothesis can be rejected, without providing any notion of spatial uncertainty about the activation. In this work, we address these issues by developing spatial Confidence Sets (CSs) on clusters found in thresholded Cohen's d effect size images. We produce an upper and lower CS to make confidence statements about brain regions where Cohen's d effect sizes have exceeded and fallen short of a non-zero threshold, respectively. The CSs convey information about the magnitude and reliability of effect sizes that is usually given separately in a t-statistic and effect estimate map. We expand the theory developed in our previous work on CSs for %BOLD change effect maps (Bowring et al., 2019) using recent results from the bootstrapping literature. By assessing the empirical coverage with 2D and 3D Monte Carlo simulations resembling fMRI data, we find our method is accurate in sample sizes as low as N=60. We compute Cohen's d CSs for the Human Connectome Project working memory task-fMRI data, illustrating the brain regions with a reliable Cohen's d response for a given threshold. By comparing the CSs with results obtained from a traditional statistical voxelwise inference, we highlight the improvement in activation localization that can be gained with the Confidence Sets.
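The logic of the upper and lower sets has a simple pointwise caricature: a voxel enters the lower (confident) set only when its effect estimate clears the threshold by a calibrated margin, and the upper set whenever it cannot be ruled out. The real method calibrates the margin spatially via the bootstrap; the sketch below simply takes a margin as given, with invented values:

```python
def confidence_sets(d_map, c, margin):
    """Simplified pointwise analogue of spatial Confidence Sets: the lower
    set holds voxels confidently ABOVE threshold c, the upper set all voxels
    that cannot be ruled out; the true excursion set {d >= c} is sandwiched
    between them ('margin' stands in for the bootstrap-calibrated half-width)."""
    lower = [i for i, d in enumerate(d_map) if d - margin >= c]
    upper = [i for i, d in enumerate(d_map) if d + margin >= c]
    return lower, upper

d_map = [0.1, 0.45, 0.55, 0.9, 0.3]        # toy voxelwise Cohen's d estimates
lower, upper = confidence_sets(d_map, c=0.5, margin=0.15)
```

The gap between the two sets is the spatial uncertainty that a thresholded significance map hides: voxels in the upper but not the lower set may or may not have d ≥ c.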
Subject(s)
Brain/diagnostic imaging, Connectome/methods, Magnetic Resonance Imaging/methods, Humans, Sample Size
ABSTRACT
The Adolescent Brain Cognitive Development (ABCD) Study is the largest single-cohort prospective longitudinal study of neurodevelopment and children's health in the United States. A cohort of n = 11,880 children aged 9-10 years (and their parents/guardians) was recruited across 22 sites and is being followed with in-person visits on an annual basis for at least 10 years. The study approximates the US population on several key sociodemographic variables, including sex, race, ethnicity, household income, and parental education. Data collected include assessments of health, mental health, substance use, culture and environment, and neurocognition, as well as geocoded exposures, structural and functional magnetic resonance imaging (MRI), and whole-genome genotyping. Here, we describe the ABCD Study's aims and design, as well as issues surrounding estimation of meaningful associations using its data, including population inferences, hypothesis testing, power and precision, control of covariates, interpretation of associations, and recommended best practices for reproducible research, analytical procedures, and reporting of results.
Subject(s)
Adolescent Development, Adolescent Psychology, Adolescent, Alcoholism/epidemiology, Brain/anatomy & histology, Brain/growth & development, Brain/physiology, Health Catchment Areas, Child, Cognition/physiology, Female, Follow-Up Studies, Gene-Environment Interaction, Humans, Male, Neurological Models, Psychological Models, Organ Size, Parents/psychology, Propensity Score, Prospective Studies, Reproducibility of Results, Research Design, Sample Size, Sampling Studies, Selection Bias, Socioeconomic Factors, United States
ABSTRACT
BACKGROUND: Attention-deficit/hyperactivity disorder (ADHD) is a prevalent neurodevelopmental disorder. Neuroanatomic heterogeneity limits our understanding of ADHD's etiology. This study aimed to parse the heterogeneity of ADHD and to determine whether patient subgroups could be discerned based on subcortical brain volumes. METHODS: Using the large ENIGMA-ADHD Working Group dataset, four subsamples (993 boys with and without ADHD, 653 adult men, 400 girls, and 447 women) were included in the analyses. We applied exploratory factor analysis (EFA) to seven subcortical volumes in order to constrain the complexity of the input variables and ensure more stable clustering results. Factor scores derived from the EFA were used to build networks. A community detection (CD) algorithm clustered participants into subgroups based on the networks. RESULTS: Exploratory factor analysis revealed three factors (basal ganglia, limbic system, and thalamus) in boys and men with and without ADHD. Factor structures for girls and women differed from those in males. Given sample size considerations, we concentrated subsequent analyses on males. Male participants could be separated into four communities, of which one was absent in healthy men. Significant case-control differences in subcortical volumes were observed within communities in boys, often with stronger effect sizes compared to the entire sample. As in the entire sample, none were observed in men. Affected men in two of the communities presented comorbidities more frequently than those in other communities. There were no significant differences in ADHD symptom severity, IQ, or medication use between communities in either boys or men. CONCLUSIONS: Our results indicate that neuroanatomic heterogeneity in subcortical volumes exists, irrespective of ADHD diagnosis. Effect sizes of case-control differences appear more pronounced in at least some of the subgroups.
Subject(s)
Attention Deficit Disorder with Hyperactivity, Adult, Attention Deficit Disorder with Hyperactivity/diagnostic imaging, Attention Deficit Disorder with Hyperactivity/epidemiology, Brain/diagnostic imaging, Case-Control Studies, Female, Humans, Magnetic Resonance Imaging, Male, Thalamus/diagnostic imaging
ABSTRACT
BACKGROUND: Survival rates for breast cancer (BC) are increasing, leading to growing interest in treatment-related late effects. The aim of the present study was to explore late effects using patient-reported outcome measures in postmenopausal BC survivors in standard follow-up care. The results were compared to age- and gender-matched data from the general Danish population. MATERIAL AND METHODS: Postmenopausal BC survivors in routine follow-up care between April 2016 and February 2018 at the Department of Oncology, Aarhus University Hospital, Denmark were asked to complete the EORTC QLQ-C30 and BR23 questionnaires together with three items on neuropathy, myalgia, and arthralgia from the PRO-CTCAE. Patients were at different time intervals from primary treatment, enabling a cross-sectional study of reported late effects at different time points after primary treatment. The time intervals used in the analysis were ≤1, 1-2, 2-3, 3-4, 4-5, and 5+ years. The QLQ-C30 results were compared with reference data from the general Danish female population. Between-group differences are presented as effect sizes (ESs; Cohen's d). RESULTS: A total of 1089 BC survivors participated. Compared with the reference group, BC survivors reported better global health status 2-3 and 4-5 years after surgery (d = 0.26) and better physical functioning 2-3 years after (0.21). Poorer outcomes in BC survivors compared with the reference group were found for cognitive functioning (0-4 and 5+ years), fatigue (0-2 years), insomnia (1-3 years), emotional functioning (3-4 years), and social functioning (≤1 year after surgery), with ESs ranging from 0.20 to 0.41. For the remaining outcomes, no ESs exceeded 0.20.
CONCLUSION: Only small to medium ESs were found for better global health and physical functioning and poorer outcomes for cognitive functioning, fatigue, insomnia, emotional functioning, and social functioning in postmenopausal BC survivors, who otherwise reported similar overall health-related quality of life compared with the general Danish female population.
Subject(s)
Breast Neoplasms, Cancer Survivors, Breast Neoplasms/epidemiology, Breast Neoplasms/therapy, Cross-Sectional Studies, Female, Humans, Patient-Reported Outcome Measures, Postmenopause, Quality of Life, Surveys and Questionnaires
ABSTRACT
Vibration analysis is an active area of research aimed, among other goals, at accurate classification of machinery failure modes. The analysis often leads to complex and convoluted signal processing pipeline designs, which are computationally demanding and often cannot be deployed in IoT devices. In the current work, we address this issue by proposing a data-driven methodology that allows optimising and justifying the complexity of the signal processing pipelines. Additionally, aiming to make IoT vibration analysis systems more cost- and computationally efficient, we use the MAFAULDA vibration dataset to assess the changes in failure classification performance at low sampling rates as well as short observation time windows. We find that a decrease of the sampling rate from 50 kHz to 1 kHz leads to a statistically significant classification performance drop. A statistically significant decrease is also observed for the 0.1 s time window compared to the 5 s one. However, the effect sizes are small to medium, suggesting that in certain settings lower sampling rates and shorter observation windows might be worth using, consequently making the use of more cost-efficient sensors feasible. The proposed optimisation approach, as well as the statistically supported findings of the study, allow for an efficient design of IoT vibration analysis systems, both in terms of complexity and costs, bringing us one step closer to widely accessible IoT/Edge-based vibration analysis.
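Emulating lower sampling rates and shorter windows from a high-rate recording is the mechanical core of such an assessment. A sketch with stand-in data (block-averaging as a crude decimator; a production pipeline would apply a proper anti-aliasing low-pass filter before decimating):

```python
def emulate_low_rate(signal, factor):
    """Emulate a lower sampling rate by block-averaging `factor` consecutive
    samples (crude anti-alias + decimate; not a substitute for proper
    low-pass filtering)."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal) - factor + 1, factor)]

def windows(signal, rate_hz, window_s):
    """Split a recording into non-overlapping observation windows."""
    n = int(rate_hz * window_s)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

# One second of a 50 kHz recording (stand-in data), emulated at 1 kHz
# (factor 50), then cut into 0.1 s observation windows.
raw = [float(i % 7) for i in range(50_000)]
low = emulate_low_rate(raw, 50)   # 1000 samples at 1 kHz
wins = windows(low, 1000, 0.1)    # ten 100-sample windows
```

Running the same classifier on `raw` versus `low`, and on 5 s versus 0.1 s windows, yields the paired comparisons whose effect sizes the study reports as small to medium.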
ABSTRACT
This study examined associations between child sexual abuse (CSA) survivors' self-definition status (i.e., whether or not survivors self-identified as sexually abused) and multiple measures of psychopathology, self-system functioning, and risk behaviors. We evaluated the hypothesis that survivors with concordant abuse perceptions (i.e., individuals who reported objective CSA and self-defined as sexually abused) would evidence more pronounced adjustment difficulties in young adulthood than survivors with discordant perceptions (i.e., individuals who reported objective CSA but did not self-define as sexually abused). In this large and ethnically diverse college student sample (N = 2,195; 63.8% female, 36.2% male; 83.3% nonwhite), objective experiences of CSA were associated with increased psychopathology, decreased self-system functioning, and increased risk behaviors, but the magnitude of these effects varied by survivors' self-definition status. Relative to their nonmaltreated peers, survivors with concordant abuse perceptions evidenced the largest elevations in psychopathology and risk behaviors, whereas survivors with discordant abuse perceptions evidenced the largest deficits in self-system functioning. These findings indicate that standard screening criteria may misidentify a sizable group of CSA survivors because these individuals do not perceive their experiences as "abuse." Efforts to understand the meaning ascribed to CSA experiences may profitably guide clinical interventions to enhance specific domains of functioning.