ABSTRACT
To understand psychological data, it is crucial to examine the structure and dimensions of variables. In this study, we examined alternative estimation algorithms to the conventional GLASSO-based exploratory graph analysis (EGA) in network psychometric models to assess the dimensionality structure of the data. The study applied Bayesian conjugate or Jeffreys priors to estimate the graphical structure and then used the Louvain community detection algorithm to partition and identify groups of nodes, which allowed the detection of both multidimensional and unidimensional factor structures. Monte Carlo simulations suggested that the two alternative Bayesian estimation algorithms performed comparably to, or better than, the GLASSO-based EGA and conventional parallel analysis (PA). When estimating the multidimensional factor structure, the analytically based method (EGA.analytical) showed the best balance between accuracy and mean bias/absolute errors, with accuracy tied with EGA but with the smallest errors. The sampling-based approach (EGA.sampling) yielded higher accuracy and smaller errors than PA, and lower accuracy but also smaller errors than EGA. Techniques from the two algorithms performed more stably than EGA and PA across data conditions. When estimating the unidimensional structure, PA performed best, followed closely by EGA, and then EGA.analytical and EGA.sampling. Furthermore, the study explored four fully Bayesian techniques for assessing dimensionality in network psychometrics. The results demonstrated superior performance when using Bayesian hypothesis testing or deriving posterior samples of graph structures under small sample sizes. The study recommends EGA.analytical as an alternative tool for assessing dimensionality and advocates EGA.sampling as a valuable alternative technique.
The findings also provide encouraging support for extending the regularization-based EGA network modeling method to the Bayesian framework; future directions in this line of work are discussed. The practical application of the techniques is illustrated with two empirical examples in R.
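The pipeline the abstract describes (estimate a graphical structure, then partition it with a community detection algorithm) can be sketched in a few lines of Python. This is a hedged illustration rather than the EGAnet implementation: a hard threshold on partial correlations stands in for GLASSO or the Bayesian estimators, and networkx's Louvain routine stands in for the full EGA algorithm.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

def ega_sketch(data, threshold=0.1, seed=0):
    """Estimate the number of dimensions of an n x p data matrix."""
    corr = np.corrcoef(data, rowvar=False)
    prec = np.linalg.inv(corr)
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)            # partial correlation matrix
    np.fill_diagonal(pcor, 0.0)
    p = pcor.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(p))
    for i in range(p):
        for j in range(i + 1, p):
            w = abs(pcor[i, j])
            if w > threshold:                # crude stand-in for GLASSO sparsity
                g.add_edge(i, j, weight=w)
    return len(louvain_communities(g, seed=seed))

# Two independent factors with three indicators each should give two communities.
rng = np.random.default_rng(1)
factors = rng.normal(size=(500, 2))
load = np.zeros((6, 2)); load[:3, 0] = 0.8; load[3:, 1] = 0.8
x = factors @ load.T + 0.4 * rng.normal(size=(500, 6))
print(ega_sketch(x))
```

The number of communities found is the dimensionality estimate; in EGA proper, the sparsity step and the community detection algorithm are exactly the choices the abstracts above compare.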
ABSTRACT
This study proposes a procedure for substantive dimensionality estimation in the presence of wording effects, the inconsistent response to regular and reversed self-report items. The procedure developed consists of subtracting an approximate estimate of the wording effects variance from the sample correlation matrix and then estimating the substantive dimensionality on the residual correlation matrix. This is achieved by estimating a random intercept factor with unit loadings for all the regular and unrecoded reversed items. The accuracy of the procedure was evaluated through an extensive simulation study that manipulated nine relevant variables and employed the exploratory graph analysis (EGA) and parallel analysis (PA) retention methods. The results indicated that combining the proposed procedure with EGA or PA achieved high accuracy in estimating the substantive latent dimensionality, but that EGA was superior. Additionally, the present findings shed light on the complex ways that wording effects impact the dimensionality estimates when the response bias in the data is ignored. A tutorial on substantive dimensionality estimation with the R package EGAnet is offered, as well as practical guidelines for applied researchers.
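To make the subtraction step concrete, here is a hedged numpy sketch under strong simplifying assumptions (one substantive factor, equal loading magnitudes, known item keying). The estimator shown, which averages same-keyed and opposite-keyed covariances so that the substantive part cancels, is only an illustration of the idea, not the random intercept model used in the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 2000, 3                     # k regular + k reversed (unrecoded) items
lam, c = 0.7, 0.3                  # substantive loading; wording-intercept variance
theta = rng.normal(size=n)                      # substantive trait
w = rng.normal(scale=np.sqrt(c), size=n)        # wording intercept (unit loadings)
signs = np.array([1] * k + [-1] * k)            # reversed items load negatively
x = theta[:, None] * (lam * signs) + w[:, None] + 0.5 * rng.normal(size=(n, 2 * k))

s = np.cov(x, rowvar=False)
off = ~np.eye(2 * k, dtype=bool)
same = np.outer(signs, signs) > 0
within = s[off & same].mean()       # same-keyed pairs:     lam^2 + c
between = s[off & ~same].mean()     # opposite-keyed pairs: -lam^2 + c
c_hat = (within + between) / 2      # substantive part cancels, leaving c

residual = s.copy()
residual[off] -= c_hat              # subtract wording variance from off-diagonals
print(round(c_hat, 2))
```

Dimensionality methods such as EGA or PA would then be run on the residual matrix rather than on the raw covariances.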
Subject(s)
Psychometrics, Psychometrics/methods, Humans, Factor Analysis, Self Report, Statistical Models, Computer Simulation, Statistical Data Interpretation
ABSTRACT
Interest in the psychology of misinformation has exploded in recent years. Despite ample research, to date there is no validated framework to measure misinformation susceptibility. We therefore introduce Verification done, a nuanced interpretation schema and assessment tool that simultaneously considers Veracity discernment and its distinct, measurable abilities (real/fake news detection) and biases (distrust/naïveté: negative/positive judgment bias). We then conduct three studies with seven independent samples (Ntotal = 8504) to show how to develop, validate, and apply the Misinformation Susceptibility Test (MIST). In Study 1 (N = 409) we use a neural network language model to generate items, and use three psychometric methods (factor analysis, item response theory, and exploratory graph analysis) to create the MIST-20 (20 items; completion time < 2 minutes), the MIST-16 (16 items; < 2 minutes), and the MIST-8 (8 items; < 1 minute). In Study 2 (N = 7674) we confirm the internal and predictive validity of the MIST in five national quota samples (US, UK), across 2 years, from three different sampling platforms: Respondi, CloudResearch, and Prolific. We also explore the MIST's nomological net and generate age-, region-, and country-specific norm tables. In Study 3 (N = 421) we demonstrate how the MIST, in conjunction with Verification done, can provide novel insights on existing psychological interventions, thereby advancing theory development. Finally, we outline the versatile implementations of the MIST as a screening tool, covariate, and intervention evaluation framework. As all methods are transparently reported and detailed, this work will allow other researchers to create similar scales or adapt them for any population of interest.
Subject(s)
Communication, Judgment, Humans, Psychometrics/methods, Language, Factor Analysis
ABSTRACT
Identifying the correct number of factors in multivariate data is fundamental to psychological measurement. Factor analysis has a long tradition in the field, but it has been challenged recently by exploratory graph analysis (EGA), an approach based on network psychometrics. EGA first estimates a network and then applies the Walktrap community detection algorithm. Simulation studies have demonstrated that EGA recovers the number of communities corresponding to the factors in simulated data with accuracy comparable to or better than factor analytic methods. Despite EGA's effectiveness, there has yet to be an investigation into whether other sparsity induction methods or community detection algorithms could achieve equivalent or better performance. Furthermore, unidimensional structures are fundamental to psychological measurement, yet they have been sparsely studied in simulations using community detection algorithms. In the present study, we performed a Monte Carlo simulation using the zero-order correlation matrix, GLASSO, and two variants of a non-regularized partial correlation sparsity induction method with several community detection algorithms. We examined the performance of these method-algorithm combinations in both continuous and polytomous data across a variety of conditions. The results indicate that the Fast-greedy, Louvain, and Walktrap algorithms paired with the GLASSO method were consistently among the most accurate and least biased overall.
Subject(s)
Algorithms, Humans, Monte Carlo Method, Psychometrics, Computer Simulation
ABSTRACT
The accuracy of factor retention methods for structures with one or more general factors, such as those typically encountered in intelligence, personality, and psychopathology research, has often been overlooked in dimensionality research. To address this issue, we compared the performance of several factor retention methods in this context, including a network psychometrics approach developed in this study. For estimating the number of group factors, these methods were the Kaiser criterion, the empirical Kaiser criterion, parallel analysis with principal components (PAPCA) or principal axis, and exploratory graph analysis with Louvain clustering (EGALV). We then estimated the number of general factors using the factor scores of the first-order solution suggested by the best two methods, yielding a "second-order" version of PAPCA (PAPCA-FS) and EGALV (EGALV-FS). Additionally, we examined the direct multilevel solution provided by EGALV. All the methods were evaluated in an extensive simulation manipulating nine variables of interest, including population error. The results indicated that EGALV and PAPCA displayed the best overall performance in retrieving the true number of group factors, the former being more sensitive to high cross-loadings and the latter to weak group factors and small samples. Regarding the estimation of the number of general factors, both PAPCA-FS and EGALV-FS showed close to perfect accuracy across all conditions, while EGALV was inaccurate. The methods based on EGA were robust to the conditions most likely to be encountered in practice. Therefore, we highlight the particular usefulness of EGALV (group factors) and EGALV-FS (general factors) for assessing bifactor structures with multiple general factors.
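Horn's parallel analysis, the PA/PAPCA family referred to above, retains components whose observed eigenvalues exceed those of random data of the same size. A minimal sketch (this version simply counts every eigenvalue above the random mean rather than stopping at the first failure):

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Count components whose observed eigenvalues exceed the mean
    eigenvalues of same-sized random normal data (Horn's criterion)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros((n_sims, p))
    for i in range(n_sims):
        r = rng.normal(size=(n, p))
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    return int(np.sum(obs > rand.mean(axis=0)))

# Two simulated factors with four indicators each.
rng = np.random.default_rng(3)
factors = rng.normal(size=(400, 2))
load = np.zeros((8, 2)); load[:4, 0] = 0.7; load[4:, 1] = 0.7
x = factors @ load.T + 0.6 * rng.normal(size=(400, 8))
print(parallel_analysis(x))
```

The second-order variants above apply the same logic a second time, to factor scores from the first-order solution rather than to the raw items.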
ABSTRACT
The local independence assumption states that variables are unrelated after conditioning on a latent variable. Common problems that arise from violations of this assumption include model misspecification, biased model parameters, and inaccurate estimates of internal structure. These problems are not limited to latent variable models but also apply to network psychometrics. This paper proposes a novel network psychometric approach to detect locally dependent pairs of variables using network modeling and a graph theory measure called weighted topological overlap (wTO). Using simulation, this approach is compared to contemporary local dependence detection methods such as exploratory structural equation modeling with standardized expected parameter change and a recently developed approach using partial correlations and a resampling procedure. Different approaches to determining local dependence using statistical significance and cutoff values are also compared. Continuous, polytomous (5-point Likert scale), and dichotomous (binary) data were generated with skew across a variety of conditions. Our results indicate that cutoff values work better than significance approaches. Overall, the best performing local dependence detection methods were the network psychometric approaches using wTO with the graphical least absolute shrinkage and selection operator (with extended Bayesian information criterion) and wTO with the Bayesian Gaussian graphical model.
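The wTO statistic at the heart of the proposed approach can be computed directly from a weighted adjacency matrix (e.g., of partial correlations). A minimal sketch using the standard formula, wTO_ij = (sum_u a_iu * a_uj + a_ij) / (min(k_i, k_j) + 1 - a_ij):

```python
import numpy as np

def wto(w):
    """Weighted topological overlap of a symmetric weight matrix (zero diagonal)."""
    a = np.abs(w)
    k = a.sum(axis=1)                      # node strengths
    num = a @ a + a                        # shared-neighbor weight + direct edge
    den = np.minimum.outer(k, k) + 1.0 - a
    out = num / den
    np.fill_diagonal(out, 0.0)
    return out

# Three nodes that all connect to each other: every pair has high overlap.
w = np.array([[0.0, 0.6, 0.5],
              [0.6, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(np.round(wto(w), 3))
```

A pair whose wTO exceeds a chosen cutoff shares most of its connections and is flagged as locally dependent, which matches the paper's finding that cutoffs outperform significance tests.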
Subject(s)
Statistical Models, Theoretical Models, Psychometrics/methods, Bayes Theorem, Computer Simulation
ABSTRACT
Trump-supporting Twitter posting activity from right-wing Russian trolls active during the 2016 United States presidential election was analyzed at multiple timescales using a recently developed procedure for separating linear and nonlinear components of time series. Trump-supporting topics were extracted with Dynamic Exploratory Graph Analysis (DynEGA) and analyzed with the Hankel Alternative View of Koopman (HAVOK) procedure. HAVOK is an exploratory and predictive technique that extracts a linear model for the time series and a corresponding nonlinear time series that is used as a forcing term for the linear model. Together, this forced linear model can produce surprisingly accurate reconstructions of nonlinear and chaotic dynamics. Using the R package havok, the Russian troll data yielded well-fitting models at several timescales but not at others, suggesting that only a few timescales were important for representing the dynamics of the troll factory. We identified system features that were timescale-universal versus timescale-specific. Timescale-universal features included cycles inherent to troll factory governance, which identified their work-day and work-week organization, later confirmed by published insider interviews. Cycles were captured by eigenvector basis components resembling Fourier modes rather than the Legendre polynomials typical for HAVOK. This may be interpreted as the troll factory having intrinsic dynamics that are highly coupled to nearly stationary cycles. Forcing terms were timescale-specific: they represented external events that precipitated major changes in the time series and aligned with major events during the political campaign. The HAVOK models specified interactions between the discovered components, allowing us to reverse-engineer the operation of the Russian troll factory. Steps and decision points in the HAVOK analysis are presented and the results are described in detail.
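The first HAVOK step, before the forced linear model is fit, is a time-delay embedding: stack shifted copies of the series into a Hankel matrix and take its SVD to obtain an eigen-time-series basis. A minimal numpy sketch (not the havok package):

```python
import numpy as np

def hankel_svd(x, rows):
    """Time-delay embedding followed by SVD (the first HAVOK step)."""
    cols = len(x) - rows + 1
    H = np.array([x[i:i + cols] for i in range(rows)])   # Hankel matrix
    return np.linalg.svd(H, full_matrices=False)

t = np.linspace(0, 20 * np.pi, 2000)          # ten full periods of a sine
u, s, vt = hankel_svd(np.sin(t), rows=200)    # delay window of about one period
# A sine is rank 2 in delay coordinates (its sin and cos components), so only
# the first two singular values are non-negligible.
print(np.round(s[:4] / s[0], 4))
```

For a nearly periodic signal like the troll factory's work-week cycles, the dominant left singular vectors resemble Fourier modes, which is exactly the behavior the abstract reports.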
ABSTRACT
Introduction: The visual signals evoked at the retinal ganglion cells are modified and modulated by various synaptic inputs that impinge on lateral geniculate nucleus cells before they are sent to the cortex. The selectivity of geniculate inputs for clustering or forming microcircuits on discrete dendritic segments of geniculate cell types may provide the structural basis for network properties of the geniculate circuitry and differential signal processing through the parallel pathways of vision. In our study, we aimed to reveal the patterns of input selectivity on morphologically discernible relay cell types and interneurons in the mouse lateral geniculate nucleus. Methods: We used two sets of Scanning Blockface Electron Microscopy (SBEM) image stacks and the Reconstruct software to manually reconstruct terminal boutons and dendrite segments. First, using an unbiased terminal sampling (UTS) approach and statistical modeling, we identified the criteria for volume-based sorting of geniculate boutons into their putative origins. Geniculate terminal boutons sorted into retinal and non-retinal categories based on previously described mitochondrial morphology could be further sorted into multiple subpopulations based on their bouton volume distributions. Terminals deemed non-retinal based on the morphological criteria consisted of five distinct subpopulations, including small-sized putative corticothalamic and cholinergic boutons, two medium-sized putative GABAergic inputs, and a large-sized bouton type that contains dark mitochondria. Retinal terminals also consisted of four distinct subpopulations. The cutoff criteria for these subpopulations were then applied to datasets of terminals that synapse on reconstructed dendrite segments of relay cells or interneurons.
Results: Using a network analysis approach, we found an almost complete segregation of retinal and cortical terminals on putative X-type cell dendrite segments characterized by grape-like appendages and triads. On these cells, interneuron appendages intermingle with retinal and other medium-sized terminals to form triads within glomeruli. In contrast, a second, presumed Y-type cell displayed dendrodendritic puncta adherentia and received all terminal types without selectivity for synapse location; these were not engaged in triads. Furthermore, the contributions of retinal and cortical synapses received by X-, Y-, and interneuron dendrites differed such that over 60% of inputs to interneuron dendrites were from the retina, as opposed to 20% and 7% for X- and Y-type cells, respectively. Conclusion: The results underline differences in the network properties of synaptic inputs from distinct origins on geniculate cell types.
ABSTRACT
In this article, existing research investigating how school performance relates to cognitive, self-awareness, language, and personality processes is reviewed. We outline the architecture of the mind, involving a general factor, g, that underlies distinct mental processes (i.e., executive, reasoning, language, cognizance, and personality processes). From preschool to adolescence, g shifts from executive to reasoning and cognizance processes; personality also changes, consolidating in adolescence. There are three major trends in the existing literature: (a) all processes are highly predictive of school achievement if measured alone, each accounting for ~20% of its variance; (b) when measured together, cognitive processes (executive functions and representational awareness in preschool and fluid intelligence after late primary school) dominate as predictors (over ~50%), drastically absorbing self-concepts and personality dispositions, which drop to ~3%-5%; and (c) predictive power changes according to the processes forming g at successive levels: attention control and representational awareness in preschool (~85%); fluid intelligence, language, and working memory in primary school (~53%); fluid intelligence, language, self-evaluation, and school-specific self-concepts in secondary school (~70%). Stability and plasticity of personality emerge as predictors in secondary school. A theory of educational priorities is proposed, arguing that (a) executive and awareness processes; (b) information management; and (c) reasoning, self-evaluation, and flexibility in knowledge building must dominate in preschool, primary, and secondary school, respectively.
Subject(s)
Personality, Schools, Adolescent, Preschool Child, Humans, Executive Function, Intelligence, Cognition
ABSTRACT
Attention deficit hyperactivity disorder (ADHD) is a neuropsychiatric disorder interfering with the normal development of the child. The disorder can be screened at school with the Conners Teacher Rating Scale Revised Short (CTRS-R:S). This scale goes beyond the disorder itself and covers a wider construct, that of abnormal child behavior, which can be understood as a complex system of mutually influencing entities. We analyzed a data set of 525 children in French-speaking primary schools in Belgium, estimated a network structure, and determined the local dependence of items through Unique Variable Analysis. A reduced network was computed including 15 non-locally dependent items. The reduced network was structurally sound, and its structural consistency was not affected by the removal of redundant items. Reducing the number of variables in network studies is important both to improve the investigation of network structures and to better interpret the results of inference measures.
Subject(s)
Attention Deficit Disorder with Hyperactivity, Attention Deficit Disorder with Hyperactivity/diagnosis, Attention Deficit Disorder with Hyperactivity/psychology, Belgium, Child, Teachers, Humans, Mass Screening, Schools, Surveys and Questionnaires
ABSTRACT
The past few years were marked by increased online offensive strategies perpetrated by state and non-state actors to promote their political agenda, sow discord, and question the legitimacy of democratic institutions in the US and Western Europe. In 2016, the US Congress identified a list of Russian state-sponsored Twitter accounts that were used to try to divide voters on a wide range of issues. Previous research used latent Dirichlet allocation (LDA) to estimate latent topics in data extracted from these accounts. However, LDA has characteristics that may limit the effectiveness of its use on data from social media: the number of latent topics must be specified by the user, interpretability of the topics can be difficult to achieve, and it does not model short-term temporal dynamics. In the current paper, we propose a new method, termed Dynamic Exploratory Graph Analysis (DynEGA), to estimate latent topics in texts from social media. In a Monte Carlo simulation, we compared the ability of DynEGA and LDA to estimate the number of simulated latent topics. The results show that DynEGA is substantially more accurate than several different LDA algorithms when estimating the number of simulated topics. In an applied example, we performed DynEGA on a large dataset of Twitter posts from state-sponsored right- and left-wing trolls during the 2016 US presidential election. DynEGA revealed topics that were pertinent to several consequential events in the election cycle, demonstrating the coordinated effort of trolls capitalizing on current events in the USA. This example demonstrates the potential power of our approach for revealing temporally relevant information from qualitative text data.
Subject(s)
Social Media, Algorithms, Animals, Female, Humans, Psychometrics, Swine
ABSTRACT
The National High School Examination (ENEM) has a restricted measurement model characterized by the assumption that the marker items in each domain are exclusively linked to their target domain. Previous studies suggest, through indirect evidence, that this model may not be valid. However, this postulate has not yet been directly assessed. In this study, this assumption was investigated through factor analysis of the items of the 2011 ENEM edition. Two models were tested. The first, called Thurstone's simple structure, represents the measurement model of the ENEM. The second, with cross-loadings, refutes this model. The cross-loadings model was the only one that presented a good fit to the data according to all the indices employed. The evidence found is unfavorable to the simple-factor-loadings assumption of the ENEM measurement model, indicating problems with the validity and quality of the scores produced.
Subject(s)
Humans, Female, Adolescent, Adult, Young Adult, Students/psychology, Primary and Secondary Education, Psychometrics, Reproducibility of Results, Factor Analysis, Latent Class Analysis
ABSTRACT
In the 1930s, a group of scientists argued that the empirical concatenation of observable elements was not possible in the human and social sciences and that it was thus not feasible to obtain objective measurements similar to those found in physics. To address this issue, mathematical theories that do not require concatenation were proposed in the 1960s, including the Additive Conjoint Measurement Theory (ACMT). In the same decade, George Rasch developed the simple logistic model for dichotomous data as a probabilistic operationalization of the ACMT. This study investigates the possibility of developing a fundamental measure for the National Exam of Upper Secondary Education (ENEM) by applying Rasch's model to students' performance on the 2011 ENEM exam. The results indicate an adequate model fit, demonstrating the viability of a fundamental measure using ENEM data. Implications are discussed.
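Rasch's simple logistic model for dichotomous data, mentioned above, gives the probability of a correct response as a logistic function of the difference between person ability theta and item difficulty b:

```python
import math

def rasch_p(theta, b):
    """Rasch model: P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty the probability is exactly 0.5, and it rises
# monotonically with ability; the ordering of persons is the same for every
# item, which is the additive-conjoint property the abstract alludes to.
print(rasch_p(0.0, 0.0))            # 0.5
print(round(rasch_p(1.0, 0.0), 3))  # 0.731
```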
Subject(s)
Humans, Knowledge, Primary and Secondary Education, Growth and Development, Educational Measurement, Social Sciences, Students, Weights and Measures, Brazil, Logistic Models, Humanities
ABSTRACT
This study aimed to identify the properties of the job satisfaction scale most used in Brazilian samples in terms of its structure, measurement invariance, and convergent validity. The study involved 733 workers (46% women) from the industrial and tertiary sectors of two Brazilian states. In addition to the original model of five correlated factors, three alternative structural models were compared (five uncorrelated factors, hierarchical, and bifactor). The results show that the bifactor structure, with five first-order latent variables plus a general variable also at the first order, is a robust structural model for evaluating the satisfaction of Brazilian workers and was invariant across all groups tested, making it applicable to samples that differ in job tenure, educational level, and gender. The measure also showed moderate to high positive correlations with two other organizational behavior variables, confirming convergent validity.
Subject(s)
Humans, Male, Female, Adult, Middle Aged, Aged, Work Engagement, Job Satisfaction, Socioeconomic Factors, Factor Analysis
ABSTRACT
This paper reports the results of a 3-year follow-up study measuring the long-term efficacy of a cognitive training program for healthy older adults and investigates the effects of booster sessions using an entropy-based metric. DESIGN: semi-randomized quasi-experimental controlled design. PARTICIPANTS: 50 older adults (M = 73.3, SD = 7.77) assigned to experimental (N = 25; mean age = 73.9; SD = 8.62) and control groups (N = 25; mean age = 72.9; SD = 6.97). INSTRUMENTS: six subtests of the WAIS and two episodic memory tasks. PROCEDURES: participants were assessed on four occasions: at the end of the original intervention, pre-booster sessions (three years after the original intervention), immediately after the booster sessions, and three months after the booster sessions. RESULTS: repeated measures ANOVA showed that two of the cognitive gains reported in the original intervention were also identified at follow-up: Coding (F(1, 44) = 11.79, MSE = 0.77, p = .001, eta squared = 0.084) and Picture Completion (F(1, 47) = 10.01, MSE = 0.73, p = .003, eta squared = 0.060). After the booster sessions, all variables presented a significant interaction between group and time favorable to the experimental group (moderate to high effect sizes). To compare the level of cohesion of the cognitive variables between the groups, an entropy-based metric was used. The experimental group presented a lower level of cohesion on three of the four measurement occasions, suggesting a differential impact of the intervention with immediate and short-term effects but without long-term effects.
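The abstract does not specify the formula behind its entropy-based metric, so the following is a hedged sketch of one common construction: the Shannon entropy of the normalized eigenvalues of a correlation matrix, where lower entropy indicates higher cohesion because variance is concentrated in fewer components.

```python
import numpy as np

def eigen_entropy(corr):
    """Shannon entropy of the normalized eigenvalues of a correlation matrix."""
    ev = np.linalg.eigvalsh(corr)
    p = ev / ev.sum()
    p = p[p > 1e-12]            # drop numerically zero eigenvalues
    return float(-(p * np.log(p)).sum())

high = np.full((4, 4), 0.8); np.fill_diagonal(high, 1.0)  # cohesive battery
low = np.full((4, 4), 0.1); np.fill_diagonal(low, 1.0)    # loosely related battery
print(eigen_entropy(high) < eigen_entropy(low))
```

Under this reading, the experimental group's lower cohesion would correspond to higher eigenvalue entropy of its cognitive-variable correlation matrix; the study's actual metric may differ.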
Subject(s)
Cognition Disorders, Cognitive Behavioral Therapy, Episodic Memory, Aged, Cognition, Follow-Up Studies, Health Status, Humans
ABSTRACT
BACKGROUND: The neuropeptide oxytocin regulates mammalian social behavior. Disruptions in oxytocin signaling are a feature of many psychopathologies. One commonly studied biomarker for oxytocin involvement in psychiatric diseases is DNA methylation at the oxytocin receptor gene (OXTR). Such studies focus on DNA methylation in two regions of OXTR, exon 3 and a region termed MT2 which overlaps exon 1 and intron 1. However, the relative contribution of exon 3 and MT2 in regulating OXTR gene expression in the brain is currently unknown. RESULTS: Here, we use the prairie vole as a translational animal model to investigate genetic, epigenetic, and environmental factors affecting Oxtr gene expression in a region of the brain that has been shown to drive Oxtr related behavior in the vole, the nucleus accumbens. We show that the genetic structure of Oxtr in prairie voles resembles human OXTR. We then studied the effects of early life experience on DNA methylation in two regions of a CpG island surrounding the Oxtr promoter: MT2 and exon 3. We show that early nurture in the form of parental care results in DNA hypomethylation of Oxtr in both MT2 and exon 3, but only DNA methylation in MT2 is associated with Oxtr gene expression. Network analyses indicate that CpG sites in the 3' portion of MT2 are most highly associated with Oxtr gene expression. We also identify two novel SNPs in exon 3 of Oxtr in prairie voles and a novel alternative transcript originating from the third intron of the gene. Expression of the novel alternative transcript is associated with genotype at SNP KLW2. CONCLUSIONS: These results identify putative regulatory features of Oxtr in prairie voles which inform future studies examining OXTR in human social behaviors and disorders. These studies indicate that in prairie voles, DNA methylation in MT2, particularly in the 3' portion, is more predictive of Oxtr gene expression than DNA methylation in exon 3. 
Similarly, in human temporal cortex, we find that DNA methylation in the 3' portion of MT2 is associated with OXTR expression. Together, these results suggest that among the CpG sites studied, DNA methylation of MT2 may be the most reliable indicator of OXTR gene expression. We also identify novel features of prairie vole Oxtr, including SNPs and an alternative transcript, which further develop the prairie vole as a translational model for studies of OXTR.
Subject(s)
Arvicolinae/genetics, Mental Disorders/genetics, Metallothionein/genetics, Receptors, Oxytocin/genetics, Adverse Childhood Experiences/psychology, Animals, Brain/metabolism, CpG Islands/genetics, DNA Methylation, Environment, Epigenesis, Genetic, Exons/genetics, Female, Gene Expression, Humans, Introns/genetics, Male, Mental Disorders/metabolism, Models, Animal, Nucleus Accumbens/metabolism, Oxytocin/genetics, Polymorphism, Single Nucleotide/genetics, Social Behavior
ABSTRACT
Recent research has demonstrated that node strength, a network measure defined as the sum of a node's connections, is roughly equivalent to confirmatory factor analysis (CFA) loadings. A key finding of this research is that node strength represents a combination of different latent causes. In the present research, we sought to circumvent this issue by formulating a network equivalent of factor loadings, which we call network loadings. In two simulations, we evaluated whether these network loadings could effectively (1) separate the effects of multiple latent causes and (2) estimate the simulated factor loading matrix of factor models. Our findings suggest that network loadings can do both effectively. In addition, we leveraged the second simulation to derive effect size guidelines for network loadings. In a third simulation, we evaluated the similarities and differences between factor and network loadings when the data were generated from random, factor, and network models. We found sufficient differences between the loadings, which allowed us to develop an algorithm, the Loadings Comparison Test (LCT), to predict the data-generating model. The LCT had high sensitivity and specificity when predicting the data-generating model. In sum, our results suggest that network loadings can provide similar information to factor loadings when the data are generated from a factor model and can therefore be used in similar ways (e.g., item selection, measurement invariance, factor scores).
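The core idea of a network loading can be sketched numerically: split each node's strength (the sum of its absolute edge weights) across the communities of the network, producing a loading-matrix analogue. The sketch below is a minimal, unstandardized illustration of that idea in Python; it omits the standardization used in the published method, and the function and variable names are illustrative, not the EGAnet API.

```python
import numpy as np

def network_loadings(weights, membership):
    """Raw within-community node strength for each node.

    weights: symmetric (p, p) edge-weight matrix.
    membership: length-p community label for each node.
    Returns a (p, n_communities) matrix of absolute-weight sums.
    """
    membership = np.asarray(membership)
    communities = np.unique(membership)
    loadings = np.zeros((weights.shape[0], communities.size))
    for j, c in enumerate(communities):
        # strength of each node restricted to edges into community c
        loadings[:, j] = np.abs(weights[:, membership == c]).sum(axis=1)
    return loadings

# Toy 4-node network with two 2-node communities and weak cross-links.
w = np.array([[0.0, 0.5, 0.1, 0.0],
              [0.5, 0.0, 0.0, 0.1],
              [0.1, 0.0, 0.0, 0.6],
              [0.0, 0.1, 0.6, 0.0]])
print(network_loadings(w, [0, 0, 1, 1]))
```

Each node's dominant column then plays the role a dominant factor loading plays in a CFA loading matrix, with the cross-community column capturing cross-loadings.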
Subject(s)
Algorithms, Computer Simulation, Factor Analysis, Humans
ABSTRACT
The accurate identification of the content and number of latent factors underlying multivariate data is an important endeavor in many areas of psychology and related fields. Recently, a new dimensionality assessment technique based on network psychometrics was proposed (exploratory graph analysis, EGA), but a measure to check the fit to the data of the dimensionality structure estimated via EGA is still lacking. Although traditional factor-analytic fit measures are widespread, recent research has identified limitations to their effectiveness with categorical variables. Here, we propose three new fit measures (termed entropy fit indices) that combine information theory, quantum information theory, and structural analysis: the Entropy Fit Index (EFI), EFI with von Neumann entropy (EFI.vn), and Total EFI.vn (TEFI.vn). The first can be estimated in complete datasets using Shannon entropy, while EFI.vn and TEFI.vn can be estimated from correlation matrices using quantum information metrics. We show, through several simulations, that TEFI.vn, EFI.vn, and EFI are as accurate as or more accurate than traditional fit measures when identifying the number of simulated latent factors. However, in conditions where more factors are extracted than were simulated, only TEFI.vn maintains very high accuracy. In addition, we provide an applied example that demonstrates how the new fit measures can be used with a real-world dataset, using exploratory graph analysis.
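The quantum-information ingredient behind EFI.vn and TEFI.vn can be illustrated directly: the von Neumann entropy of a correlation matrix is the Shannon entropy of the eigenvalues of the matrix after it is scaled to unit trace, by analogy with a density matrix. The Python sketch below shows only that entropy, not the published fit-index formulas, which combine it with the estimated dimensionality structure.

```python
import numpy as np

def von_neumann_entropy(corr):
    """Von Neumann entropy of a correlation matrix.

    Scales corr to unit trace (a density-matrix analogue), then
    returns -sum(lambda * log(lambda)) over its eigenvalues.
    """
    rho = corr / np.trace(corr)
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # drop numerically zero eigenvalues
    return float(-(lam * np.log(lam)).sum())

# Uncorrelated items give maximal entropy: for p items, log(p).
print(round(von_neumann_entropy(np.eye(4)), 4))  # log(4) ≈ 1.3863
```

Strongly correlated item blocks concentrate the eigenvalue mass and lower this entropy, which is what makes it informative about dimensional structure.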
Subject(s)
Entropy, Psychometrics
ABSTRACT
About one-third of autistic people have limited ability to use speech. Some have learned to communicate by pointing to letters of the alphabet. But this method is controversial because it requires the assistance of another person: someone who holds a letterboard in front of users and so could theoretically cue them to point to particular letters. Indeed, some scientists have dismissed the possibility that any nonspeaking autistic person who communicates with assistance could be conveying their own thoughts. In the study reported here, we used head-mounted eye-tracking to investigate communicative agency in a sample of nine nonspeaking autistic letterboard users. We measured the speed and accuracy with which they looked at and pointed to letters as they responded to novel questions. Participants pointed to about one letter per second, rarely made spelling errors, and visually fixated most letters about half a second before pointing to them. Additionally, their response times reflected planning and production processes characteristic of fluent spelling in non-autistic typists. These findings render a cueing account of participants' performance unlikely: the speed, accuracy, timing, and visual fixation patterns suggest that participants pointed to letters they selected themselves, not letters they were directed to by the assistant. The blanket dismissal of assisted autistic communication is therefore unwarranted.
Subject(s)
Autistic Disorder/physiopathology, Communication, Eye Movements/physiology, Fixation, Ocular/physiology, Reaction Time/physiology, Adolescent, Adult, Female, Humans, Language, Male, Speech/physiology, Surveys and Questionnaires, Young Adult
ABSTRACT
Exploratory graph analysis (EGA) is a new technique that was recently proposed within the framework of network psychometrics to estimate the number of factors underlying multivariate data. Unlike other methods, EGA produces a visual guide (a network plot) that not only indicates the number of dimensions to retain, but also which items cluster together and their level of association. Although previous studies have found EGA to be superior to traditional methods, they are limited in the conditions considered. These issues are addressed through an extensive simulation study that incorporates a wide range of plausible structures that may be found in practice, including continuous and dichotomous data, and unidimensional and multidimensional structures. Additionally, two new EGA techniques are presented: one that extends EGA to also deal with unidimensional structures, and another based on the triangulated maximally filtered graph approach (EGAtmfg). Both EGA techniques are compared with five widely used factor-analytic techniques. Overall, EGA and EGAtmfg are found to perform as well as the most accurate traditional method, parallel analysis, and to produce the best large-sample properties of all the methods evaluated. To facilitate the use and application of EGA, we present a straightforward R tutorial on how to apply and interpret EGA, using scores from a well-known psychological instrument: the Marlowe-Crowne Social Desirability Scale. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
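The EGA pipeline described above (estimate a partial-correlation network, sparsify it, then count clusters of items as dimensions) can be sketched end to end. The minimal numpy-only Python version below is an illustration only: a fixed threshold stands in for the GLASSO penalty and connected components stand in for the Walktrap/Louvain community step, and all names are illustrative, not the EGAnet R API.

```python
import numpy as np

def ega_sketch(data, threshold=0.1):
    """Estimate the number of dimensions via a sparsified partial-correlation network."""
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)      # partial correlations
    np.fill_diagonal(pcor, 0.0)
    adj = np.abs(pcor) > threshold          # sparsified network
    # cluster items via connected components (depth-first search)
    n = adj.shape[0]
    labels = -np.ones(n, dtype=int)
    n_comp = 0
    for start in range(n):
        if labels[start] >= 0:
            continue
        stack = [start]
        while stack:
            node = stack.pop()
            if labels[node] >= 0:
                continue
            labels[node] = n_comp
            stack.extend(np.flatnonzero(adj[node]))
        n_comp += 1
    return n_comp, labels

# Two-factor toy data: items 0-2 load on one factor, items 3-5 on the other.
rng = np.random.default_rng(0)
factors = rng.normal(size=(2000, 2))
loadings = np.zeros((6, 2))
loadings[:3, 0], loadings[3:, 1] = 0.8, 0.8
x = factors @ loadings.T + 0.5 * rng.normal(size=(2000, 6))
n_dims, labels = ega_sketch(x)
print(n_dims)   # expect 2 clusters of items
```

Within-factor partial correlations stay well above the threshold while cross-factor ones hover near zero, so the network splits into one component per simulated factor; the real method replaces both heuristics with regularized estimation and modularity-based community detection.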