Results 1 - 3 of 3
1.
Ann Neurol; 75(4): 608-12, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24599576

ABSTRACT

We followed a patient with manganese transporter deficiency due to homozygous SLC30A10 mutations from age 14 years until his death at age 38 years and present the first postmortem findings of this disorder. The basal ganglia showed neuronal loss, rhodanine-positive deposits, astrocytosis, myelin loss, and spongiosis. SLC30A10 protein was reduced in residual basal ganglia neurons. Depigmentation of the substantia nigra and other brainstem nuclei was present. Manganese content of basal ganglia and liver was increased 16-fold and 9-fold, respectively. Our study provides a pathological foundation for further investigation of central nervous system toxicity secondary to deregulation of manganese metabolism.


Subject(s)
Basal Ganglia/pathology, Cation Transport Proteins/deficiency, Cation Transport Proteins/genetics, Manganese/metabolism, Metabolic Diseases/pathology, Adult, Cation Transport Proteins/metabolism, Humans, Male, Photoelectron Spectroscopy, Postmortem Changes, Zinc Transporter 8
2.
Psychol Rev; 131(1): 104-137, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37956061

ABSTRACT

Spatial distributional semantic models represent word meanings in a vector space. While they can model many basic semantic tasks, they are limited in important ways, such as their inability to represent multiple kinds of relations in a single semantic space and to directly leverage indirect relations between two lexical representations. To address these limitations, we propose a distributional graphical model that encodes lexical distributional data in a graphical structure and uses spreading activation to determine the plausibility of word sequences. We compare our model to existing spatial and graphical models by systematically varying parameters that contribute to dimensions of theoretical interest in semantic modeling. To be certain about what the models should be able to learn, we trained each model on an artificial corpus describing events in an artificial world simulation containing experimentally controlled verb-noun selectional preferences. The task used for model evaluation requires recovering observed selectional preferences and inferring semantically plausible but never-observed verb-noun pairs. We show that the distributional graphical model performed better than all other models. Further, we argue that this model's relative success comes from its improved ability to access different orders of spatial representations through spreading activation on the graph, enabling it to infer the plausibility of verb-noun pairs unobserved in the training data. The model integrates the classical idea of representing semantic knowledge as a graph traversed by spreading activation with the more recent practice of extracting lexical distributional data from large natural language corpora. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
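The core mechanism of the proposed model, spreading activation over a lexical graph, can be sketched in a few lines. The sketch below is illustrative only: the graph, edge weights, decay parameter, and function name are invented for the example and do not reproduce the authors' implementation. Activation injected at a cue word decays as it propagates along weighted edges, so a word reachable only through indirect paths (such as a never-observed verb-noun pair) still receives a small, nonzero plausibility score.

```python
from collections import defaultdict

def spread_activation(graph, source, decay=0.5, threshold=0.01, max_steps=3):
    """Propagate activation from `source` through a weighted lexical graph.

    graph maps word -> {neighbor: edge_weight}. Returns the total
    activation that reaches each node within `max_steps` hops.
    """
    activation = defaultdict(float)
    frontier = {source: 1.0}
    for _ in range(max_steps):
        next_frontier = defaultdict(float)
        for node, act in frontier.items():
            for neighbor, weight in graph.get(node, {}).items():
                spread = act * weight * decay
                if spread > threshold:  # prune negligible activation
                    next_frontier[neighbor] += spread
        for node, act in next_frontier.items():
            activation[node] += act
        frontier = next_frontier
    return activation

# Toy co-occurrence graph with made-up weights.
graph = {
    "eat":   {"apple": 0.6, "bread": 0.4},
    "apple": {"eat": 0.5, "fruit": 0.5},
    "fruit": {"apple": 0.7, "banana": 0.3},
}

# "eat banana" never co-occurs directly, but activation reaches "banana"
# via apple -> fruit, yielding a small plausibility score for the pair.
print(spread_activation(graph, "eat")["banana"])
```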


Subject(s)
Language, Semantics, Humans, Learning, Computer Simulation
3.
Front Psychol; 9: 133, 2018.
Article in English | MEDLINE | ID: mdl-29520243

ABSTRACT

Previous research has suggested that distributional learning mechanisms may contribute to the acquisition of semantic knowledge. However, distributional learning mechanisms, statistical learning, and contemporary "deep learning" approaches have been criticized as incapable of learning the kind of abstract and structured knowledge that many think is required for the acquisition of semantic knowledge. In this paper, we show that recurrent neural networks, trained on noisy naturalistic speech to children, do in fact learn what appears to be abstract and structured knowledge. We trained two types of recurrent neural networks (the Simple Recurrent Network, or SRN, and Long Short-Term Memory, or LSTM) to predict word sequences in a 5-million-word corpus of speech directed to children aged 0-3 years, and assessed what semantic knowledge they acquired. We found that the learned internal representations encode various abstract grammatical and semantic features that are useful for predicting word sequences. Assessing the organization of semantic knowledge in terms of similarity structure, we found evidence of emergent categorical and hierarchical structure in both models. The LSTM and the SRN learn very similar kinds of representations, but the LSTM achieved higher performance on a quantitative evaluation. We also trained a non-recurrent neural network, Skip-gram, on the same input to compare our results to the state of the art in machine learning. Skip-gram achieves performance similar to the LSTM, but represents words more in terms of thematic than taxonomic relations, and we provide reasons why this might be the case. Our findings show that a learning system that derives abstract, distributed representations for the purpose of predicting sequential dependencies in naturalistic language may provide insight into the emergence of many properties of the developing semantic system.
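A minimal sketch of the kind of next-word-prediction model described above, written in PyTorch, is given below. It is not the study's implementation: the class name, hyperparameters, and the random stand-in word ids are invented for illustration, and the actual SRN, LSTM, and Skip-gram configurations and the child-directed corpus are not reproduced.

```python
import torch
import torch.nn as nn

class NextWordLSTM(nn.Module):
    """Toy LSTM language model: embed each word, then predict the next word."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer word ids
        hidden, _ = self.lstm(self.embed(tokens))
        return self.out(hidden)  # next-word logits at every position

vocab_size = 1000
model = NextWordLSTM(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a stand-in "utterance" of random word ids.
utterance = torch.randint(0, vocab_size, (1, 8))
inputs, targets = utterance[:, :-1], utterance[:, 1:]  # predict next word
logits = model(inputs)
loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()
optimizer.step()
```

After training on real child-directed speech, the rows of `model.embed.weight` would play the role of the internal word representations whose similarity structure the paper analyzes for categorical and hierarchical organization.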
