Results 1 - 4 of 4
1.
Cortex ; 176: 144-160, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38795650

ABSTRACT

OBJECTIVE: Huntington's disease (HD) is an inherited neurodegenerative disease caused by a mutation of the Htt gene that impacts all aspects of daily living and functioning. Among cognitive disabilities, spatial capacities are impaired, but their monitoring remains scarce because it is limited by lengthy expert assessments. Language offers an alternative medium for evaluating patients' performance in HD, yet its capacity to assess spatial abilities in HD is unknown. Here, we aimed to provide proof of concept that the spatial deficits of HD can be assessed through speech. METHODS: We developed the Spatial Description Model to graphically represent the spatial relations described during the Cookie Theft Picture (CTP) task. We increased the sensitivity of our model by using only sentences with spatial terms, unlike previous studies in Alzheimer's disease. 78 carriers of mutant Htt, comprising 56 manifest and 22 premanifest individuals, as well as 25 healthy controls, were included from the BIO-HD (NCT01412125) and Repair-HD (NCT03119246) cohorts. The convergence and divergence of the model were validated using the SelfCog battery. RESULTS: Our Spatial Description Model was the only one of the four assessed approaches to reveal that individuals with manifest HD expressed fewer spatial relations and engaged in less spatial exploration than healthy controls. Their graphs correlated with both visuospatial and language SelfCog performances, but not with motor, executive, or memory functions. CONCLUSIONS: We provide proof of concept with our Spatial Description Model that language can capture the spatial disturbances of patients with HD. By adding spatial capacities to the panel of functions testable through language, this approach paves the way for eventual remote clinical application.
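The core idea of a spatial-relation graph over a picture description can be sketched in a few lines: scan each sentence for spatial terms and link the entities on either side. The term list, stopword filtering, and triple extraction below are illustrative assumptions, not the study's actual implementation.

```python
# Hypothetical sketch of spatial-relation extraction from Cookie Theft
# Picture descriptions: each sentence containing a spatial term yields a
# (figure, relation, ground) edge of a graph over scene entities.
SPATIAL_TERMS = {"on", "under", "behind", "above", "beside", "in", "near"}
STOP_WORDS = {"the", "a", "an", "is", "are"}

def spatial_relations(sentences):
    """Extract (figure, relation, ground) triples around spatial terms."""
    edges = []
    for sent in sentences:
        # Drop articles/copulas so the words flanking the spatial term
        # are the related entities themselves.
        words = [w for w in sent.lower().rstrip(".").split()
                 if w not in STOP_WORDS]
        for i, w in enumerate(words):
            if w in SPATIAL_TERMS and 0 < i < len(words) - 1:
                edges.append((words[i - 1], w, words[i + 1]))
    return edges

edges = spatial_relations([
    "The jar is on the shelf.",
    "The stool is under the boy.",
    "The mother dries dishes.",   # no spatial term, contributes no edge
])
```

Counting the edges and the distinct entities visited would then give simple per-speaker measures of spatial relations expressed and spatial exploration, in the spirit of the model described above.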


Subject(s)
Huntington Disease, Speech, Humans, Huntington Disease/genetics, Huntington Disease/physiopathology, Huntington Disease/psychology, Male, Female, Middle Aged, Adult, Speech/physiology, Neuropsychological Tests, Space Perception/physiology, Aged
2.
Cortex ; 155: 150-161, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35986957

ABSTRACT

Patients with Huntington's disease suffer from disturbances in the perception of emotions: they do not correctly read the bodily, vocal, and facial expressions of others. Regarding the expression of emotions, it has been shown that they are impaired in expressing emotions through the face, but until now little research had been conducted on their ability to express emotions through spoken language. To better understand emotion production in both voice and language in Huntington's disease (HD), we tested 115 individuals in a single-centre prospective observational follow-up study: 68 patients (HD), 22 carriers of the mutant HD gene without any motor symptoms (pre-manifest HD), and 25 controls. Participants were recorded in interviews in which they were asked to recall sad, angry, happy, and neutral stories. Emotion expression through voice and language was investigated by comparing the identifiability of the emotions expressed by controls, pre-manifest carriers, and HD patients in these interviews. To assess vocal and linguistic expression of emotions separately in a blind design, we used machine learning models instead of a human jury performing a forced-choice recognition test. Results showed that patients with HD had difficulty expressing emotions through both voice and language compared with pre-manifest participants and controls, who behaved similarly and above chance. In addition, we found no differences in the expression of emotions between pre-manifest carriers and healthy controls. We further validated our newly proposed methodology with a human jury on the speech produced by the controls. These results are consistent with the hypothesis that emotional deficits in HD are caused by impaired sensorimotor representations of emotions, in line with embodied cognition theories. This study also shows how machine learning models can be leveraged to assess emotion expression in a blind and reproducible way.
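A machine-learning "jury" of the kind described above can be approximated by any classifier trained per channel; a minimal sketch, assuming toy 2-D vocal features (the real study used far richer acoustic and linguistic descriptors, and its model choice is not reproduced here), is a nearest-centroid classifier whose accuracy against chance serves as the identifiability score:

```python
# Toy stand-in for a machine-learning jury: classify an utterance's
# emotion from feature vectors via the nearest class centroid.
def nearest_centroid(train, test_point):
    """train: {label: [feature vectors]}; return label of closest centroid."""
    def dist2(c, p):
        return sum((a - b) ** 2 for a, b in zip(c, p))
    # Mean of each feature dimension per emotion label.
    centroids = {
        lab: [sum(dim) / len(vs) for dim in zip(*vs)]
        for lab, vs in train.items()
    }
    return min(centroids, key=lambda lab: dist2(centroids[lab], test_point))

# Fabricated 2-D features, e.g. (mean pitch, mean energy), per emotion.
train = {
    "happy": [(2.0, 1.0), (2.2, 1.1)],
    "sad":   [(0.5, 0.2), (0.4, 0.3)],
}
pred = nearest_centroid(train, (2.1, 0.9))
```

Running such a classifier separately on vocal features and on transcribed text is what allows the two expression channels to be scored blindly and independently.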


Subject(s)
Huntington Disease, Emotions, Facial Expression, Follow-Up Studies, Humans, Huntington Disease/psychology, Language
3.
J Neurol ; 269(9): 5008-5021, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35567614

ABSTRACT

OBJECTIVES: Using brief samples of speech recordings, we aimed to predict, through machine learning, clinical performance in Huntington's disease (HD), an inherited neurodegenerative disease (NDD). METHODS: We collected and analyzed 126 audio recordings of both forward and backward counting from 103 Huntington's disease gene carriers [87 manifest and 16 premanifest; mean age 50.6 (SD 11.2) years, range 27-88] from three multicenter prospective studies in France and Belgium: MIG-HD (ClinicalTrials.gov NCT00190450), BIO-HD (ClinicalTrials.gov NCT01412125), and Repair-HD (ClinicalTrials.gov NCT03119246). We pre-registered all of our methods before running any analyses in order to avoid inflated results. We automatically extracted 60 speech features from blindly annotated samples and used machine learning models to combine multiple speech features to predict the clinical markers at the individual level. We trained the models on 86% of the samples; the remaining 14% constituted the independent test set. We combined speech features with demographic variables (age, sex, CAG repeats, and burden score) to predict the cognitive, motor, and functional scores of the Unified Huntington's Disease Rating Scale, and we report correlations between speech variables and striatal volumes. RESULTS: Speech features combined with demographics allowed prediction of individual cognitive, motor, and functional scores with a relative error of 12.7-20.0%, better than predictions using demographic and genetic information alone. Both the mean and standard deviation of pause durations during backward recitation and the clinical scores correlated with striatal atrophy (Spearman 0.6 and 0.5-0.6, respectively). INTERPRETATION: Brief, examiner-free speech recording and analysis may in the future become an efficient method for remote evaluation of individual condition in HD, and likely in other NDDs.
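The evaluation protocol above (hold out an independent test set of about 14% of the samples, then score predicted clinical scores by relative error) can be sketched as follows. Defining relative error as mean absolute error normalised by the observed score range is an assumption for illustration, as is the toy data; the paper's exact metric may differ.

```python
# Sketch of the held-out evaluation described in the abstract.
def train_test_split(samples, test_frac=0.14):
    """Reserve roughly test_frac of the samples as an independent test set."""
    n_test = max(1, round(len(samples) * test_frac))
    return samples[:-n_test], samples[-n_test:]

def relative_error(y_true, y_pred):
    """Mean absolute error normalised by the observed score range."""
    span = max(y_true) - min(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    return mae / span

# 126 recordings, as in the study; contents here are just indices.
train, test = train_test_split(list(range(126)))

# Fabricated clinical scores and predictions, purely illustrative.
err = relative_error([10, 20, 30, 40], [12, 19, 33, 38])
```

With 126 samples this split reserves 18 recordings for testing, matching the 86%/14% proportions stated in the abstract.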


Subject(s)
Huntington Disease, Neurodegenerative Diseases, Corpus Striatum, Humans, Huntington Disease/diagnosis, Huntington Disease/genetics, Middle Aged, Prospective Studies, Speech
4.
J Acoust Soc Am ; 150(1): 353, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34340514

ABSTRACT

Deep learning models have become potential candidates for auditory neuroscience research, thanks to their recent successes in a variety of auditory tasks, yet these models often lack the interpretability needed to fully understand the exact computations they perform. Here, we propose a parametrized neural network layer that computes specific spectro-temporal modulations based on Gabor filters [learnable spectro-temporal filters (STRFs)] and is fully interpretable. We evaluated this layer on speech activity detection, speaker verification, urban sound classification, and zebra finch call type classification. We found that models based on learnable STRFs are on par with the state of the art on all tasks and obtain the best performance for speech activity detection. As the layer remains a Gabor filter, it is fully interpretable, so we used quantitative measures to describe the distribution of the learned spectro-temporal modulations. The filters adapted to each task and focused mostly on low temporal and spectral modulations. The analyses show that filters learned on human speech have spectro-temporal parameters similar to those measured directly in the human auditory cortex. Finally, we observed that the tasks organized themselves in a meaningful way: the human vocalization tasks lay close to each other, while bird vocalizations lay far from both human vocalizations and urban sound tasks.
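A single spectro-temporal Gabor filter of the kind such a layer parametrizes can be written directly from interpretable parameters: a temporal modulation rate, a spectral modulation scale, and Gaussian envelope widths. In the paper these parameters are learned by gradient descent; the fixed values, grid spacing, and function shape below are illustrative assumptions.

```python
import math

def gabor_strf(n_t, n_f, rate_hz, scale_cpo, sigma_t, sigma_f,
               dt=0.01, df=0.1):
    """Return an n_t x n_f Gabor filter: Gaussian envelope x cosine carrier.

    rate_hz   - temporal modulation rate (Hz)
    scale_cpo - spectral modulation scale (cycles per octave)
    sigma_t, sigma_f - envelope widths along time (s) and frequency (octaves)
    """
    t0 = (n_t - 1) / 2 * dt   # centre of the time axis
    f0 = (n_f - 1) / 2 * df   # centre of the frequency axis
    strf = []
    for i in range(n_t):
        t = i * dt - t0
        row = []
        for j in range(n_f):
            f = j * df - f0
            env = math.exp(-(t / sigma_t) ** 2 / 2 - (f / sigma_f) ** 2 / 2)
            carrier = math.cos(2 * math.pi * (rate_hz * t + scale_cpo * f))
            row.append(env * carrier)
        strf.append(row)
    return strf

strf = gabor_strf(9, 9, rate_hz=4.0, scale_cpo=0.5, sigma_t=0.05, sigma_f=0.4)
```

Because every weight of the filter is generated from these few named parameters, inspecting a trained layer reduces to reading off rate and scale values, which is what makes the comparison with auditory-cortex measurements possible.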


Subject(s)
Auditory Cortex, Speech Perception, Acoustic Stimulation, Auditory Perception, Neural Networks, Computer