Results 1 - 20 of 29
1.
Gerontologist ; 64(4)2024 Apr 01.
Article in English | MEDLINE | ID: mdl-37935416

ABSTRACT

BACKGROUND AND OBJECTIVES: Social isolation is a risk factor for cognitive decline and dementia. We conducted a randomized controlled clinical trial (RCT) of enhanced social interactions, hypothesizing that conversational interactions can stimulate brain functions among socially isolated older adults without dementia. We report topline results of this multisite RCT (Internet-based conversational engagement clinical trial [I-CONECT]; NCT02871921). RESEARCH DESIGN AND METHODS: The experimental group received cognitively stimulating semistructured conversations with trained interviewers via internet/webcam 4 times per week for 6 months (induction) and twice per week for an additional 6 months (maintenance). Both the experimental and control groups received weekly 10-minute telephone check-ins. Protocol modifications were required due to the coronavirus disease 2019 pandemic. RESULTS: A total of 186 participants were randomized. After the induction period, the experimental group had higher global cognitive test scores (Montreal Cognitive Assessment [primary outcome]; 1.75 points [p = .03]) compared with the control group. After induction, experimental-group participants with normal cognition had higher language-based executive function (semantic fluency test [secondary outcome]; 2.56 points [p = .03]). At the end of the maintenance period, experimental-group participants with mild cognitive impairment had higher encoding function (Craft Story immediate recall test [secondary outcome]; 2.19 points [p = .04]). Measures of emotional well-being improved in both the control and experimental groups. Resting-state functional magnetic resonance imaging showed that the experimental group had increased connectivity within the dorsal attention network relative to the control group (p = .02), but the sample size was limited. DISCUSSION AND IMPLICATIONS: Providing frequent stimulating conversational interactions via the internet could be an effective home-based dementia risk-reduction strategy against social isolation and cognitive decline. CLINICAL TRIALS REGISTRATION NUMBER: NCT02871921.


Subject(s)
Cognitive Dysfunction, Dementia, Humans, Aged, Cognitive Dysfunction/psychology, Cognition, Executive Function
2.
Autism Res ; 15(7): 1288-1300, 2022 07.
Article in English | MEDLINE | ID: mdl-35460329

ABSTRACT

Variability in expressive and receptive language, difficulty with pragmatic language, and prosodic difficulties are all features of autism spectrum disorder (ASD). Quantifying language and voice characteristics is an important step for measuring outcomes for autistic people, yet clinical measurement is cumbersome and costly. Using natural language processing (NLP) methods and a harmonic model of speech, we analyzed language transcripts and audio recordings to automatically classify individuals as ASD or non-ASD. One hundred fifty-eight participants (88 ASD, 70 non-ASD) ages 7 to 17 were evaluated with the Autism Diagnostic Observation Schedule (ADOS-2), Module 3. The ADOS-2 was transcribed following modified SALT guidelines. Seven automated language measures (ALMs) and 10 automated voice measures (AVMs) were generated for each participant from the transcripts and audio of one ADOS-2 task. The measures were analyzed using a support vector machine (SVM; a binary classifier) and receiver operating characteristic (ROC) analysis. The AVM model resulted in an ROC area under the curve (AUC) of 0.7800, the ALM model an AUC of 0.8748, and the combined model a significantly improved AUC of 0.9205. The ALM model better detected ASD participants who were younger and had lower language skills and shorter activity time. ASD participants detected by the AVM model had better language profiles than those detected by the language model. In combination, automated measurement of language and voice characteristics successfully differentiated children with and without autism. This methodology could help design robust outcome measures for future research. LAY SUMMARY: People with autism often struggle with communication differences that traditional clinical measures and language tests cannot fully capture. Using language transcripts and audio recordings from 158 children ages 7 to 17, we showed that automated, objective language and voice measurements successfully predict the child's diagnosis. This methodology could help design improved outcome measures for research.
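
As a rough illustration of the classification setup described above (not the authors' code), the sketch below combines hypothetical language-derived and voice-derived feature matrices and estimates a cross-validated ROC AUC with a linear-kernel SVM; the feature counts match the abstract, but the data and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data: 158 participants, 7 language measures (ALMs), 10 voice measures (AVMs).
n = 158
alm = rng.normal(size=(n, 7))      # automated language measures (synthetic)
avm = rng.normal(size=(n, 10))     # automated voice measures (synthetic)
y = rng.integers(0, 2, size=n)     # 1 = ASD, 0 = non-ASD (synthetic labels)

def auc_for(features):
    """Cross-validated ROC AUC for an SVM trained on the given feature block."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_predict(clf, features, y, cv=cv, method="predict_proba")[:, 1]
    return roc_auc_score(y, scores)

print("AVM only :", auc_for(avm))
print("ALM only :", auc_for(alm))
print("Combined :", auc_for(np.hstack([alm, avm])))
```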


Subject(s)
Autism Spectrum Disorder, Autistic Disorder, Voice, Adolescent, Autism Spectrum Disorder/diagnosis, Child, Humans, Language, Speech
3.
Article in English | MEDLINE | ID: mdl-37193061

ABSTRACT

Transformer-based automatic speech recognition (ASR) systems have shown their success in the presence of large datasets. In medical research, however, we often have to build ASR for non-typical populations, i.e., preschool children with speech disorders, with a small training dataset. To increase training efficiency on small datasets, we optimize the architecture of Wav2Vec 2.0, a Transformer variant, by analyzing the block-level attention patterns of its pre-trained model. We show that block-level patterns can serve as an indicator for narrowing down the optimization direction. To ensure the reproducibility of our experiments, we use Librispeech-100-clean as training data to simulate the limited-data condition. We apply two techniques, a local attention mechanism and cross-block parameter sharing, with counter-intuitive configurations. Our optimized architecture outperforms the vanilla architecture by about 1.8% absolute word error rate (WER) on dev-clean and 1.4% on test-clean.
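
The paper's specific Wav2Vec 2.0 modifications are not reproduced here; as a generic illustration of the cross-block parameter sharing idea, the toy PyTorch encoder below reuses a single Transformer layer across several block positions, so their weights are tied. The layer sizes, block count, and sharing pattern are assumptions, and the local attention mechanism is omitted.

```python
import torch
import torch.nn as nn

class SharedBlockEncoder(nn.Module):
    """Toy Transformer encoder illustrating cross-block parameter sharing:
    the same layer module is applied at several block positions, so its
    weights are tied across those positions."""
    def __init__(self, d_model=256, nhead=4, n_blocks=8, share_groups=2):
        super().__init__()
        self.n_blocks = n_blocks
        self.group_size = n_blocks // share_groups
        # One unique layer per group; blocks within a group share weights.
        self.shared_layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=1024,
                                       batch_first=True)
            for _ in range(share_groups)
        )

    def forward(self, x):
        for b in range(self.n_blocks):
            layer = self.shared_layers[b // self.group_size]
            x = layer(x)
        return x

enc = SharedBlockEncoder()
feats = torch.randn(2, 50, 256)   # (batch, frames, feature dim) placeholder features
print(enc(feats).shape)           # torch.Size([2, 50, 256])
print(sum(p.numel() for p in enc.parameters()), "parameters (fewer than 8 unshared layers)")
```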

4.
Front Psychol ; 12: 665096, 2021.
Article in English | MEDLINE | ID: mdl-34557127

ABSTRACT

The presence of prosodic anomalies in autistic individuals is recognized by experienced clinicians, but their quantitative analysis is a cumbersome task beyond the scope of typical pen-and-pencil assessment. This paper proposes an automatic approach that teases apart various aspects of prosodic abnormalities and translates them into fine-grained, automated, and quantifiable measurements. Using a harmonic model (HM) of the voiced signal, we isolated the harmonic content of speech and computed a set of quantities related to that harmonic content. Employing these measures, along with standard speech measures such as loudness, we successfully trained machine learning models for distinguishing individuals with autism from those with typical development (TD). We evaluated our models empirically on a task of detecting autism in a sample of 118 youth (90 diagnosed with autism and 28 controls; mean age: 10.9 years) and demonstrated that these models perform significantly better than a chance model. Voice and speech analyses could be incorporated as novel outcome measures for treatment research and used for early detection of autism in preverbal infants or toddlers at risk of autism.
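
The harmonic model itself is not specified in enough detail in this abstract to reimplement; as a loose stand-in for the idea of isolating harmonic content, the sketch below uses librosa's harmonic/percussive separation to estimate how much of a recording's energy is harmonic and pairs it with a loudness-like RMS measure. The file path and the feature choices are illustrative assumptions, not the paper's HM features.

```python
import numpy as np
import librosa

def harmonic_features(path, sr=16000):
    """Crude harmonic-content descriptors for one recording.
    Approximates the idea of isolating harmonic content; it is not the
    parametric harmonic model (HM) used in the paper."""
    y, sr = librosa.load(path, sr=sr)
    y_harm, y_perc = librosa.effects.hpss(y)    # harmonic / percussive split
    harm_energy = np.sum(y_harm ** 2)
    total_energy = np.sum(y ** 2) + 1e-12
    rms = librosa.feature.rms(y=y)[0]           # frame-level loudness proxy
    return {
        "harmonic_energy_ratio": harm_energy / total_energy,
        "rms_mean": float(np.mean(rms)),
        "rms_std": float(np.std(rms)),
    }

# Example (path is a placeholder):
# print(harmonic_features("participant_001.wav"))
```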

5.
Front Psychol ; 12: 668401, 2021.
Article in English | MEDLINE | ID: mdl-34366987

ABSTRACT

Speech and language impairments are common pediatric conditions, with as many as 10% of children experiencing one or both at some point during development. Expressive language disorders in particular often go undiagnosed, underscoring the immediate need for assessments of expressive language that can be administered and scored reliably and objectively. In this paper, we present a set of highly accurate computational models for automatically scoring several common expressive language tasks. In our assessment framework, instructions and stimuli are presented to the child on a tablet computer, which records the child's responses in real time, while a clinician controls the pace and presentation of the tasks using a second tablet. The recorded responses for four distinct expressive language tasks (expressive vocabulary, word structure, recalling sentences, and formulated sentences) are then scored using traditional paper-and-pencil scoring and using machine learning methods relying on a deep neural network-based language representation model. All four tasks can be scored automatically from both clean and verbatim speech transcripts with very high accuracy at the item level (83-99%). In addition, these automated scores correlate strongly and significantly (ρ = 0.76-0.99, p < 0.001) with manual item-level, raw, and scaled scores. These results point to the utility and potential of automated, computationally driven methods for both administering and scoring expressive language tasks in pediatric developmental language evaluation.
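
As a hedged sketch of item-level scoring from transcripts (not the authors' trained models), the code below encodes a child's transcribed response with a publicly available sentence-embedding model and assigns a 0/1/2 item score by similarity to the target sentence; the model name, example responses, and thresholds are all assumptions for illustration.

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder "recalling sentences"-style item with a known target sentence.
target = "The dog that chased the cat ran into the yard."
responses = [
    "The dog that chased the cat ran into the yard.",  # verbatim recall
    "The dog chased the cat in the yard.",             # partial recall
    "I don't know.",                                   # no recall
]

model = SentenceTransformer("all-MiniLM-L6-v2")        # assumed, publicly available model
emb_target = model.encode(target, convert_to_tensor=True)
emb_resp = model.encode(responses, convert_to_tensor=True)

sims = util.cos_sim(emb_resp, emb_target).squeeze(1)
for text, sim in zip(responses, sims):
    # Illustrative 2/1/0 item scoring by similarity thresholds (thresholds are made up).
    score = 2 if sim > 0.9 else (1 if sim > 0.6 else 0)
    print(f"score={score}  sim={float(sim):.2f}  {text}")
```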

6.
Article in English | MEDLINE | ID: mdl-37351441

ABSTRACT

Building a high-quality automatic speech recognition (ASR) system with limited training data is a challenging task, particularly for a narrow target population. Open-source ASR systems, trained on ample data from adults, degrade on seniors' speech due to the acoustic mismatch between adults and seniors. With 12 hours of training data, we attempt to develop an ASR system for socially isolated seniors (80+ years old) with possible cognitive impairments. We experimentally confirm that an ASR system built for the adult population performs poorly on our target population and that transfer learning (TL) can boost the system's performance. Building on the fundamental idea of TL, tuning model parameters, we further improve the system by leveraging an attention mechanism over the model's intermediate representations. Our approach achieves a 1.58% absolute improvement over the TL model.
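
As a generic transfer-learning sketch (not the paper's exact system), the code below loads a pre-trained Wav2Vec 2.0 checkpoint from Hugging Face, freezes the convolutional feature extractor and the lower encoder layers, and leaves only the upper layers and CTC head trainable for fine-tuning on the small target set. The checkpoint name and the layer split are assumptions, and the attention over intermediate layers described in the abstract is not reproduced.

```python
import torch
from transformers import Wav2Vec2ForCTC

# Assumed checkpoint: an open-source English ASR model trained on adult speech.
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Freeze the feature extractor and the bottom encoder layers; fine-tune the rest.
for p in model.wav2vec2.feature_extractor.parameters():
    p.requires_grad = False
for layer in model.wav2vec2.encoder.layers[:8]:   # bottom 8 of 12 layers frozen (assumed split)
    for p in layer.parameters():
        p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-5)
print(sum(p.numel() for p in trainable), "trainable parameters")
# ...a training loop over the ~12 hours of seniors' speech would go here...
```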

7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 6111-6114, 2020 07.
Article in English | MEDLINE | ID: mdl-33019365

ABSTRACT

This study describes a fully automated method of expressive language assessment based on vocal responses of children to a sentence repetition task (SRT), a language test that taps into core language skills. Our proposed method automatically transcribes the vocal responses using a test-specific automatic speech recognition system. From the transcriptions, a regression model predicts the gold-standard test scores provided by speech-language pathologists. Our preliminary experimental results on audio recordings of 104 children (43 with typical development and 61 with a neurodevelopmental disorder) verify the feasibility of the proposed automatic method for predicting gold-standard scores on this language test, with an averaged mean absolute error of 6.52 between observed and predicted ratings (on an observed score range from 0 to 90 with a mean value of 49.56). Clinical relevance: We describe the use of fully automatic voice-based scoring in language assessment, including the clinical impact this development may have on the field of speech-language pathology. The automated test also creates a technological foundation for the computerization of a broad array of tests for voice-based language assessment.
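
The sketch below is an illustrative (not the authors') version of the transcript-to-score regression step: each ASR transcript is turned into simple similarity features against the target sentence, and a regression model is evaluated by mean absolute error. The features, toy data, and model choice are assumptions.

```python
import difflib
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_absolute_error

def item_features(target, response):
    """Simple similarity features between the target sentence and an ASR transcript."""
    t, r = target.lower().split(), response.lower().split()
    ratio = difflib.SequenceMatcher(None, t, r).ratio()   # word-level similarity in [0, 1]
    return [ratio, len(r), abs(len(t) - len(r))]

# Placeholder data: (target, ASR transcript, clinician item score) triples.
items = [
    ("the boy is riding a big red bike", "the boy is riding a big red bike", 3),
    ("the boy is riding a big red bike", "boy riding a bike", 1),
    ("the boy is riding a big red bike", "um the boy riding big bike", 2),
    ("the boy is riding a big red bike", "i don't know", 0),
] * 10  # repeated only so cross-validation has enough rows

X = np.array([item_features(t, r) for t, r, _ in items])
y = np.array([s for _, _, s in items], dtype=float)

pred = cross_val_predict(Ridge(alpha=1.0), X, y, cv=5)
print("MAE:", mean_absolute_error(y, pred))
```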


Subject(s)
Speech-Language Pathology, Voice, Child, Humans, Language, Language Development, Language Tests
8.
Curr Alzheimer Res ; 17(7): 658-666, 2020.
Article in English | MEDLINE | ID: mdl-33032509

ABSTRACT

BACKGROUND: Current conventional cognitive assessments are limited in their efficiency and sensitivity, often relying on a single score such as the total of correct items. Typically, multiple features of the response go uncaptured. OBJECTIVES: We aim to explore a new set of automatically derived features from the Digit Span (DS) task that address some of the drawbacks of conventional scoring and are also useful for distinguishing subjects with Mild Cognitive Impairment (MCI) from those with intact cognition. METHODS: Audio recordings of the DS tests administered to 85 subjects (22 MCI and 63 healthy controls, mean age 90.2 years) were transcribed using an Automatic Speech Recognition (ASR) system. Next, five correctness measures were generated from a Levenshtein distance analysis of responses: the number of correct, incorrect, deleted, inserted, and substituted words compared with the test item. These per-item features were aggregated across all test items for both the Forward Digit Span (FDS) and Backward Digit Span (BDS) tasks using summary statistical functions, constructing a global feature vector representing the detailed assessment of each subject's response. A support vector machine (SVM) classifier distinguished MCI from cognitively intact participants. RESULTS: Conventional DS scores did not differentiate MCI participants from controls. The automated multi-feature DS-derived metric achieved an AUC-ROC of 73% with the SVM classifier, independent of additional clinical features (77% when combined with subjects' demographic features), well above the chance level of 50%. CONCLUSION: Our analysis verifies the effectiveness of the introduced measures, derived solely from the DS task, for differentiating subjects with MCI from those with intact cognition.
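
A minimal sketch of the Levenshtein-style correctness counts, assuming the target item and the ASR-transcribed response are digit sequences; the per-item counts would then be aggregated across FDS and BDS items with summary statistics. This is an illustration, and the paper's exact operation definitions may differ.

```python
def align_counts(target, response):
    """Levenshtein alignment of a target digit sequence and a transcribed response,
    returning per-item counts of correct matches, substitutions, deletions, and insertions."""
    n, m = len(target), len(response)
    # dp[i][j] = minimal edit cost aligning target[:i] and response[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i
    for j in range(1, m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if target[i - 1] == response[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match / substitution
    # Backtrace to count edit operations.
    correct = subs = dels = ins = 0
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + (0 if target[i - 1] == response[j - 1] else 1):
            if target[i - 1] == response[j - 1]:
                correct += 1
            else:
                subs += 1
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            dels += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    return {"correct": correct, "substituted": subs, "deleted": dels, "inserted": ins}

# Forward Digit Span item "5 8 2"; the child (via ASR) said "5 2 9".
print(align_counts("582", "529"))  # {'correct': 1, 'substituted': 2, 'deleted': 0, 'inserted': 0}
```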


Asunto(s)
Disfunción Cognitiva/diagnóstico , Disfunción Cognitiva/psicología , Diagnóstico por Computador/métodos , Pruebas Neuropsicológicas , Prueba de Estudio Conceptual , Software de Reconocimiento del Habla , Anciano , Anciano de 80 o más Años , Disfunción Cognitiva/fisiopatología , Diagnóstico por Computador/normas , Diagnóstico Diferencial , Femenino , Humanos , Masculino , Pruebas Neuropsicológicas/normas , Software de Reconocimiento del Habla/normas , Grabación en Cinta/métodos , Grabación en Cinta/normas
9.
Proc Conf Assoc Comput Linguist Meet ; 2020: 177-185, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33060888

ABSTRACT

Many clinical assessment instruments used to diagnose language impairments in children include a task in which the subject must formulate a sentence to describe an image using a specific target word. Because producing sentences in this way requires the speaker to integrate syntactic and semantic knowledge in a complex manner, responses are typically evaluated on several different dimensions of appropriateness yielding a single composite score for each response. In this paper, we present a dataset consisting of non-clinically elicited responses for three related sentence formulation tasks, and we propose an approach for automatically evaluating their appropriateness. Using neural machine translation, we generate correct-incorrect sentence pairs to serve as synthetic data in order to increase the amount and diversity of training data for our scoring model. Our scoring model uses transfer learning to facilitate automatic sentence appropriateness evaluation. We further compare custom word embeddings with pre-trained contextualized embeddings serving as features for our scoring model. We find that transfer learning improves scoring accuracy, particularly when using pre-trained contextualized embeddings.

10.
Front Psychol ; 11: 535, 2020.
Article in English | MEDLINE | ID: mdl-32328008

ABSTRACT

Introduction: Clinically relevant information can go uncaptured in the conventional scoring of a verbal fluency test. We hypothesize that characterizing the temporal aspects of the response through a set of time-related measures will be useful in distinguishing those with MCI from cognitively intact controls. Methods: Audio recordings of an animal fluency test administered to 70 demographically matched older adults (mean age 90.4 years), 28 with mild cognitive impairment (MCI) and 42 cognitively intact (CI), were professionally transcribed and fed into an automatic speech recognition (ASR) system to estimate the start time of each recalled word in the response. Next, we semantically clustered the participant-generated animal names and, through a novel set of time-based measures, characterized each subject's semantic search strategy in retrieving words from animal-name clusters. These time-based features, along with standard count-based features (e.g., the number of correctly retrieved animal names), were then used in a machine learning algorithm trained to distinguish those with MCI from CI controls. Results: The combination of count-based and time-based features, automatically derived from the test response, achieved an AUC-ROC of 77% with the support vector machine (SVM) classifier, outperforming the model trained only on the raw test score (AUC, 65%) and well above the chance model (AUC, 50%). Conclusion: This approach supports the value of introducing time-based measures to the assessment of verbal fluency, differentiating subjects with MCI from those with intact cognition in the context of this generative task.
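
The sketch below illustrates the kind of time-based measures described (not the study's implementation): it takes ASR-estimated start times for recalled animal names, assigns each name to a coarse semantic cluster via a tiny hand-made lookup, and computes pause and cluster-switch statistics alongside the standard correct-word count. The response, cluster lookup, and feature set are placeholders.

```python
import numpy as np

# Hypothetical ASR output: (animal name, start time in seconds).
response = [("dog", 1.2), ("cat", 2.0), ("horse", 3.1),
            ("lion", 7.8), ("tiger", 8.6), ("shark", 15.4), ("whale", 16.9)]

# Tiny hand-made semantic clusters (a real system would use a larger lexicon or embeddings).
clusters = {"dog": "pets", "cat": "pets", "horse": "farm",
            "lion": "wild", "tiger": "wild", "shark": "sea", "whale": "sea"}

words = [w for w, _ in response]
times = np.array([t for _, t in response])
gaps = np.diff(times)                                        # inter-word intervals
labels = [clusters.get(w, "other") for w in words]
switches = sum(a != b for a, b in zip(labels, labels[1:]))   # semantic cluster switches

features = {
    "n_correct": len(set(words)),      # standard count-based score
    "mean_gap": float(gaps.mean()),    # average pause between recalled words
    "max_gap": float(gaps.max()),      # longest retrieval pause
    "n_switches": switches,            # how often the search jumps between clusters
    "mean_within_cluster_gap": float(np.mean(
        [g for g, a, b in zip(gaps, labels, labels[1:]) if a == b])),
}
print(features)
```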

11.
Article in English | MEDLINE | ID: mdl-33642674

ABSTRACT

Conversation is a complex cognitive task that engages multiple aspects of cognitive functions to remember the discussed topics, monitor the semantic and linguistic elements, and recognize others' emotions. In this paper, we propose a computational method based on the lexical coherence of consecutive utterances to quantify topical variations in semi-structured conversations of older adults with cognitive impairments. Extracting the lexical knowledge of conversational utterances, our method generates a set of novel conversational measures that indicate underlying cognitive deficits among subjects with mild cognitive impairment (MCI). Our preliminary results verify the utility of the proposed conversation-based measures in distinguishing MCI from healthy controls.
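
A minimal sketch of utterance-to-utterance lexical coherence, here computed as the cosine similarity between TF-IDF vectors of consecutive utterances; the representation and the summary statistics are assumptions rather than the paper's exact measures.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical consecutive utterances from one participant in a semi-structured conversation.
utterances = [
    "we used to go to the coast every summer",
    "the coast was always cold but we loved it",
    "my granddaughter just started school this year",
    "she likes her teacher very much",
]

vec = TfidfVectorizer().fit_transform(utterances)
# Coherence between each utterance and the one immediately before it.
coherence = [float(cosine_similarity(vec[i], vec[i + 1])[0, 0])
             for i in range(len(utterances) - 1)]

print("pairwise coherence:", [round(c, 2) for c in coherence])
print("mean:", round(float(np.mean(coherence)), 2),
      "std:", round(float(np.std(coherence)), 2),
      "topic shifts (coherence < 0.1):", sum(c < 0.1 for c in coherence))
```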

12.
J Voice ; 33(5): 721-727, 2019 Sep.
Article in English | MEDLINE | ID: mdl-29884509

ABSTRACT

INTRODUCTION: Adductor spasmodic dysphonia (ADSD) is one of the most disabling voice disorders, with no permanent cure. Patients with ADSD suffer from poor voice quality and repeated interruption of phonation that leads to limitations in daily communication. Botox (BT) injection, considered the gold-standard treatment for ADSD, reduces the number of voice breaks and improves voice quality for a limited period. In this study, patients with ADSD were followed after a single BT injection to track changes in quality of life (QOL) and perceptual voice quality over a 6-month period. METHOD: This is a prospective, longitudinal study. Fifteen patients with ADSD were evaluated preinjection and at 1, 3, and 6 months postinjection. They completed the Voice Activity and Participation Profile-Persian Version (VAPPP) and read a passage at each recording period. Perceptual assessment was done by three expert speech-language pathologists with knowledge of ADSD using the grade, roughness, breathiness, asthenia, strain (GRBAS) scale. The data were analyzed using Friedman, Wilcoxon, and McNemar tests. The significance level was set at P < 0.05. RESULTS: The VAPPP total score and each of the domain scores showed their greatest improvement at 3 months postinjection. At 6 months postinjection, the VAPPP scores increased significantly in comparison with the 3-month scores but remained lower than the preinjection scores. GRBAS results also indicated that patients' voices at 1 and 3 months postinjection were significantly less severe in terms of strain and roughness (P = 0.01; P < 0.001, respectively). CONCLUSION: BT injection resulted in improvement of subjects' QOL. The improvement was greatest at 3 months postinjection, and scores remained better than preinjection values at 6 months after injection. Voice quality also improved but was not judged as normal.
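
For readers unfamiliar with the statistical tests named above, here is a small hedged example of running a Friedman test across the four time points and a follow-up Wilcoxon signed-rank comparison with SciPy; the scores are invented, not study data.

```python
from scipy import stats

# Invented VAPPP total scores for six patients at pre-injection and 1, 3, and 6 months post.
pre    = [78, 65, 82, 70, 74, 69]
month1 = [52, 40, 60, 45, 50, 47]
month3 = [35, 30, 44, 33, 38, 31]
month6 = [55, 48, 63, 50, 54, 49]

# Friedman test: do scores differ across the four repeated measurements?
stat, p = stats.friedmanchisquare(pre, month1, month3, month6)
print(f"Friedman chi-square={stat:.2f}, p={p:.4f}")

# Post-hoc Wilcoxon signed-rank test between two time points (e.g., 3 vs. 6 months).
w, p_w = stats.wilcoxon(month3, month6)
print(f"Wilcoxon (3 vs. 6 months): W={w:.1f}, p={p_w:.4f}")
```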


Asunto(s)
Inhibidores de la Liberación de Acetilcolina/administración & dosificación , Toxinas Botulínicas/administración & dosificación , Disfonía/tratamiento farmacológico , Fonación/efectos de los fármacos , Calidad de Vida , Pliegues Vocales/efectos de los fármacos , Calidad de la Voz/efectos de los fármacos , Adulto , Anciano , Disfonía/diagnóstico , Disfonía/fisiopatología , Femenino , Humanos , Inyecciones , Estudios Longitudinales , Masculino , Persona de Mediana Edad , Estudios Prospectivos , Recuperación de la Función , Factores de Tiempo , Resultado del Tratamiento , Pliegues Vocales/fisiopatología
13.
Interspeech ; 2019: 11-15, 2019 Sep.
Article in English | MEDLINE | ID: mdl-33088838

ABSTRACT

This study explores building and improving an automatic speech recognition (ASR) system for children aged 6-9 years and diagnosed with autism spectrum disorder (ASD), language impairment (LI), or both. Working with only 1.5 hours of target data in which children perform the Clinical Evaluation of Language Fundamentals Recalling Sentences task, we apply deep neural network (DNN) weight transfer techniques to adapt a large DNN model trained on the LibriSpeech corpus of adult speech. To begin, we aim to find the best proportional training rates of the DNN layers. Our best configuration yields a 29.38% word error rate (WER). Using this configuration, we explore the effects of the quantity and similarity of data augmentation in transfer learning. We augment our training with portions of the OGI Kids' Corpus, adding 4.6 hours of speech from typically developing speakers in kindergarten through 3rd grade. We find that 2nd grade data alone (approximately the mean age of the target data) outperforms the other grades and all the sets combined. Doubling the data for 1st, 2nd, and 3rd grade, we again compare each grade as well as pairs of grades. We find that the combination of 1st and 2nd grade performs best, at a 26.21% WER.
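
The study tunes "proportional training rates of the DNN layers" during weight transfer; as a generic PyTorch illustration of that idea (not the paper's acoustic model or toolkit), the sketch below assigns smaller learning rates to transferred lower layers and a larger rate to a re-initialized output layer. The architecture and rates are assumptions.

```python
import torch
import torch.nn as nn

# Toy acoustic model standing in for a large DNN pre-trained on adult speech.
model = nn.Sequential(
    nn.Linear(440, 1024), nn.ReLU(),   # lower layers: transferred from the adult model
    nn.Linear(1024, 1024), nn.ReLU(),  # middle layers: transferred
    nn.Linear(1024, 4000),             # output layer: re-initialized for the child task
)

# Proportional per-layer learning rates: small for transferred layers, larger on top.
optimizer = torch.optim.SGD([
    {"params": model[0].parameters(), "lr": 1e-4},   # bottom layer: barely moves
    {"params": model[2].parameters(), "lr": 5e-4},   # middle layer: moves a little more
    {"params": model[4].parameters(), "lr": 5e-3},   # new output layer: trained fastest
], momentum=0.9)

x = torch.randn(8, 440)                  # batch of placeholder acoustic feature frames
loss = model(x).logsumexp(dim=1).mean()  # dummy loss just to exercise the optimizer
loss.backward()
optimizer.step()
```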

14.
Comput Speech Lang ; 50: 62-84, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29628620

ABSTRACT

Computer-Assisted Pronunciation Training (CAPT) systems aim to help a child learn the correct pronunciations of words. However, while there are many commercial CAPT apps online, there is no consensus among speech-language pathologists (SLPs) or non-professionals about which CAPT systems, if any, work well. The prevailing assumption is that practicing with such programs is less reliable and thus does not provide the feedback necessary to allow children to improve their performance. The most common method for assessing pronunciation performance is the Goodness of Pronunciation (GOP) technique. Our paper proposes two new GOP techniques. We have found that pronunciation models that use explicit knowledge about erroneous pronunciation patterns can lead to more accurate classification of whether a phoneme was correctly pronounced. We evaluate the proposed pronunciation assessment methods against a baseline state-of-the-art GOP approach and show that the proposed techniques lead to classification performance that is closer to that of a human expert.
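
The abstract does not restate the GOP definition; for orientation, the standard formulation (Witt and Young) scores a phone p from its aligned acoustic segment as a duration-normalized likelihood ratio, and the baseline presumably follows something close to this:

```latex
\mathrm{GOP}(p) \;=\; \frac{1}{NF(p)}\,
  \log \frac{P\!\left(O^{(p)} \mid p\right)}{\max_{q \in Q} P\!\left(O^{(p)} \mid q\right)}
```

Here O^{(p)} denotes the acoustic frames aligned to phone p, NF(p) is the number of those frames, and Q is the phone inventory; the phone is typically flagged as mispronounced when the score falls below a tuned threshold.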

15.
Alzheimers Dement (N Y) ; 3(2): 219-228, 2017 Jun.
Article in English | MEDLINE | ID: mdl-29067328

ABSTRACT

INTRODUCTION: Trials in Alzheimer's disease are increasingly focusing on prevention in asymptomatic individuals. We hypothesized that indicators of mild cognitive impairment (MCI) may be present in the content of spoken language in older adults and be useful in distinguishing those with MCI from those who are cognitively intact. To test this hypothesis, we performed linguistic analyses of spoken words in participants with MCI and those with intact cognition participating in a clinical trial. METHODS: Data came from a randomized controlled behavioral clinical trial examining the effect of unstructured conversation on cognitive function among older adults with either normal cognition or MCI (ClinicalTrials.gov: NCT01571427). Unstructured conversations (with standardized preselected topics across subjects) were recorded between interviewers and interviewees during the intervention sessions of the trial for 14 MCI and 27 cognitively intact participants. From the transcriptions of interviewees' recordings, we grouped spoken words using Linguistic Inquiry and Word Count (LIWC), a structured table of words that categorizes 2,500 words into 68 word subcategories such as positive and negative words, fillers, and physical states. The number of words in each LIWC subcategory formed a 68-dimensional vector representing the linguistic features of each subject. We used support vector machine and random forest classifiers to distinguish MCI from cognitively intact participants. RESULTS: MCI participants were distinguished from those with intact cognition using linguistic features obtained by LIWC with 84% classification accuracy, well above the chance level of 60%. DISCUSSION: Linguistic analyses of spoken language may be a powerful tool for distinguishing MCI subjects from those with intact cognition. Further studies to assess whether spoken-language-derived measures could detect changes in cognitive function in clinical trials are warranted.
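
As a toy version of the LIWC-based featurization (the real LIWC dictionary is proprietary and has 68 subcategories; the three categories and all transcripts below are made up), the sketch counts words per category for each transcript and feeds the resulting vectors to a random forest classifier:

```python
from sklearn.ensemble import RandomForestClassifier

# Tiny stand-in for the LIWC dictionary: category -> word set (LIWC itself has 68 subcategories).
liwc_like = {
    "positive": {"good", "happy", "nice", "love"},
    "negative": {"bad", "sad", "hate", "worried"},
    "fillers":  {"um", "uh", "like", "well"},
}

def featurize(transcript):
    """Count how many words in the transcript fall into each category."""
    words = transcript.lower().split()
    return [sum(w in vocab for w in words) for vocab in liwc_like.values()]

# Hypothetical interviewee transcripts and cognitive status labels (1 = MCI, 0 = intact).
transcripts = [
    "well um I was worried about the trip but it turned out good",
    "we had a nice time and I love the garden",
    "um uh well I uh forget like what we did",
    "the weather was bad and I felt sad all week",
]
labels = [1, 0, 1, 0]

X = [featurize(t) for t in transcripts]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict([featurize("um well I like had a good day")]))
```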

16.
Proc Int Conf Mach Learn Appl ; 2017: 304-308, 2017 Dec.
Article in English | MEDLINE | ID: mdl-33215167

ABSTRACT

In this study, we explore the feasibility of speech-based techniques to automatically evaluate a nonword repetition (NWR) test. NWR tests, a useful marker for detecting language impairment, require repetition of pronounceable nonwords, such as "D OY F", presented aurally by an examiner or via a recording. Our proposed method first transcribes the verbal responses using ASR techniques and then applies machine learning to the ASR output to predict the gold-standard scores provided by speech-language pathologists. Our experimental results for a sample of 101 children (42 with autism spectrum disorder, or ASD; 18 with specific language impairment, or SLI; and 41 typically developing, or TD) show that the proposed approach is successful in predicting scores on this test, with an averaged product-moment correlation of 0.74 and a mean absolute error of 0.06 (on an observed score range from 0.34 to 0.97) between observed and predicted ratings.

17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 570-573, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268395

ABSTRACT

Automatic detection of falls is important for enabling older adults to live independently and safely in their homes for longer. Current automated fall-detection systems are typically designed using inertial sensors positioned on the body that generate an alert if there is an abrupt change in motion. These inertial sensors provide no information about the context of the person being monitored and are prone to false positives that can limit their ongoing use. We describe a fall-detection system consisting of a wearable inertial measurement unit (IMU) and an RF time-of-flight (ToF) transceiver that ranges with other ToF beacons positioned throughout a home. The ToF ranging enables the system to track the position of the person as they move around the home. We describe and show results from three machine learning algorithms that integrate context-related position information with IMU-based fall detection to enable a deeper understanding of where falls occur and to improve the specificity of fall detection. The beacons used to localize the falls were able to track position to within 0.39 meters of specific waypoints in a simulated home environment. Each of the three algorithms was evaluated with and without context-based false-alarm detection on simulated falls performed by 3 volunteer subjects in a simulated home. False-positive rates were reduced by 50% when context was included.
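
As a schematic of combining IMU-based detection with position context from ToF ranging (not the paper's machine learning algorithms), the sketch below flags a fall when the acceleration magnitude exceeds a threshold and then suppresses alarms in locations where false triggers are expected; the threshold, zones, and data are invented.

```python
import numpy as np

FALL_G_THRESHOLD = 2.5      # acceleration spike threshold in g (assumed value)

# Zones (from ToF beacon ranging) where abrupt motion is common and alarms are suppressed.
SUPPRESS_ZONES = {"bed", "couch"}

def detect_fall(accel_g, zone):
    """Return True if an IMU acceleration spike looks like a fall, given position context."""
    spike = np.max(np.abs(accel_g)) > FALL_G_THRESHOLD
    if spike and zone in SUPPRESS_ZONES:
        return False          # e.g., sitting down heavily on the bed: treat as a false alarm
    return bool(spike)

# Invented samples: (acceleration magnitudes over a short window in g, current zone).
events = [
    (np.array([1.0, 1.1, 3.2, 0.4]), "hallway"),   # hard impact away from furniture -> fall
    (np.array([1.0, 1.2, 2.9, 0.9]), "couch"),     # similar spike on the couch -> suppressed
    (np.array([1.0, 1.0, 1.1, 1.0]), "kitchen"),   # normal walking -> no fall
]
for accel, zone in events:
    print(zone, "->", detect_fall(accel, zone))
```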


Asunto(s)
Accidentes por Caídas , Algoritmos , Monitoreo Ambulatorio/métodos , Humanos , Monitoreo Ambulatorio/instrumentación , Monitoreo Ambulatorio/normas , Sensibilidad y Especificidad
18.
Text Speech Dialog ; 9924: 470-477, 2016 Sep.
Article in English | MEDLINE | ID: mdl-33244525

ABSTRACT

In this paper, we propose an automatic scoring approach for assessing the language deficit in a sentence repetition task used to evaluate children with language disorders. From ASR-transcribed sentences, we extract sentence similarity measures, including WER and Levenshtein distance, and use them as input features in a regression model to predict the reference scores manually rated by experts. Our experimental analysis of subject-level scores for 46 children, 33 diagnosed with autism spectrum disorder (ASD) and 13 with specific language impairment (SLI), shows that the proposed approach successfully predicts scores, with an averaged product-moment correlation of 0.84 between observed and predicted ratings across test folds.

19.
Curr Alzheimer Res ; 12(6): 513-9, 2015.
Article in English | MEDLINE | ID: mdl-26027814

ABSTRACT

BACKGROUND: Detecting early signs of Alzheimer's disease (AD) and mild cognitive impairment (MCI) during the pre-symptomatic phase is becoming increasingly important for cost-effective clinical trials and for deriving maximum benefit from currently available treatment strategies. However, distinguishing early signs of MCI from normal cognitive aging is difficult. Biomarkers have been extensively examined as early indicators of the pathological process of AD, but assessing these biomarkers is expensive and challenging to apply widely among pre-symptomatic, community-dwelling older adults. Here we propose the assessment of social markers, which could provide an alternative or complementary and ecologically valid strategy for identifying the pre-symptomatic phase leading to MCI and AD. METHODS: The data came from a larger randomized controlled clinical trial (RCT) in which we examined whether daily conversational interactions using remote video telecommunications software could improve the cognitive functions of older adult participants. We assessed the proportion of words generated by participants out of the total words produced by both participants and staff interviewers, using transcribed conversations from the intervention sessions of the trial, as an indicator of how the two parties (participant and interviewer) interact in one-on-one conversations. We examined whether the proportion differed between those with intact cognition and those with MCI using, first, generalized estimating equations with the proportion as the outcome and, second, logistic regression models with cognitive status as the outcome in order to estimate the area under the ROC curve (ROC AUC). RESULTS: Compared with those with normal cognitive function, MCI participants generated a greater proportion of the total number of words during the timed conversation sessions (p = 0.01). This difference remained after controlling for participant age, gender, interviewer, and time of assessment (p = 0.03). The logistic regression models showed that the ROC AUC for identifying MCI (vs. normal cognition) was 0.71 (95% confidence interval: 0.54 - 0.89) when the average proportion of words spoken by subjects was entered into the model as a single predictor. CONCLUSION: An ecologically valid social marker such as the proportion of spoken words produced during spontaneous conversations may be sensitive to the transition from normal cognition to MCI.
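
As a sketch of the social-marker computation (illustrative only), the code below derives each participant's proportion of spoken words out of the total words in a conversation and uses it as a single predictor in a logistic regression, reporting an ROC AUC; all counts and labels are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Invented per-participant word counts from transcribed conversation sessions.
participant_words = np.array([820, 640, 910, 700, 1050, 760, 980, 690])
interviewer_words = np.array([400, 520, 350, 500, 300, 480, 330, 510])
mci = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = MCI, 0 = intact cognition

# Social marker: proportion of words produced by the participant.
proportion = participant_words / (participant_words + interviewer_words)

X = proportion.reshape(-1, 1)              # single predictor
model = LogisticRegression().fit(X, mci)
auc = roc_auc_score(mci, model.predict_proba(X)[:, 1])
print("proportions:", np.round(proportion, 2))
print("ROC AUC (in-sample, toy data):", round(auc, 2))
```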


Subject(s)
Cognitive Dysfunction/psychology, Cognitive Dysfunction/rehabilitation, Interview, Psychological/methods, Social Behavior, Speech/physiology, Aged, Aged, 80 and over, Alzheimer Disease/psychology, Alzheimer Disease/rehabilitation, Asymptomatic Diseases/rehabilitation, Biomarkers, Disease Progression, Female, Humans, Logistic Models, Male, Neuropsychological Tests
20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2015: 5573-6, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26737555

ABSTRACT

Phonological disorders affect 10% of preschool and school-age children, adversely affecting their communication, academic performance, and level of interaction. Effective pronunciation training requires prolonged supervised practice and interaction. Unfortunately, many children have no or only limited access to a speech-language pathologist. Computer-assisted pronunciation training has the potential to be a highly effective teaching aid; however, to date such systems remain incapable of identifying pronunciation errors with sufficient accuracy. In this paper, we propose to improve accuracy by (1) learning acoustic models from a large children's speech database, (2) using an explicit model of the typical pronunciation errors of children in the target age range, and (3) explicitly modeling the acoustics of distorted phonemes.


Subject(s)
Phonological Disorder, Child, Humans, Phonetics, Speech, Speech Production Measurement