Results 1 - 12 of 12
1.
J Geriatr Psychiatry Neurol ; 34(5): 357-369, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32723128

ABSTRACT

Neurodegenerative conditions like Alzheimer disease affect millions and have no known cure, making early detection important. In addition to memory impairments, dementia causes substantial changes in speech production, particularly in lexical-semantic characteristics. Existing clinical tools for detecting change often require considerable expertise or time, and efficient methods for identifying persons at risk are needed. This study examined whether early stages of cognitive decline can be identified using an automated calculation of lexical-semantic features of participants' spontaneous speech. Unimpaired or mildly impaired older adults (N = 39, mean age 81 years) produced several monologues (picture descriptions and expository descriptions) and completed a neuropsychological battery, including the Modified Mini-Mental State Exam. Most participants (N = 30) returned one year later for follow-up. Lexical-semantic features of participants' speech (particularly lexical frequency) were significantly correlated with cognitive status at the same visit and also with cognitive status one year in the future. Thus, automated analysis of speech production is closely associated with current and future cognitive test performance and could provide a novel, scalable method for longitudinal tracking of cognitive health.
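A minimal sketch of the kind of automated lexical-frequency computation described above, assuming the third-party wordfreq package; the study's actual feature definitions are not given in the abstract, so the function below is purely illustrative:

```python
# Illustrative sketch: mean lexical frequency of a speech transcript.
# Assumes the third-party `wordfreq` package; the study's actual
# feature definitions are not specified in the abstract.
from wordfreq import zipf_frequency

def mean_zipf_frequency(transcript: str, lang: str = "en") -> float:
    """Average Zipf-scale word frequency over all word tokens."""
    tokens = [t.strip(".,!?;:").lower() for t in transcript.split()]
    tokens = [t for t in tokens if t]
    return sum(zipf_frequency(t, lang) for t in tokens) / len(tokens)

# Lower values indicate rarer (less frequent) word choices.
print(mean_zipf_frequency("the cat sat on the windowsill"))
```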


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Aged, Aged, 80 and over, Cognition, Cognitive Dysfunction/diagnosis, Humans, Neuropsychological Tests, Speech
2.
J Med Internet Res ; 23(2): e21037, 2021 Feb 22.
Article in English | MEDLINE | ID: mdl-33616535

ABSTRACT

BACKGROUND: Facial expressions require the complex coordination of 43 different facial muscles. Parkinson disease (PD) affects facial musculature, leading to "hypomimia" or "masked facies." OBJECTIVE: We aimed to determine whether modern computer vision techniques can be applied to detect masked facies and quantify drug states in PD. METHODS: We trained a convolutional neural network on images extracted from videos of 107 self-identified people with PD, along with 1595 videos of controls, to detect PD hypomimia cues. This trained model was applied to clinical interviews of 35 PD patients in their on and off drug motor states, and to seven journalist interviews with the actor Alan Alda obtained before and after he was diagnosed with PD. RESULTS: The algorithm achieved a test set area under the receiver operating characteristic curve of 0.71 on 54 subjects for detecting PD hypomimia, compared to a value of 0.75 for trained neurologists using the Unified Parkinson Disease Rating Scale-III Facial Expression score. Additionally, the model's accuracy in classifying the on and off drug states in the clinical samples was 63% (22/35), in contrast to an accuracy of 46% (16/35) when using clinical rater scores. Finally, each of Alan Alda's seven interviews was correctly classified as occurring before (versus after) his diagnosis (7/7, 100% accuracy). CONCLUSIONS: This proof-of-principle pilot study demonstrated that computer vision holds promise as a valuable tool for detecting PD hypomimia and for monitoring a patient's motor state in an objective and noninvasive way, particularly given the increasing importance of telemedicine.
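For reference, the evaluation metric reported above (area under the ROC curve) can be computed as in this minimal sketch, assuming scikit-learn; the labels and scores are invented placeholders, not the study's data or modeling stack:

```python
# Placeholder sketch: computing area under the ROC curve for per-subject
# hypomimia predictions, the metric reported in the abstract (AUC = 0.71).
# The labels and scores below are invented, not the study's data.
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                     # 1 = PD, 0 = control
y_score = [0.9, 0.2, 0.35, 0.8, 0.4, 0.3, 0.7, 0.5]   # model probabilities
print(roc_auc_score(y_true, y_score))                 # 0.875 on this toy data
```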


Subject(s)
Parkinson Disease/complications, Vision, Ocular/physiology, Adult, Aged, Aged, 80 and over, Algorithms, Computers, Female, Humans, Male, Middle Aged, Neurologic Examination, Parkinson Disease/physiopathology, Pilot Projects
3.
JMIR Aging ; 6: e46483, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37819025

ABSTRACT

Background: Speech analysis data are promising digital biomarkers for the early detection of Alzheimer disease. However, despite its importance, very few studies in this area have examined whether older adults produce spontaneous speech with characteristics that are sufficiently consistent to be used as proxy markers of cognitive status. Objective: This preliminary study seeks to investigate consistency across lexical characteristics of speech in older adults with and without cognitive impairment. Methods: A total of 39 older adults from a larger, ongoing study (age: mean 81.1, SD 5.9 years) were included. Participants completed neuropsychological testing and both picture description tasks and expository tasks to elicit speech. Participants with T-scores of ≤40 on ≥2 cognitive tests were categorized as having mild cognitive impairment (MCI). Speech features were computed automatically by using Python and the Natural Language Toolkit. Results: Reliability indices based on mean correlations for picture description tasks and expository tasks were similar in persons with and without MCI (with r ranging from 0.49 to 0.65 within tasks). Intraindividual variability was generally preserved across lexical speech features. Speech rate and filler rate were the most consistent indices for the cognitively intact group, and speech rate was the most consistent for the MCI group. Conclusions: Our findings suggest that automatically calculated lexical properties of speech are consistent in older adults with varying levels of cognitive impairment. These findings encourage further investigation of the utility of speech analysis and other digital biomarkers for monitoring cognitive status over time.
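Since the abstract names Python and the Natural Language Toolkit, here is a minimal sketch of how such lexical features might be computed; the filler inventory and exact feature definitions are assumptions, not the study's:

```python
# Sketch of automatically computed lexical features in the spirit of the
# Python/NLTK pipeline described; the filler inventory and exact feature
# definitions here are assumptions, not the study's.
from nltk.tokenize import wordpunct_tokenize

FILLERS = {"um", "uh", "er", "like"}  # hypothetical filler set

def lexical_features(transcript: str, duration_sec: float) -> dict:
    tokens = [t.lower() for t in wordpunct_tokenize(transcript) if t.isalpha()]
    return {
        "speech_rate": len(tokens) / (duration_sec / 60.0),  # words per minute
        "filler_rate": sum(t in FILLERS for t in tokens) / len(tokens),
        "type_token_ratio": len(set(tokens)) / len(tokens),
    }

print(lexical_features("um I see a boy reaching for the cookie jar", 30.0))
```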

4.
Atten Percept Psychophys ; 85(4): 1219-1237, 2023 May.
Article in English | MEDLINE | ID: mdl-37155085

ABSTRACT

The McGurk effect is an illusion in which visible articulations alter the perception of auditory speech (e.g., video 'da' dubbed with audio 'ba' may be heard as 'da'). To test the timing of the multisensory processes that underlie the McGurk effect, Ostrand et al. (Cognition, 151, 96-107, 2016) used incongruent stimuli, such as auditory 'bait' + visual 'date', as primes in a lexical decision task. These authors reported that the auditory word, but not the perceived (visual) word, induced semantic priming, suggesting that the auditory signal alone can provide the input for lexical access before multisensory integration is complete. Here, we conceptually replicate the design of Ostrand et al. (2016), using different stimuli chosen to optimize the success of the McGurk illusion. In contrast to the results of Ostrand et al. (2016), we find that the perceived (i.e., visual) word of the incongruent stimulus usually induced semantic priming. We further find that the strength of this priming corresponded to the magnitude of the McGurk effect for each word combination. These findings suggest, in contrast to those of Ostrand et al. (2016), that lexical access makes use of integrated multisensory information as perceived by the listener, and that which unimodal signal of a multisensory stimulus feeds lexical access depends on how that stimulus is perceived.
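A sketch of the kind of item-level analysis described above, relating priming strength to McGurk-illusion magnitude, with invented placeholder values rather than the study's statistics:

```python
# Placeholder sketch: relating per-item priming strength to McGurk-illusion
# magnitude, as in the item-level analysis described. Values are invented.
import numpy as np
from scipy.stats import pearsonr

# priming effect = RT(unrelated target) - RT(related target), in ms
priming = np.array([35, 12, 48, 20, 41, 8])
mcgurk_rate = np.array([0.82, 0.40, 0.91, 0.55, 0.86, 0.30])  # illusion rate

r, p = pearsonr(priming, mcgurk_rate)
print(f"r = {r:.2f}, p = {p:.3f}")
```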


Subject(s)
Illusions, Speech Perception, Humans, Auditory Perception, Semantics, Visual Perception
5.
PLoS One ; 17(6): e0269242, 2022.
Article in English | MEDLINE | ID: mdl-35704594

ABSTRACT

A central question in understanding human language is how people store, access, and comprehend words. The ongoing COVID-19 pandemic presented a natural experiment to investigate whether language comprehension can be changed in a lasting way by external experiences. We leveraged the sudden increase in the frequency of certain words (mask, isolation, lockdown) to investigate the effects of rapid contextual changes on word comprehension, measured over 10 months within the first year of the pandemic. Using the phonemic restoration paradigm, in which listeners are presented with ambiguous auditory input and report which word they hear, we conducted four online experiments with adult participants across the United States (combined N = 899). We find that the pandemic has reshaped language processing for the long term, changing how listeners process speech and what they expect from ambiguous input. These results show that abrupt changes in linguistic exposure can cause enduring changes to the language system.


Subject(s)
COVID-19, Speech Perception, Adult, COVID-19/epidemiology, Communicable Disease Control, Comprehension, Humans, Language, Pandemics
6.
Appl Neuropsychol Adult ; 29(5): 1250-1257, 2022.
Article in English | MEDLINE | ID: mdl-33377800

ABSTRACT

The population of older adults is growing dramatically, and with it comes an increased prevalence of neurological disorders, including Alzheimer's disease (AD). Though existing cognitive screening tests can aid early detection of cognitive decline, these methods are limited in their sensitivity and require trained administrators. The current study sought to determine whether it is possible to identify persons with mild cognitive impairment (MCI) using automated analysis of spontaneous speech. Participants completed a brief neuropsychological test battery and a spontaneous speech task. MCI was classified using established research criteria, and lexical-semantic features were calculated from spontaneous speech. Logistic regression analyses compared the predictive ability of a commonly used cognitive screening instrument (the Modified Mini-Mental State Exam; 3MS) and speech indices for MCI classification. Testing against constant-only logistic regression models showed that both the 3MS [χ2(1) = 6.18, p = .013; AIC = 41.46] and the speech indices [χ2(16) = 32.42, p = .009; AIC = 108.41] predicted MCI status. Follow-up testing revealed that the full speech model predicted MCI status better than the 3MS did (p = .049). In combination, the current findings suggest that spontaneous speech may have value as a screening measure for identifying cognitive deficits, though confirmation is needed in larger, prospective studies.
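A hedged sketch of the statistical comparison described, using statsmodels with random placeholder data; the study's data and exact model specification are not reproduced here:

```python
# Sketch of the model comparison described: a logistic regression predicting
# MCI status tested against a constant-only model via a likelihood-ratio
# test, with AIC reported. Data are random placeholders, not the study's.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 39
mci = rng.integers(0, 2, n)             # 1 = MCI, 0 = cognitively intact
three_ms = rng.normal(90, 6, n)         # placeholder 3MS total scores

null = sm.Logit(mci, np.ones((n, 1))).fit(disp=0)     # constant-only model
full = sm.Logit(mci, sm.add_constant(three_ms)).fit(disp=0)

lr = 2 * (full.llf - null.llf)          # likelihood-ratio chi-square
p = stats.chi2.sf(lr, df=1)
print(f"chi2(1) = {lr:.2f}, p = {p:.3f}, AIC = {full.aic:.2f}")
```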


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Aged, Alzheimer Disease/diagnosis, Alzheimer Disease/psychology, Cognitive Dysfunction/diagnosis, Cognitive Dysfunction/psychology, Humans, Neuropsychological Tests, Prospective Studies, Speech
7.
J Phon ; 88, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34366499

ABSTRACT

During conversation, speakers modulate characteristics of their production to match their interlocutors' characteristics, a behavior known as alignment. Speakers align at many linguistic levels, including the syntactic, lexical, and phonetic levels. As a result, alignment is often treated as a unitary phenomenon, in which evidence of alignment on one feature is cast as alignment of the entire linguistic level. This experiment investigates whether alignment can occur at some levels but not others, and on some features but not others, within a given dialogue. Participants interacted with two experimenters with highly contrasting acoustic-phonetic and syntactic profiles. Each experimenter described sets of pictures using a consistent acoustic-phonetic and syntactic profile; the participants then described new pictures to each experimenter individually. Alignment was measured as the degree to which subjects matched their current listener's speech (vs. their non-listener's) on each of several individual acoustic-phonetic and syntactic features. Additionally, a holistic measure of phonetic alignment was assessed using 323 acoustic-phonetic features analyzed jointly in a machine learning classifier. Although participants did not align on several individual spectral-phonetic or syntactic features, they did align on individual temporal-phonetic features and on the holistic acoustic-phonetic profile. Thus, alignment can simultaneously occur at some levels but not others within a given dialogue, and is not a single phenomenon but rather a constellation of loosely related effects. These findings suggest that the mechanism underlying alignment is not a primitive, automatic priming mechanism but rather is guided by communicative or social factors.
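A minimal sketch of one plausible per-feature alignment measure matching the description above; the exact formula used in the study is an assumption:

```python
# Sketch of one plausible per-feature alignment measure matching the
# description: how much closer the participant's value is to the current
# listener's than to the non-listener's. The exact formula is an assumption.
def alignment(participant: float, listener: float, non_listener: float) -> float:
    """Positive values mean the participant is closer to the current listener."""
    return abs(participant - non_listener) - abs(participant - listener)

# e.g., participant speech rate 180 wpm; listener at 190, non-listener at 150
print(alignment(180.0, listener=190.0, non_listener=150.0))  # 20.0 -> aligned
```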

8.
Neuropsychopharmacology ; 45(5): 823-832, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31978933

ABSTRACT

The detection of changes in mental states such as those caused by psychoactive drugs relies on clinical assessments that are inherently subjective. Automated speech analysis may represent a novel method to detect objective markers, which could help improve the characterization of these mental states. In this study, we employed computer-extracted speech features from multiple domains (acoustic, semantic, and psycholinguistic) to assess mental states after controlled administration of 3,4-methylenedioxymethamphetamine (MDMA) and intranasal oxytocin. The training/validation set comprised within-participants data from 31 healthy adults who, over four sessions, were administered MDMA (0.75, 1.5 mg/kg), oxytocin (20 IU), and placebo in randomized, double-blind fashion. Participants completed two 5-min speech tasks during peak drug effects. Analyses included group-level comparisons of drug conditions and estimation of classification at the individual level within this dataset and on two independent datasets. Promising classification results were obtained for detecting drug conditions, achieving cross-validated accuracies of up to 87% in training/validation and 92% in the independent datasets, suggesting that the detected patterns of speech variability are associated with drug consumption. Specifically, we found that the effects of oxytocin seem to be driven mostly by changes in emotion and prosody, which are mainly captured by acoustic features. In contrast, mental states driven by MDMA consumption appear to manifest in multiple domains of speech. Furthermore, we find that the experimental task affects the speech response within these mental states, which can be attributed to the presence or absence of an interaction with another individual. These results represent a proof-of-concept application of the potential of speech to provide an objective measurement of mental states elicited during intoxication.
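A minimal sketch of cross-validated drug-condition classification from speech features, assuming scikit-learn and random placeholder data; the study's actual features and classifier are not specified in the abstract:

```python
# Sketch of cross-validated drug-condition classification from speech
# features, as in the reported cross-validated accuracies; features, labels,
# and classifier here are placeholders, not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(124, 20))      # 31 participants x 4 sessions, 20 features
y = rng.integers(0, 2, size=124)    # e.g., 1 = MDMA, 0 = placebo

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```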


Subject(s)
Language, N-Methyl-3,4-methylenedioxyamphetamine/administration & dosage, Neuropsychological Tests, Psychotropic Drugs/administration & dosage, Speech/drug effects, Administration, Intranasal, Adult, Double-Blind Method, Female, Humans, Male, Oxytocin/administration & dosage, Psycholinguistics, Semantics, Young Adult
9.
J Mem Lang ; 108, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31379406

ABSTRACT

Conversational partners match each other's speech, a process known as alignment. Such alignment can be partner-specific, when speakers match particular partners' production distributions, or partner-independent, when speakers match aggregated linguistic statistics across their input. However, partner-specificity has only been assessed in situations where it had clear communicative utility, and non-alignment might cause communicative difficulty. Here, we investigate whether speakers align partner-specifically even without a communicative need, and thus whether the mechanism driving alignment is sensitive to communicative and social factors of the linguistic context. In five experiments, participants interacted with two experimenters, each with unique and systematic syntactic preferences (e.g., Experimenter A only produced double object datives and Experimenter B only produced prepositional datives). Across multiple exposure conditions, participants engaged in partner-independent but not partner-specific alignment. Thus, when partner-specificity does not add communicative utility, speakers align to aggregate, partner-independent statistical distributions, supporting a communicatively modulated mechanism underlying alignment.

10.
J Mem Lang ; 107: 216-232, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31942088

ABSTRACT

Syntactic structures can convey certain (subtle) emergent properties of events. For example, the double-object dative ("the doctor is giving a patient pills") can convey the successful transfer of possession, whereas its syntactic alternative, the prepositional dative ("the doctor is giving pills to a patient"), conveys just a transfer to a location. Four experiments explore how syntactic structures may become associated with particular semantic content - such as these emergent properties of events. Experiment 1 provides evidence that speakers form associations between syntactic structures and particular event depictions. Experiment 2 shows that these associations also hold for different depictions of the same events. Experiments 3 and 4 implicate representations of the semantic features of events in these associations. Taken together, these results reveal an effect we term syntactic entrainment that is well positioned to reflect the recalibration of the strength of the mappings or associations that allow syntactic structures to convey emergent properties of events.

11.
Cognition ; 151: 96-107, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27011021

ABSTRACT

Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another.


Subject(s)
Acoustic Stimulation/methods, Photic Stimulation/methods, Reaction Time/physiology, Speech Perception/physiology, Visual Perception/physiology, Adolescent, Adult, Auditory Perception/physiology, Female, Humans, Male, Pilot Projects, Random Allocation, Young Adult
12.
Cogsci ; 33: 1376-1381, 2011.
Article in English | MEDLINE | ID: mdl-39399657

ABSTRACT

In the McGurk Effect, a visual stimulus can affect the perception of an auditory signal, suggesting integration of the auditory and visual streams. However, it is unclear when in speech processing this auditory-visual integration occurs. The present study used a semantic priming paradigm to investigate whether integration occurs before, during, or after access of the lexical-semantic network. Semantic associates of the un-integrated auditory signal were activated when the auditory stream was a word, while semantic associates of the integrated McGurk percept (a real word) were activated when the auditory signal was a nonword. These results suggest that the temporal relationship between lexical access and integration depends on the lexicality of the auditory stream.
