Results 1 - 20 of 8,222
1.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38715409

ABSTRACT

Behavioral and brain-related changes in word production have been claimed to occur predominantly after 70 years of age. Most studies investigating age-related changes in adulthood compared only young to older adults, failing to determine whether the neural processes underlying word production change at an earlier age than is observed in behavior. This study aims to fill this gap by investigating whether changes in the neurophysiological processes underlying word production are aligned with behavioral changes. Behavior and electrophysiological event-related potential patterns of word production were assessed during a picture-naming task in 95 participants across five adult lifespan age groups (ranging from 16 to 80 years old). While behavioral performance declined starting at 70 years of age, significant neurophysiological changes were already present by the age of 40, in a time window (between 150 and 220 ms) likely associated with the lexical-semantic processes underlying referential word production. These results show that neurophysiological modifications precede behavioral changes in language production; they can be interpreted in line with the suggestion that lexical-semantic reorganization in mid-adulthood helps maintain language skills longer than other cognitive functions.


Subject(s)
Aging , Electroencephalography , Evoked Potentials , Humans , Adult , Aged , Male , Middle Aged , Female , Young Adult , Adolescent , Aged, 80 and over , Aging/physiology , Evoked Potentials/physiology , Brain/physiology , Speech/physiology , Semantics
2.
PLoS One ; 19(5): e0302739, 2024.
Article in English | MEDLINE | ID: mdl-38728329

ABSTRACT

BACKGROUND: Deep brain stimulation (DBS) reliably ameliorates cardinal motor symptoms in Parkinson's disease (PD) and essential tremor (ET). However, the effects of DBS on speech, voice and language have been inconsistent and have not been examined comprehensively in a single study. OBJECTIVE: We conducted a systematic analysis of the literature by reviewing studies that examined the effects of DBS on speech, voice and language in PD and ET. METHODS: A total of 675 publications were retrieved from the PubMed, Embase, CINAHL, Web of Science, Cochrane Library and Scopus databases. Based on our selection criteria, 90 papers were included in our analysis. The selected publications were categorized into four subcategories: fluency, word production, articulation and phonology, and voice quality. RESULTS: The results suggested a long-term decline in verbal fluency, with more studies reporting deficits in phonemic fluency than in semantic fluency following DBS. Additionally, high-frequency stimulation, left-sided DBS and bilateral DBS were associated with worse verbal fluency outcomes. Naming improved in the short term with DBS-ON compared to DBS-OFF, with no long-term differences between the two conditions. Bilateral and low-frequency DBS demonstrated a relative improvement in phonation and articulation. Nonetheless, long-term DBS exacerbated phonation and articulation deficits. The effect of DBS on voice was highly variable, with both improvements and deterioration across different measures of voice. CONCLUSION: This is the first study to combine speech, voice, and language outcomes following DBS in a single systematic review. The findings revealed a heterogeneous pattern of results for speech, voice, and language across DBS studies, and provide directions for future studies.


Subject(s)
Deep Brain Stimulation , Language , Parkinson Disease , Speech , Voice , Deep Brain Stimulation/methods , Humans , Parkinson Disease/therapy , Parkinson Disease/physiopathology , Speech/physiology , Voice/physiology , Essential Tremor/therapy , Essential Tremor/physiopathology
3.
Cogn Res Princ Implic ; 9(1): 29, 2024 05 12.
Article in English | MEDLINE | ID: mdl-38735013

ABSTRACT

Auditory stimuli that are relevant to a listener have the potential to capture focal attention even when unattended, the listener's own name being a particularly effective stimulus. We report two experiments testing the attention-capturing potential of the listener's own name in normal speech and in time-compressed speech. In Experiment 1, 39 participants were tested on a visual word categorization task with uncompressed spoken names as background auditory distractors. Participants' word categorization performance was slower when hearing their own name rather than other names, and in a final test, they were faster at detecting their own name than other names. Experiment 2 used the same task paradigm, but the auditory distractors were time-compressed names. Three compression levels were tested, with 25 participants in each condition. Participants' word categorization performance was again slower when hearing their own name than when hearing other names; the slowing was strongest with slight compression and weakest with intense compression. Personally relevant time-compressed speech thus has the potential to capture attention, but the degree of capture depends on the level of compression. Attention capture by time-compressed speech has practical significance and provides partial evidence for the duplex-mechanism account of auditory distraction.


Subject(s)
Attention , Names , Speech Perception , Humans , Attention/physiology , Female , Male , Speech Perception/physiology , Adult , Young Adult , Speech/physiology , Reaction Time/physiology , Acoustic Stimulation
4.
Neuroimage ; 293: 120629, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38697588

ABSTRACT

Covert speech (CS) refers to speaking internally to oneself without producing any sound or movement. CS is involved in multiple cognitive functions and disorders. Reconstructing CS content with brain-computer interfaces (BCIs) is also an emerging technique. However, it is still controversial whether CS is a truncated neural process of overt speech (OS) or involves independent patterns. Here, we performed a word-speaking experiment with simultaneous EEG-fMRI in 32 participants, who generated words both overtly and covertly. By integrating spatial constraints from fMRI into EEG source localization, we precisely estimated the spatiotemporal dynamics of neural activity. During CS, EEG source activity was localized in three regions: the left precentral gyrus, the left supplementary motor area, and the left putamen. Although OS involved more brain regions with stronger activations, CS was characterized by an earlier event-locked activation in the left putamen (peak at 262 ms versus 1170 ms). The left putamen was also identified as the only hub node within the functional connectivity (FC) networks of both OS and CS, while showing weaker FC strength towards speech-related regions in the dominant hemisphere during CS. Path analysis revealed significant multivariate associations, indicating an indirect association between the earlier activation in the left putamen and CS, mediated by reduced FC towards speech-related regions. These findings reveal the specific spatiotemporal dynamics of CS, offering insights into CS mechanisms that are potentially relevant for the future treatment of self-regulation deficits and speech disorders, and for the development of BCI speech applications.


Subject(s)
Electroencephalography , Magnetic Resonance Imaging , Speech , Humans , Male , Magnetic Resonance Imaging/methods , Female , Speech/physiology , Adult , Electroencephalography/methods , Young Adult , Brain/physiology , Brain/diagnostic imaging , Brain Mapping/methods
5.
Cereb Cortex ; 34(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38741267

ABSTRACT

The role of the left temporoparietal cortex in speech production has been extensively studied during native language processing, proving crucial in controlled lexico-semantic retrieval under varying cognitive demands. Yet, its role in bilinguals, fluent in both native and second languages, remains poorly understood. Here, we employed continuous theta burst stimulation (cTBS) to disrupt neural activity in the left posterior middle temporal gyrus (pMTG) and angular gyrus (AG) while Italian-Friulian bilinguals performed a cued picture-naming task. The task involved between-language blocks (naming objects in Italian or Friulian) and within-language blocks (naming objects ["knife"] or associated actions ["cut"] in a single language), in which participants could either maintain (non-switch) or change (switch) instructions based on cues. During within-language blocks, cTBS over the pMTG led to faster naming in high-demanding switch trials, while cTBS to the AG slowed latencies in low-demanding non-switch trials. No cTBS effects were observed in the between-language blocks. Our findings suggest a causal involvement of the left pMTG and AG in lexico-semantic processing across languages, with distinct contributions to controlled vs. "automatic" retrieval, respectively. However, they do not support the existence of shared control mechanisms for within- and between-language production. Altogether, these results inform neurobiological models of semantic control in bilinguals.


Subject(s)
Multilingualism , Parietal Lobe , Speech , Temporal Lobe , Transcranial Magnetic Stimulation , Humans , Male , Temporal Lobe/physiology , Female , Young Adult , Adult , Parietal Lobe/physiology , Speech/physiology , Cues
6.
Nat Commun ; 15(1): 3692, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693186

ABSTRACT

Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with brain regions critical for oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon we termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural differences in auditory cognition.
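For readers unfamiliar with the temporal response functions (TRFs) used above, the mapping from a stimulus feature such as the speech envelope to an ongoing signal (neural or, here, gaze) is commonly estimated by ridge regression over time-lagged copies of the stimulus. Below is a minimal sketch under assumed values for the sampling rate, lag range, and regularization; the signals are synthetic stand-ins, not data from the study.

```python
# Minimal TRF sketch: ridge regression from time-lagged stimulus copies
# to a response signal. All parameter values are illustrative assumptions.
import numpy as np

def lagged_design(stim, lags):
    """Stack time-shifted copies of the stimulus as regressor columns."""
    X = np.zeros((stim.size, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:stim.size - lag]
        else:
            X[:lag, j] = stim[-lag:]
    return X

fs = 100                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(2)
envelope = rng.standard_normal(60 * fs)    # stand-in for a speech envelope
lags = list(range(0, 40))                  # 0-400 ms of lags, assumed
X = lagged_design(envelope, lags)

# Synthetic response: envelope filtered through an unknown TRF, plus noise.
true_trf = rng.standard_normal(len(lags))
response = X @ true_trf + rng.standard_normal(X.shape[0])

lam = 1.0                                  # ridge parameter, assumed
trf = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ response)
```

The recovered `trf` weights then describe how strongly, and at which delays, the signal tracks the stimulus; stronger tracking of an attended versus unattended talker is the comparison the study draws.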


Subject(s)
Attention , Eye Movements , Magnetoencephalography , Speech Perception , Speech , Humans , Attention/physiology , Eye Movements/physiology , Male , Female , Adult , Young Adult , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation , Brain/physiology , Eye-Tracking Technology
7.
Sci Rep ; 14(1): 11491, 2024 05 20.
Article in English | MEDLINE | ID: mdl-38769115

ABSTRACT

Several attempts at speech brain-computer interfacing (BCI) have been made to decode phonemes, sub-words, words, or sentences using invasive measurements, such as the electrocorticogram (ECoG), during auditory speech perception, overt speech, or imagined (covert) speech. Decoding sentences from covert speech is a challenging task. Sixteen epilepsy patients with intracranially implanted electrodes participated in this study, and ECoGs were recorded during overt and covert speech of eight Japanese sentences, each consisting of three tokens. In particular, a Transformer neural network model was applied to decode text sentences from covert speech. We first examined the model using the same task for training and testing, and then evaluated its performance when trained on overt speech and tested on covert speech. The Transformer model trained on covert speech achieved an average token error rate (TER) of 46.6% for decoding covert speech, whereas the model trained on overt speech achieved a TER of 46.3% (p > 0.05; d = 0.07). Therefore, the challenge of collecting training data for covert speech can be addressed using overt speech. Decoding performance for covert speech may be improved further by employing additional overt speech recordings.
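As a point of reference, a token error rate of this kind is conventionally computed as the token-level edit distance (substitutions, insertions, deletions) between the reference and decoded sentences, normalized by the reference length. A minimal sketch follows; the tokens and normalization choice are illustrative assumptions, not the paper's published scoring pipeline.

```python
# Sketch: token error rate (TER) between a reference and a decoded sentence,
# computed as token-level Levenshtein distance / reference length.

def token_error_rate(reference: list[str], hypothesis: list[str]) -> float:
    """Edit distance (substitutions + insertions + deletions) over tokens."""
    m, n = len(reference), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / max(m, 1)

# One wrong token in a three-token sentence -> TER = 1/3 (tokens are made up)
print(token_error_rate(["watashi", "wa", "hashiru"],
                       ["watashi", "ga", "hashiru"]))
```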


Subject(s)
Brain-Computer Interfaces , Electrocorticography , Speech , Humans , Female , Male , Adult , Speech/physiology , Speech Perception/physiology , Young Adult , Feasibility Studies , Epilepsy/physiopathology , Neural Networks, Computer , Middle Aged , Adolescent
8.
Curr Biol ; 34(9): R348-R351, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38714162

ABSTRACT

A recent study has used scalp-recorded electroencephalography to obtain evidence of semantic processing of human speech and objects by domesticated dogs. The results suggest that dogs do comprehend the meaning of familiar spoken words, in that a word can evoke the mental representation of the object to which it refers.


Subject(s)
Cognition , Semantics , Animals , Dogs/psychology , Cognition/physiology , Humans , Electroencephalography , Speech/physiology , Speech Perception/physiology , Comprehension/physiology
9.
Sci Adv ; 10(20): eadm9797, 2024 May 17.
Article in English | MEDLINE | ID: mdl-38748798

ABSTRACT

Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while both songs and speech use similar (iv) pitch interval size and (v) timbral brightness. Exploratory analyses suggest that features vary along a "musi-linguistic" continuum when instrumental melodies and recited lyrics are included. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.


Subject(s)
Language , Music , Speech , Humans , Speech/physiology , Male , Pitch Perception/physiology , Female , Adult , Preregistration Publication
10.
J Acoust Soc Am ; 155(5): 3206-3212, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38738937

ABSTRACT

Modern humans and chimpanzees share a common ancestor on the phylogenetic tree, yet chimpanzees do not spontaneously produce speech or speech sounds. The lab exercise presented in this paper was developed for undergraduate students in a course entitled "What's Special About Human Speech?" The exercise is based on acoustic analyses of the words "cup" and "papa" as spoken by Viki, a home-raised, speech-trained chimpanzee, as well as the words spoken by a human. The analyses allow students to relate differences in articulation and vocal abilities between Viki and humans to the known anatomical differences in their vocal systems. Anatomical and articulation differences between humans and Viki include (1) potential tongue movements, (2) presence or absence of laryngeal air sacs, (3) presence or absence of vocal membranes, and (4) exhalation vs inhalation during production.


Subject(s)
Pan troglodytes , Speech Acoustics , Speech , Humans , Animals , Pan troglodytes/physiology , Speech/physiology , Tongue/physiology , Tongue/anatomy & histology , Vocalization, Animal/physiology , Species Specificity , Speech Production Measurement , Larynx/physiology , Larynx/anatomy & histology , Phonetics
11.
Proc Natl Acad Sci U S A ; 121(22): e2316149121, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38768342

ABSTRACT

Speech impediments are a prominent yet understudied symptom of Parkinson's disease (PD). While the subthalamic nucleus (STN) is an established clinical target for treating motor symptoms, these interventions can lead to further worsening of speech. The interplay between dopaminergic medication, STN circuitry, and their downstream effects on speech in PD is not yet fully understood. Here, we investigate the effect of dopaminergic medication on STN circuitry and probe its association with speech and cognitive functions in PD patients. We found that changes in intrinsic functional connectivity of the STN were associated with alterations in speech functions in PD. Interestingly, this relationship was characterized by altered functional connectivity of the dorsolateral and ventromedial subdivisions of the STN with the language network. Crucially, medication-induced changes in functional connectivity between the STN's dorsolateral subdivision and key regions in the language network, including the left inferior frontal cortex and the left superior temporal gyrus, correlated with alterations on a standardized neuropsychological test requiring oral responses. This relation was not observed in the written version of the same test. Furthermore, changes in functional connectivity between the STN and language regions predicted the medication's downstream effects on speech-related cognitive performance. These findings reveal a previously unidentified brain mechanism through which dopaminergic medication influences speech function in PD. Our study sheds light on the subcortical-cortical circuit mechanisms underlying impaired speech control in PD. The insights gained here could inform treatment strategies aimed at mitigating speech deficits in PD and enhancing the quality of life for affected individuals.


Subject(s)
Language , Parkinson Disease , Speech , Subthalamic Nucleus , Humans , Parkinson Disease/physiopathology , Parkinson Disease/drug therapy , Subthalamic Nucleus/physiopathology , Subthalamic Nucleus/drug effects , Male , Speech/physiology , Speech/drug effects , Female , Middle Aged , Aged , Magnetic Resonance Imaging , Dopamine/metabolism , Nerve Net/drug effects , Nerve Net/physiopathology , Cognition/drug effects , Dopamine Agents/pharmacology , Dopamine Agents/therapeutic use
12.
J Neural Eng ; 21(3)2024 May 09.
Article in English | MEDLINE | ID: mdl-38648783

ABSTRACT

Objective. Our goal is to decode the firing patterns of single neurons in the left ventralis intermediate nucleus (Vim) of the thalamus, related to speech production, perception, and imagery. For realistic speech brain-machine interfaces (BMIs), we aim to characterize the number of thalamic neurons necessary for high-accuracy decoding. Approach. We intraoperatively recorded single-neuron activity in the left Vim of eight neurosurgical patients undergoing implantation of a deep brain stimulator or RF lesioning, during production, perception and imagery of the five monophthongal vowel sounds. We utilized the Spade decoder, a machine learning algorithm that dynamically learns specific features of firing patterns and is based on sparse decomposition of the high-dimensional feature space. Main results. Spade outperformed all algorithms it was compared with, for all three aspects of speech: production, perception and imagery, obtaining accuracies of 100%, 96%, and 92%, respectively (chance level: 20%), based on pooling neurons across all patients. Accuracy was logarithmic in the number of neurons for all three aspects of speech. Regardless of the number of units employed, production yielded the highest accuracies, whereas perception and imagery were comparable. Significance. Our research renders single-neuron activity in the left Vim a promising source of inputs to BMIs for the restoration of speech faculties in locked-in patients or patients with anarthria or dysarthria, allowing them to communicate again. Our characterization of how many neurons are necessary to achieve a given decoding accuracy is of utmost importance for planning BMI implantation.
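The "accuracy is logarithmic in the number of neurons" relationship can be illustrated by fitting accuracy(n) = a·ln(n) + b to pooled-decoder results. The data points below are made-up placeholders, not values from the paper; the sketch only shows the form of the fit.

```python
# Sketch: fit a logarithmic accuracy-vs-neuron-count curve.
# Both arrays are illustrative placeholders, not published results.
import numpy as np

n_neurons = np.array([1, 2, 4, 8, 16, 32])
accuracy = np.array([0.35, 0.48, 0.61, 0.75, 0.88, 0.99])  # placeholder values

a, b = np.polyfit(np.log(n_neurons), accuracy, deg=1)      # least-squares fit
print(f"accuracy ~ {a:.2f} * ln(n) + {b:.2f}")
print("predicted accuracy with 20 neurons:", a * np.log(20) + b)
```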


Subject(s)
Brain-Computer Interfaces , Machine Learning , Neurons , Speech , Thalamus , Humans , Neurons/physiology , Male , Female , Middle Aged , Speech/physiology , Adult , Thalamus/physiology , Deep Brain Stimulation/methods , Aged , Speech Perception/physiology
13.
Curr Biol ; 34(8): 1731-1738.e3, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38593800

ABSTRACT

In face-to-face interactions with infants, human adults exhibit a species-specific communicative signal. Adults present a distinctive "social ensemble": they use infant-directed speech (parentese), respond contingently to infants' actions and vocalizations, and react positively through mutual eye-gaze and smiling. Studies suggest that this social ensemble is essential for initial language learning. Our hypothesis is that the social ensemble attracts attentional systems to speech and that sensorimotor systems prepare infants to respond vocally, both of which advance language learning. Using infant magnetoencephalography (MEG), we measure 5-month-old infants' neural responses during live verbal face-to-face (F2F) interaction with an adult (social condition) and during a control (nonsocial condition) in which the adult turns away from the infant to speak to another person. Using a longitudinal design, we tested whether infants' brain responses to these conditions at 5 months of age predicted their language growth at five future time points. Brain areas involved in attention (right hemisphere inferior frontal, right hemisphere superior temporal, and right hemisphere inferior parietal) show significantly higher theta activity in the social versus nonsocial condition. Critical to theory, we found that infants' neural activity in response to F2F interaction in attentional and sensorimotor regions significantly predicted future language development into the third year of life, more than 2 years after the initial measurements. We develop a view of early language acquisition that underscores the centrality of the social ensemble, and we offer new insight into the neurobiological components that link infants' language learning to their early brain functioning during social interaction.


Subject(s)
Brain , Language Development , Magnetoencephalography , Social Interaction , Humans , Infant , Male , Female , Brain/physiology , Attention/physiology , Speech/physiology
14.
Cogn Res Princ Implic ; 9(1): 25, 2024 04 23.
Article in English | MEDLINE | ID: mdl-38652383

ABSTRACT

The use of face coverings can make communication more difficult by removing access to visual cues as well as affecting the physical transmission of speech sounds. This study aimed to assess the independent and combined contributions of visual and auditory cues to impaired communication when face coverings are used. In an online task, 150 participants rated videos of natural conversation along three dimensions: (1) how much they could follow, (2) how much effort was required, and (3) the clarity of the speech. Visual and audio variables were independently manipulated in each video, so that the same video could be presented with or without a superimposed surgical-style mask, accompanied by one of four audio conditions (unfiltered audio, or audio filtered to simulate the attenuation associated with a surgical mask, an FFP3 mask, or a visor). Hypotheses and analyses were pre-registered. Both the audio and visual variables had a statistically significant negative impact across all three dimensions. Whether or not talkers' faces were visible made the largest contribution to participants' ratings. The study identifies a degree of attenuation whose negative effects can be overcome by the restoration of visual cues. The significant effects observed in this nominally low-demand task (speech in quiet) highlight the importance of visual and audio cues in everyday life and suggest that both should be considered in future face mask designs.


Subject(s)
Cues , Speech Perception , Humans , Adult , Female , Male , Young Adult , Speech Perception/physiology , Visual Perception/physiology , Masks , Adolescent , Speech/physiology , Communication , Middle Aged , Facial Recognition/physiology
15.
eNeuro ; 11(5)2024 May.
Article in English | MEDLINE | ID: mdl-38658138

ABSTRACT

A growing number of patients worldwide are being diagnosed with dementia, which emphasizes the urgent need for early detection markers. In this study, we built on the auditory hypersensitivity theory of a previous study, which postulated that responses to auditory input are enhanced in cognitive decline at both subcortical and cortical levels, and examined auditory encoding of natural continuous speech at both neural levels for its potential to indicate cognitive decline. We recruited participants aged 60 years and older, who were divided into two groups based on the Montreal Cognitive Assessment: one group with low scores (n = 19, participants with signs of cognitive decline) and a control group (n = 25). Participants completed an audiometric assessment, and we then recorded their electroencephalography while they listened to an audiobook and to click sounds. We derived temporal response functions and evoked potentials from the data and examined response amplitudes for their potential to predict cognitive decline, controlling for hearing ability and age. Contrary to our expectations, no evidence of auditory hypersensitivity was observed in participants with signs of cognitive decline; response amplitudes were comparable in both cognitive groups. Moreover, the combination of response amplitudes showed no predictive value for cognitive decline. These results challenge the proposed hypothesis and emphasize the need for further research to identify reliable auditory markers for the early detection of cognitive decline.


Subject(s)
Cognitive Dysfunction , Electroencephalography , Evoked Potentials, Auditory , Humans , Female , Male , Aged , Cognitive Dysfunction/physiopathology , Cognitive Dysfunction/diagnosis , Middle Aged , Evoked Potentials, Auditory/physiology , Speech Perception/physiology , Aged, 80 and over , Cerebral Cortex/physiology , Cerebral Cortex/physiopathology , Acoustic Stimulation , Speech/physiology
16.
Sci Rep ; 14(1): 9617, 2024 04 26.
Article in English | MEDLINE | ID: mdl-38671062

ABSTRACT

Brain-computer interfaces (BCIs) that reconstruct and synthesize speech using brain activity recorded with intracranial electrodes may pave the way toward novel communication interfaces for people who have lost their ability to speak, or who are at high risk of losing this ability, due to neurological disorders. Here, we report online synthesis of intelligible words using a chronically implanted BCI in a man with impaired articulation due to amyotrophic lateral sclerosis (ALS), participating in a clinical trial (ClinicalTrials.gov, NCT03567213) exploring different strategies for BCI communication. The 3-stage approach reported here relies on recurrent neural networks to identify, decode and synthesize speech from electrocorticographic (ECoG) signals acquired across motor, premotor and somatosensory cortices. We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the participant from a vocabulary of 6 keywords previously used for decoding commands to control a communication board. Evaluation of the intelligibility of the synthesized speech indicates that 80% of the words can be correctly recognized by human listeners. Our results show that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words while preserving the participant's voice profile, and provide further evidence for the stability of ECoG for speech-based BCIs.


Subject(s)
Amyotrophic Lateral Sclerosis , Brain-Computer Interfaces , Speech , Humans , Amyotrophic Lateral Sclerosis/physiopathology , Amyotrophic Lateral Sclerosis/therapy , Male , Speech/physiology , Middle Aged , Electrodes, Implanted , Electrocorticography
17.
J Neural Eng ; 21(3)2024 May 07.
Article in English | MEDLINE | ID: mdl-38648782

ABSTRACT

Objective. Brain-computer interfaces (BCIs) have the potential to reinstate lost communication faculties. Results from speech decoding studies indicate that a usable speech BCI based on activity in the sensorimotor cortex (SMC) can be achieved using subdurally implanted electrodes. However, the optimal characteristics for a successful speech implant are largely unknown. We address this topic in a high-field blood oxygenation level dependent functional magnetic resonance imaging (fMRI) study by assessing the decodability of spoken words as a function of hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Approach. Twelve subjects completed a 7T fMRI experiment in which they pronounced 6 different pseudo-words over 6 runs. We divided the SMC by hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Classification was performed in these SMC areas using a multiclass support vector machine (SVM). Main results. Significant classification was possible from the SMC, but no preference for the left or right hemisphere, nor for the precentral or postcentral gyrus, was detected for optimal word classification. Classification using information from the cortical surface was slightly better than classification using information from deep in the central sulcus, and was highest within the ventral 50% of the SMC. Confusion matrices were highly similar across the entire SMC. An SVM searchlight analysis revealed significant classification in the superior temporal gyrus and the left planum temporale, in addition to the SMC. Significance. The current results support a unilateral implant using surface electrodes covering the ventral 50% of the SMC. The added value of depth electrodes is unclear. We did not observe evidence for variations in the qualitative nature of information across the SMC. The current results need to be confirmed in paralyzed patients performing attempted speech.
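A minimal sketch of this kind of ROI-wise decoding analysis: classify which of 6 pseudo-words was spoken from voxel patterns in an SMC subregion using a linear multiclass SVM with cross-validation. The data shapes, random patterns, and fold scheme below are illustrative assumptions, not the study's pipeline.

```python
# Sketch: multiclass SVM decoding of 6 pseudo-words from voxel patterns.
# Random data stands in for per-trial activity patterns in one ROI.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_words = 36, 500, 6          # e.g. 6 runs x 6 words, assumed
X = rng.standard_normal((n_trials, n_voxels))     # stand-in for voxel patterns
y = np.repeat(np.arange(n_words), n_trials // n_words)

clf = SVC(kernel="linear", decision_function_shape="ovr")
scores = cross_val_score(clf, X, y, cv=6)         # rough leave-one-run-out analogue
print(f"mean accuracy {scores.mean():.2f} (chance = {1 / n_words:.2f})")
```

With the random patterns above, accuracy should hover near the 1/6 chance level; repeating the analysis per subregion (hemisphere, gyrus, depth, ventral/dorsal position) mirrors the comparison the study performs.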


Subject(s)
Brain-Computer Interfaces , Magnetic Resonance Imaging , Speech , Humans , Magnetic Resonance Imaging/methods , Male , Adult , Female , Speech/physiology , Young Adult , Electrodes, Implanted , Brain Mapping/methods
18.
J Speech Lang Hear Res ; 67(5): 1400-1412, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38573836

ABSTRACT

PURPOSE: We compare two signal smoothing and differentiation approaches: digital filtering with derivatives approximated by finite differences, an approach frequently used in the speech community, and spline smoothing, an approach widely used in other fields of human movement science. METHOD: In particular, we compare the values of a classic set of kinematic parameters estimated by the two smoothing approaches and assess, via regressions, how well these reconstructed values conform to known laws relating the parameters. RESULTS: Substantially smaller regression errors were observed for the spline smoothing approach than for the filtering approach. CONCLUSION: This result is in broad agreement with reports from other fields of movement science and supports the superiority of splines in the speech domain as well.
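The two pipelines under comparison can be sketched as follows: (a) low-pass Butterworth filtering followed by finite differences, and (b) a smoothing spline evaluated through its analytic derivative. The cutoff frequency, spline order, and smoothing factor below are illustrative assumptions, not the paper's settings, and the test signal is synthetic.

```python
# Sketch: two ways to smooth and differentiate a kinematic signal.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.interpolate import UnivariateSpline

fs = 400.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 5 * t) + 0.01 * rng.standard_normal(t.size)

# (a) digital filtering + finite differences
b, a = butter(4, 20 / (fs / 2))              # 4th-order low-pass, 20 Hz cutoff
x_filt = filtfilt(b, a, x)                   # zero-phase filtering
v_fd = np.gradient(x_filt, 1 / fs)           # derivative by finite differences

# (b) smoothing spline with analytic derivative
spl = UnivariateSpline(t, x, k=5, s=t.size * 1e-4)
v_spline = spl.derivative(1)(t)

# Compare both estimates against the known derivative of the clean signal
true_v = 2 * np.pi * 5 * np.cos(2 * np.pi * 5 * t)
for name, v in [("filter+diff", v_fd), ("spline", v_spline)]:
    print(name, "RMS error:", np.sqrt(np.mean((v - true_v) ** 2)))
```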


Subject(s)
Speech , Humans , Biomechanical Phenomena , Speech/physiology , Regression Analysis , Signal Processing, Computer-Assisted
19.
J Speech Lang Hear Res ; 67(5): 1424-1460, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38593006

ABSTRACT

PURPOSE: The oral structures such as the tongue and lips have remarkable somatosensory capacities, but understanding the roles of somatosensation in speech production requires a more comprehensive knowledge of somatosensation in the speech production system in its entirety, including the respiratory, laryngeal, and supralaryngeal subsystems. This review was conducted to summarize the system-wide somatosensory information available for speech production. METHOD: The search was conducted with PubMed/Medline and Google Scholar for articles published until November 2023. Numerous search terms were used in conducting the review, which covered the topics of psychophysics, basic and clinical behavioral research, neuroanatomy, and neuroscience. RESULTS AND CONCLUSIONS: The current understanding of speech somatosensation rests primarily on the two pillars of psychophysics and neuroscience. The confluence of polymodal afferent streams supports the development, maintenance, and refinement of speech production. Receptors are both canonical and noncanonical, with the latter occurring especially in the muscles innervated by the facial nerve. Somatosensory representation in the cortex is disproportionately large and provides for sensory interactions. Speech somatosensory function is robust over the lifespan, with possible declines in advanced aging. The understanding of somatosensation in speech disorders is largely disconnected from research and theory on speech production. A speech somatoscape is proposed as the generalized, system-wide sensation of speech production, with implications for speech development, speech motor control, and speech disorders.


Subject(s)
Speech , Humans , Speech/physiology , Lip/physiology , Tongue/physiology
20.
J Speech Lang Hear Res ; 67(5): 1370-1384, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38619435

ABSTRACT

OBJECTIVES: The study aimed to investigate the predictive potential of language environment and vocal development status measures obtained through integrated analysis of Language ENvironment Analysis (LENA) recordings during the prelinguistic stage for subsequent speech and language development in Korean-acquiring children. Specifically, this study explored whether measures from both LENA-automated analysis and human coding at 6-8 months and 12-14 months of age predict vocabulary and phonological development at 18-20 months. METHOD: One-day home recordings from 20 children were collected using a LENA recorder at 6-8 months, 12-14 months, and 18-20 months. Both LENA-automated measures and measures from human coding were obtained from the recordings at 6-8 months and 12-14 months. The number of different words, consonant inventory, and utterance structure inventory were identified from the recordings at 18-20 months. Correlation and multiple regression analyses were performed to investigate whether measures related to early language environment and child vocalization at 6-8 months and 12-14 months were predictive of vocabulary and phonological measures at 18-20 months. RESULTS: The results showed that the two main LENA-automated measures, conversational turn count (CTC) and child vocalization count, were positively correlated with all vocabulary and phonological measures at 18-20 months. Multiple regression analysis revealed that CTC during the prelinguistic stages was the most significant predictor of the number of different words, consonant inventory, and utterance structure inventory at 18-20 months. Also, adult word count among the LENA-automated measures, together with child-directed speech ratio and canonical babbling ratio measured by human coding, significantly predicted some vocabulary and phonological measures at 18-20 months. CONCLUSION: This study highlights the multifaceted nature of language acquisition, and the findings collectively emphasize the value of considering both quantitative and qualitative aspects of language input to understand early language development in children.
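A minimal sketch of the correlation/regression step described above: regressing an 18-20-month outcome such as the number of different words on prelinguistic LENA measures (e.g., conversational turn count). All values are simulated placeholders, not data from the study, and the variable names are assumptions for illustration.

```python
# Sketch: multiple regression of a later vocabulary measure on earlier
# LENA-automated measures. Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_children = 20
ctc = rng.poisson(400, n_children).astype(float)         # conversational turns/day
child_voc = rng.poisson(1500, n_children).astype(float)  # child vocalization count
ndw = 5 + 0.08 * ctc + rng.normal(0, 8, n_children)      # number of different words

X = sm.add_constant(np.column_stack([ctc, child_voc]))   # intercept + predictors
model = sm.OLS(ndw, X).fit()
print(model.summary())                                   # coefficients, R^2, p-values
```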


Subject(s)
Child Language , Language Development , Speech , Vocabulary , Humans , Male , Female , Infant , Speech/physiology , Phonetics , Speech Production Measurement/methods