Results 1 - 20 of 41
1.
bioRxiv ; 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38562883

ABSTRACT

Models of speech perception are centered around a hierarchy in which auditory representations in the thalamus propagate to primary auditory cortex, then to the lateral temporal cortex, and finally through dorsal and ventral pathways to sites in the frontal lobe. However, evidence for short latency speech responses and low-level spectrotemporal representations in frontal cortex raises the question of whether speech-evoked activity in frontal cortex strictly reflects downstream processing from lateral temporal cortex or whether there are direct parallel pathways from the thalamus or primary auditory cortex to the frontal lobe that supplement the traditional hierarchical architecture. Here, we used high-density direct cortical recordings, high-resolution diffusion tractography, and hemodynamic functional connectivity to evaluate for evidence of direct parallel inputs to frontal cortex from low-level areas. We found that neural populations in the frontal lobe show speech-evoked responses that are synchronous or occur earlier than responses in the lateral temporal cortex. These short latency frontal lobe neural populations encode spectrotemporal speech content indistinguishable from spectrotemporal encoding patterns observed in the lateral temporal lobe, suggesting parallel auditory speech representations reaching temporal and frontal cortex simultaneously. This is further supported by white matter tractography and functional connectivity patterns that connect the auditory nucleus of the thalamus (medial geniculate body) and the primary auditory cortex to the frontal lobe. Together, these results support the existence of a robust pathway of parallel inputs from low-level auditory areas to frontal lobe targets and illustrate long-range parallel architecture that works alongside the classical hierarchical speech network model.

2.
Sci Adv ; 10(7): eadk0010, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38363839

ABSTRACT

Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception varies along several pitch-based dimensions: (i) the absolute pitch of notes, (ii) the difference in pitch between successive notes, and (iii) the statistical expectation of each note given prior context. How the brain represents these dimensions and whether their encoding is specialized for music remains unknown. We recorded high-density neurophysiological activity directly from the human auditory cortex while participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial map for representing distinct melodic dimensions. The same participants listened to spoken English, and we compared responses to music and speech. Cortical sites selective for music encoded expectation, while sites that encoded pitch and pitch-change in music used the same neural code to represent equivalent properties of speech. Findings reveal how the perception of melody recruits both music-specific and general-purpose sound representations.


Subject(s)
Auditory Cortex , Music , Humans , Pitch Perception/physiology , Auditory Cortex/physiology , Brain/physiology , Language
3.
Nature ; 626(7999): 593-602, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38093008

ABSTRACT

Understanding the neural basis of speech perception requires that we study the human brain both at the scale of the fundamental computational unit of neurons and in their organization across the depth of cortex. Here we used high-density Neuropixels arrays1-3 to record from 685 neurons across cortical layers at nine sites in a high-level auditory region that is critical for speech, the superior temporal gyrus4,5, while participants listened to spoken sentences. Single neurons encoded a wide range of speech sound cues, including features of consonants and vowels, relative vocal pitch, onsets, amplitude envelope and sequence statistics. Neurons at each cross-laminar recording exhibited dominant tuning to a primary speech feature while also containing a substantial proportion of neurons that encoded other features contributing to heterogeneous selectivity. Spatially, neurons at similar cortical depths tended to encode similar speech features. Activity across all cortical layers was predictive of high-frequency field potentials (electrocorticography), providing a neuronal origin for macroelectrode recordings from the cortical surface. Together, these results establish single-neuron tuning across the cortical laminae as an important dimension of speech encoding in human superior temporal gyrus.


Subject(s)
Auditory Cortex , Neurons , Speech Perception , Temporal Lobe , Humans , Acoustic Stimulation , Auditory Cortex/cytology , Auditory Cortex/physiology , Neurons/physiology , Phonetics , Speech , Speech Perception/physiology , Temporal Lobe/cytology , Temporal Lobe/physiology , Cues , Electrodes
4.
bioRxiv ; 2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37905047

ABSTRACT

Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception of melody varies along several pitch-based dimensions: (1) the absolute pitch of notes, (2) the difference in pitch between successive notes, and (3) the higher-order statistical expectation of each note conditioned on its prior context. While humans readily perceive melody, how these dimensions are collectively represented in the brain and whether their encoding is specialized for music remains unknown. Here, we recorded high-density neurophysiological activity directly from the surface of human auditory cortex while Western participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial code for representing distinct dimensions of melody. The same participants listened to spoken English, and we compared evoked responses to music and speech. Cortical sites selective for music were systematically driven by the encoding of expectation. In contrast, sites that encoded pitch and pitch-change used the same neural code to represent equivalent properties of speech. These findings reveal the multidimensional nature of melody encoding, consisting of both music-specific and domain-general sound representations in auditory cortex. Teaser: The human brain contains both general-purpose and music-specific neural populations for processing distinct attributes of melody.

5.
Epilepsia ; 64(12): 3266-3278, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37753856

ABSTRACT

OBJECTIVE: Cognitive impairment often impacts quality of life in epilepsy even if seizures are controlled. Word-finding difficulty is particularly prevalent and often attributed to etiological (static, baseline) circuit alterations. We sought to determine whether interictal discharges convey significant superimposed contributions to word-finding difficulty in patients, and if so, through which cognitive mechanism(s). METHODS: Twenty-three patients undergoing intracranial monitoring for drug-resistant epilepsy participated in multiple tasks involving word production (auditory naming, short-term verbal free recall, repetition) to probe word-finding difficulty across different cognitive domains. We compared behavioral performance between trials with versus without interictal discharges across six major brain areas and adjusted for intersubject differences using mixed-effects models. We also evaluated for subjective word-finding difficulties through retrospective chart review. RESULTS: Subjective word-finding difficulty was reported by the majority (79%) of studied patients preoperatively. During intracranial recordings, interictal epileptiform discharges (IEDs) in the medial temporal lobe were associated with long-term lexicosemantic memory impairments as indexed by auditory naming (p = .009), in addition to their established impact on short-term verbal memory as indexed by free recall (p = .004). Interictal discharges involving the lateral temporal cortex and lateral frontal cortex were associated with delayed reaction time in the auditory naming task (p = .016 and p = .018), as well as phonological working memory impairments as indexed by repetition reaction time (p = .002). Effects of IEDs across anatomical regions were strongly dependent on their precise timing within the task. SIGNIFICANCE: IEDs appear to act through multiple cognitive mechanisms to form a convergent basis for the debilitating clinical word-finding difficulty reported by patients with epilepsy. This was particularly notable for medial temporal spikes, which are quite common in adult focal epilepsy. In parallel with the treatment of seizures, the modulation of interictal discharges through emerging pharmacological means and neurostimulation approaches may be an opportunity to help address devastating memory and language impairments in epilepsy.


Subject(s)
Epilepsy , Quality of Life , Adult , Humans , Retrospective Studies , Electroencephalography , Epilepsy/complications , Seizures/complications , Cognition/physiology
6.
J Speech Lang Hear Res ; 66(10): 3825-3843, 2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37652065

ABSTRACT

PURPOSE: Subthreshold transcutaneous auricular vagus nerve stimulation (taVNS) synchronized with behavioral training can selectively enhance nonnative speech category learning in adults. Prior work has demonstrated that behavioral performance increases when taVNS is paired with easier-to-learn Mandarin tone categories in native English listeners, relative to when taVNS is paired with harder-to-learn Mandarin tone categories or without taVNS. Mechanistically, this temporally precise plasticity has been attributed to noradrenergic modulation. However, prior work did not specifically utilize methodologies that indexed noradrenergic modulation and, therefore, was unable to explicitly test this hypothesis. Our goal for this study was to use pupillometry to gain mechanistic insights into taVNS behavioral effects. METHOD: Thirty-eight participants learned to categorize Mandarin tones while pupillometry was recorded. In a double-blinded design, participants were divided into two taVNS groups that, as in the prior study, differed according to whether taVNS was paired with easier-to-learn tones or harder-to-learn tones. Learning performance and pupillary responses were measured using linear mixed-effects models. RESULTS: We found that taVNS did not have any tone-specific or group behavioral or pupillary effects. However, in an exploratory analysis, we observed that taVNS did lead to faster rates of learning on trials paired with stimulation, particularly for those who were stimulated at lower amplitudes. CONCLUSIONS: Our results suggest that pupillary responses may not be a reliable marker of locus coeruleus-norepinephrine system activity in humans. However, future research should systematically examine the effects of stimulation amplitude on both behavior and pupillary responses. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.24036666.

7.
J Neurosurg Case Lessons ; 5(13)2023 Mar 27.
Article in English | MEDLINE | ID: mdl-37014023

ABSTRACT

BACKGROUND: Apraxia of speech is a disorder of speech-motor planning in which articulation is effortful and error-prone despite normal strength of the articulators. Phonological alexia and agraphia are disorders of reading and writing disproportionately affecting unfamiliar words. These disorders are almost always accompanied by aphasia. OBSERVATIONS: A 36-year-old woman underwent resection of a grade IV astrocytoma based in the left middle precentral gyrus, including a cortical site associated with speech arrest during electrocortical stimulation mapping. Following surgery, she exhibited moderate apraxia of speech and difficulty with reading and spelling, both of which improved but persisted 6 months after surgery. A battery of speech and language assessments was administered, revealing preserved comprehension, naming, cognition, and orofacial praxis, with largely isolated deficits in speech-motor planning and the spelling and reading of nonwords. LESSONS: This case describes a specific constellation of speech-motor and written language symptoms-apraxia of speech, phonological agraphia, and phonological alexia in the absence of aphasia-which the authors theorize may be attributable to disruption of a single process of "motor-phonological sequencing." The middle precentral gyrus may play an important role in the planning of motorically complex phonological sequences for production, independent of output modality.

8.
Neuron ; 110(15): 2409-2421.e3, 2022 08 03.
Article in English | MEDLINE | ID: mdl-35679860

ABSTRACT

The action potential is a fundamental unit of neural computation. Even though significant advances have been made in recording large numbers of individual neurons in animal models, translation of these methodologies to humans has been limited because of clinical constraints and electrode reliability. Here, we present a reliable method for intraoperative recording of dozens of neurons in humans using the Neuropixels probe, yielding up to ∼100 simultaneously recorded single units. Most single units were active within 1 min of reaching target depth. The motion of the electrode array had a strong inverse correlation with yield, identifying a major challenge and opportunity to further increase the probe utility. Cell pairs active close in time were spatially closer in most recordings, demonstrating the power to resolve complex cortical dynamics. Altogether, this approach provides access to population single-unit activity across the depth of human neocortex at scales previously only accessible in animal models.


Subject(s)
Neocortex , Neurons , Action Potentials/physiology , Electrodes , Electrodes, Implanted , Humans , Neurons/physiology , Reproducibility of Results
9.
Sci Rep ; 11(1): 22780, 2021 11 23.
Article in English | MEDLINE | ID: mdl-34815529

ABSTRACT

Vagus nerve stimulation (VNS) is being used increasingly to treat a wide array of diseases and disorders. This growth is driven in part by the putative ability to stimulate the nerve non-invasively. Despite decades of use and a rapidly expanding application space, we lack a complete understanding of the acute effects of VNS on human cortical neurophysiology. Here, we investigated cortical responses to sub-perceptual threshold cervical implanted (iVNS) and transcutaneous auricular (taVNS) vagus nerve stimulation using intracranial neurophysiological recordings in human epilepsy patients. To understand the areas that are modulated by VNS and how they differ depending on invasiveness and stimulation parameters, we compared VNS-evoked neural activity across a range of stimulation modalities, frequencies, and amplitudes. Using comparable stimulation parameters, both iVNS and taVNS caused subtle changes in low-frequency power across broad cortical networks, which were not the same across modalities and were highly variable across participants. However, within at least some individuals, it may be possible to elicit similar responses across modalities using distinct sets of stimulation parameters. These results demonstrate that both invasive and non-invasive VNS cause evoked changes in activity across a set of highly distributed cortical networks that are relevant to a diverse array of clinical, rehabilitative, and enhancement applications.


Subject(s)
Cerebral Cortex/physiology , Drug Resistant Epilepsy/therapy , Vagus Nerve Stimulation/methods , Vagus Nerve/physiology , Electroencephalography , Humans
10.
Proc Natl Acad Sci U S A ; 118(36)2021 09 07.
Article in English | MEDLINE | ID: mdl-34475209

ABSTRACT

Adults can learn to identify nonnative speech sounds with training, albeit with substantial variability in learning behavior. Increases in behavioral accuracy are associated with increased separability for sound representations in cortical speech areas. However, it remains unclear whether individual auditory neural populations all show the same types of changes with learning, or whether there are heterogeneous encoding patterns. Here, we used high-resolution direct neural recordings to examine local population response patterns, while native English listeners learned to recognize unfamiliar vocal pitch patterns in Mandarin Chinese tones. We found a distributed set of neural populations in bilateral superior temporal gyrus and ventrolateral frontal cortex, where the encoding of Mandarin tones changed throughout training as a function of trial-by-trial accuracy ("learning effect"), including both increases and decreases in the separability of tones. These populations were distinct from populations that showed changes as a function of exposure to the stimuli regardless of trial-by-trial accuracy. These learning effects were driven in part by more variable neural responses to repeated presentations of acoustically identical stimuli. Finally, learning effects could be predicted from speech-evoked activity even before training, suggesting that intrinsic properties of these populations make them amenable to behavior-related changes. Together, these results demonstrate that nonnative speech sound learning involves a wide array of changes in neural representations across a distributed set of brain regions.


Subject(s)
Frontal Lobe/physiology , Learning/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Brain/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Language , Male , Middle Aged , Phonetics , Pitch Perception/physiology , Speech/physiology , Speech Acoustics , Temporal Lobe/physiology
11.
Curr Biol ; 30(22): 4342-4351.e3, 2020 11 16.
Article in English | MEDLINE | ID: mdl-32888480

ABSTRACT

The fluent production of a signed language requires exquisite coordination of sensory, motor, and cognitive processes. Similar to speech production, language produced with the hands by fluent signers appears effortless but reflects the precise coordination of both large-scale and local cortical networks. The organization and representational structure of sensorimotor features underlying sign language phonology in these networks remains unknown. Here, we present a unique case study of high-density electrocorticography (ECoG) recordings from the cortical surface of a profoundly deaf signer during awake craniotomy. While neural activity was recorded from sensorimotor cortex, the participant produced a large variety of movements in linguistic and transitional movement contexts. We found that at both single electrode and neural population levels, high-gamma activity reflected tuning for particular hand, arm, and face movements, which were organized along dimensions that are relevant for phonology in sign language. Decoding of manual articulatory features revealed a clear functional organization and population dynamics for these highly practiced movements. Furthermore, neural activity clearly differentiated linguistic and transitional movements, demonstrating encoding of language-relevant articulatory features. These results provide a novel and unique view of the fine-scale dynamics of complex and meaningful sensorimotor actions.


Subject(s)
Sensorimotor Cortex/physiology , Sign Language , Brain Mapping/instrumentation , Brain Mapping/methods , Brain Neoplasms/surgery , Electric Stimulation , Electrocorticography/instrumentation , Electrocorticography/methods , Electrodes , Glioblastoma/surgery , Humans , Linguistics , Male , Middle Aged , Single-Case Studies as Topic , United States
12.
NPJ Sci Learn ; 5: 12, 2020.
Article in English | MEDLINE | ID: mdl-32802406

ABSTRACT

Adults struggle to learn non-native speech contrasts even after years of exposure. While laboratory-based training approaches yield learning, the optimal training conditions for maximizing speech learning in adulthood are currently unknown. Vagus nerve stimulation has been shown to prime adult sensory-perceptual systems towards plasticity in animal models. Precise temporal pairing with auditory stimuli can enhance auditory cortical representations with a high degree of specificity. Here, we examined whether sub-perceptual threshold transcutaneous vagus nerve stimulation (tVNS), paired with non-native speech sounds, enhances speech category learning in adults. Twenty-four native English-speakers were trained to identify non-native Mandarin tone categories. Across two groups, tVNS was paired with the tone categories that were easier- or harder-to-learn. A control group received no stimulation but followed an identical thresholding procedure as the intervention groups. We found that tVNS robustly enhanced speech category learning and retention of correct stimulus-response associations, but only when stimulation was paired with the easier-to-learn categories. This effect emerged rapidly, generalized to new exemplars, and was qualitatively different from the normal individual variability observed in hundreds of learners who have performed in the same task without stimulation. Electroencephalography recorded before and after training indicated no evidence of tVNS-induced changes in the sensory representation of auditory stimuli. These results suggest that paired-tVNS induces a temporally precise neuromodulatory signal that selectively enhances the perception and memory consolidation of perceptually salient categories.

13.
Front Hum Neurosci ; 14: 44, 2020.
Article in English | MEDLINE | ID: mdl-32194384

ABSTRACT

Intracranial electroencephalography (IEEG) involves recording from electrodes placed directly onto the cortical surface or deep brain locations. It is performed on patients with medically refractory epilepsy undergoing pre-surgical seizure localization. IEEG recordings, combined with advancements in computational capacity and analysis tools, have accelerated cognitive neuroscience. This Perspective describes a potential pitfall latent in many of these recordings by virtue of the subject population: interictal epileptiform discharges (IEDs), which can cause spurious results due to the contamination of normal neurophysiological signals by pathological waveforms related to epilepsy. We first discuss the nature of IED hazards and why they deserve the attention of neurophysiology researchers. We then describe four general strategies used when handling IEDs (manual identification, automated identification, manual-automated hybrids, and ignoring them by leaving them in the data), and discuss their pros, cons, and contextual factors. Finally, we describe current practices of human neurophysiology researchers worldwide based on a cross-sectional literature review and a voluntary survey. We put these results in the context of the listed strategies and make suggestions on improving awareness and clarity of reporting to enrich both data quality and communication in the field.

14.
Nat Commun ; 10(1): 3096, 2019 07 30.
Article in English | MEDLINE | ID: mdl-31363096

ABSTRACT

Natural communication often occurs in dialogue, differentially engaging auditory and sensorimotor brain regions during listening and speaking. However, previous attempts to decode speech directly from the human brain typically consider listening or speaking tasks in isolation. Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and to then decode the utterance's identity. Because certain answers were only plausible responses to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decode produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improves answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.


Subject(s)
Brain Mapping/methods , Cerebral Cortex/physiology , Speech/physiology , Brain-Computer Interfaces , Electrocorticography/instrumentation , Electrocorticography/methods , Electrodes, Implanted , Epilepsy/diagnosis , Epilepsy/physiopathology , Female , Humans , Time Factors
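
The contextual-integration step described in item 14 is, at its core, a Bayesian update: decoded question likelihoods are pushed through a question-to-answer plausibility table to form a prior over answers, which then reweights the answer decoder's likelihoods. The following is a minimal sketch of that idea only; all names and toy numbers are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def context_integrated_posterior(answer_lik, question_lik, plausibility):
    """Combine answer-decoder likelihoods with a context prior.

    answer_lik   : (A,) neural likelihood of each candidate answer
    question_lik : (Q,) neural likelihood of each candidate question
    plausibility : (Q, A) rows give P(answer | question)
    """
    # Context prior over answers: marginalize the question likelihoods
    # through the question-to-answer plausibility matrix.
    prior = question_lik @ plausibility
    prior = prior / prior.sum()
    # Bayes-style reweighting of the answer likelihoods.
    posterior = answer_lik * prior
    return posterior / posterior.sum()

# Toy example: question 0 is decoded as likely, and only answer 0
# is a plausible response to it.
q_lik = np.array([0.9, 0.1])
plaus = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.5, 0.5]])
a_lik = np.array([0.3, 0.5, 0.2])   # answer decoder alone favors answer 1
post = context_integrated_posterior(a_lik, q_lik, plaus)
# Context shifts the decision toward the contextually plausible answer 0.
```

With a flat plausibility matrix the prior is uninformative and the posterior reduces to the answer likelihoods alone, which is why context only helps when questions constrain the space of plausible answers.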
15.
Neuron ; 102(6): 1096-1110, 2019 06 19.
Article in English | MEDLINE | ID: mdl-31220442

ABSTRACT

The human superior temporal gyrus (STG) is critical for extracting meaningful linguistic features from speech input. Local neural populations are tuned to acoustic-phonetic features of all consonants and vowels and to dynamic cues for intonational pitch. These populations are embedded throughout broader functional zones that are sensitive to amplitude-based temporal cues. Beyond speech features, STG representations are strongly modulated by learned knowledge and perceptual goals. Currently, a major challenge is to understand how these features are integrated across space and time in the brain during natural speech comprehension. We present a theory that temporally recurrent connections within STG generate context-dependent phonological representations, spanning longer temporal sequences relevant for coherent percepts of syllables, words, and phrases.


Subject(s)
Phonetics , Speech Acoustics , Speech Perception/physiology , Temporal Lobe/physiology , Electrocorticography , Humans , Neural Pathways/physiology , Spatio-Temporal Analysis
16.
Cogn Neuropsychol ; 36(3-4): 158-166, 2019.
Article in English | MEDLINE | ID: mdl-29786470

ABSTRACT

Music and speech are human-specific behaviours that share numerous properties, including the fine motor skills required to produce them. Given these similarities, previous work has suggested that music and speech may at least partially share neural substrates. To date, much of this work has focused on perception, and has not investigated the neural basis of production, particularly in trained musicians. Here, we report two rare cases of musicians undergoing neurosurgical procedures, where it was possible to directly stimulate the left hemisphere cortex during speech and piano/guitar music production tasks. We found that stimulation to left inferior frontal cortex, including pars opercularis and ventral pre-central gyrus, caused slowing and arrest for both speech and music, and note sequence errors for music. Stimulation to posterior superior temporal cortex only caused production errors during speech. These results demonstrate partially dissociable networks underlying speech and music production, with a shared substrate in frontal regions.


Subject(s)
Brain Mapping/methods , Music/psychology , Speech/physiology , Temporal Lobe/physiopathology , Adolescent , Adult , Humans , Male
17.
Brain Lang ; 193: 58-72, 2019 06.
Article in English | MEDLINE | ID: mdl-27450996

ABSTRACT

Verbal repetition requires the coordination of auditory, memory, linguistic, and motor systems. To date, the basic dynamics of neural information processing in this deceptively simple behavior are largely unknown. Here, we examined the neural processes underlying verbal repetition using focal interruption (electrocortical stimulation) in 58 patients undergoing awake craniotomies, and neurophysiological recordings (electrocorticography) in 8 patients while they performed a single word repetition task. Electrocortical stimulation revealed that sub-components of the left peri-Sylvian network involved in single word repetition could be differentially interrupted, producing transient perceptual deficits, paraphasic errors, or speech arrest. Electrocorticography revealed the detailed spatio-temporal dynamics of cortical activation, involving a highly-ordered, but overlapping temporal progression of cortical high gamma (75-150Hz) activity throughout the peri-Sylvian cortex. We observed functionally distinct serial and parallel cortical processing corresponding to successive stages of general auditory processing (posterior superior temporal gyrus), speech-specific auditory processing (middle and posterior superior temporal gyrus), working memory (inferior frontal cortex), and motor articulation (sensorimotor cortex). Together, these methods reveal the dynamics of coordinated activity across peri-Sylvian cortex during verbal repetition.


Subject(s)
Cerebral Cortex/physiology , Electrocorticography/methods , Nerve Net/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Aged , Brain Mapping/methods , Cerebral Cortex/diagnostic imaging , Cognition/physiology , Cohort Studies , Electric Stimulation/instrumentation , Electric Stimulation/methods , Electrocorticography/instrumentation , Female , Humans , Male , Middle Aged , Nerve Net/diagnostic imaging , Speech Perception/physiology
18.
Cell ; 174(1): 21-31.e9, 2018 06 28.
Article in English | MEDLINE | ID: mdl-29958109

ABSTRACT

In speech, the highly flexible modulation of vocal pitch creates intonation patterns that speakers use to convey linguistic meaning. This human ability is unique among primates. Here, we used high-density cortical recordings directly from the human brain to determine the encoding of vocal pitch during natural speech. We found neural populations in bilateral dorsal laryngeal motor cortex (dLMC) that selectively encoded produced pitch but not non-laryngeal articulatory movements. This neural population controlled short pitch accents to express prosodic emphasis on a word in a sentence. Other larynx cortical representations controlling voicing and longer pitch phrase contours were found at separate sites. dLMC sites also encoded vocal pitch during a non-speech singing task. Finally, direct focal stimulation of dLMC evoked laryngeal movements and involuntary vocalization, confirming its causal role in feedforward control. Together, these results reveal the neural basis for the voluntary control of vocal pitch in human speech. VIDEO ABSTRACT.


Subject(s)
Larynx/physiology , Motor Cortex/physiology , Speech , Adolescent , Adult , Brain Mapping , Electrocorticography , Female , Humans , Male , Middle Aged , Models, Biological , Young Adult
19.
J Neurosci ; 38(12): 2955-2966, 2018 03 21.
Article in English | MEDLINE | ID: mdl-29439164

ABSTRACT

During speech production, we make vocal tract movements with remarkable precision and speed. Our understanding of how the human brain achieves such proficient control is limited, in part due to the challenge of simultaneously acquiring high-resolution neural recordings and detailed vocal tract measurements. To overcome this challenge, we combined ultrasound and video monitoring of the supralaryngeal articulators (lips, jaw, and tongue) with electrocorticographic recordings from the cortical surface of 4 subjects (3 female, 1 male) to investigate how neural activity in the ventral sensory-motor cortex (vSMC) relates to measured articulator movement kinematics (position, speed, velocity, acceleration) during the production of English vowels. We found that high-gamma activity at many individual vSMC electrodes strongly encoded the kinematics of one or more articulators, but less so for vowel formants and vowel identity. Neural population decoding methods further revealed the structure of kinematic features that distinguish vowels. Encoding of articulator kinematics was sparsely distributed across time and primarily occurred during the time of vowel onset and offset. In contrast, encoding was low during the steady-state portion of the vowel, despite sustained neural activity at some electrodes. Significant representations were found for all kinematic parameters, but speed was the most robust. These findings, enabled by direct vocal tract monitoring, provide novel insights into the representation of articulatory kinematic parameters encoded in the vSMC during speech production. SIGNIFICANCE STATEMENT: Speaking requires precise control and coordination of the vocal tract articulators (lips, jaw, and tongue). Despite the impressive proficiency with which humans move these articulators during speech production, our understanding of how the brain achieves such control is rudimentary, in part because the movements themselves are difficult to observe. By simultaneously measuring speech movements and the neural activity that gives rise to them, we demonstrate how neural activity in sensorimotor cortex produces complex, coordinated movements of the vocal tract.


Subject(s)
Jaw/physiology , Lip/physiology , Movement/physiology , Sensorimotor Cortex/physiology , Speech/physiology , Tongue/physiology , Adult , Biomechanical Phenomena , Female , Humans , Male
20.
Brain Lang ; 187: 83-91, 2018 12.
Article in English | MEDLINE | ID: mdl-29397190

ABSTRACT

Auditory speech comprehension is the result of neural computations that occur in a broad network that includes the temporal lobe auditory cortex and the left inferior frontal cortex. It remains unclear how representations in this network differentially contribute to speech comprehension. Here, we recorded high-density direct cortical activity during a sine-wave speech (SWS) listening task to examine detailed neural speech representations when the exact same acoustic input is comprehended versus not comprehended. Listeners heard SWS sentences (pre-exposure), followed by clear versions of the same sentences, which revealed the content of the sounds (exposure), and then the same SWS sentences again (post-exposure). Across all three task phases, high-gamma neural activity in the superior temporal gyrus was similar, distinguishing different words based on bottom-up acoustic features. In contrast, frontal regions showed a more pronounced and sudden increase in activity only when the input was comprehended, which corresponded with stronger representational separability among spatiotemporal activity patterns evoked by different words. We observed this effect only in participants who were not able to comprehend the stimuli during the pre-exposure phase, indicating a relationship between frontal high-gamma activity and speech understanding. Together, these results demonstrate that both frontal and temporal cortical networks are involved in spoken language understanding, and that under certain listening conditions, frontal regions are involved in discriminating speech sounds.


Subject(s)
Frontal Lobe/physiology , Speech Intelligibility , Speech Perception , Temporal Lobe/physiology , Adult , Connectome , Female , Humans , Male
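
The "representational separability" invoked in item 20 can be operationalized in several ways; one simple choice is the ratio of mean between-word to mean within-word distances among evoked activity patterns. The sketch below assumes that definition; the function name and toy data are hypothetical, not the authors' metric:

```python
import numpy as np
from itertools import combinations

def separability(patterns, labels):
    """Ratio of mean between-class to mean within-class Euclidean distance.

    Values well above 1 indicate that patterns evoked by different labels
    (e.g., words) are more distinct than repeats of the same label.
    """
    patterns = np.asarray(patterns, dtype=float)
    labels = list(labels)
    within, between = [], []
    for i, j in combinations(range(len(labels)), 2):
        d = np.linalg.norm(patterns[i] - patterns[j])
        (within if labels[i] == labels[j] else between).append(d)
    return float(np.mean(between) / np.mean(within))

# Toy patterns: two repeats each of two "words", well separated in feature space.
X = [[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]]
y = ["word_a", "word_a", "word_b", "word_b"]
sep = separability(X, y)   # far greater than 1 for these separated clusters
```

A ratio near 1 would indicate that different words evoke patterns no more distinct than trial-to-trial noise, which is the contrast the abstract draws between comprehended and non-comprehended listening.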