Results 1 - 20 of 41
1.
Cell ; 174(1): 21-31.e9, 2018 06 28.
Article in English | MEDLINE | ID: mdl-29958109

ABSTRACT

In speech, the highly flexible modulation of vocal pitch creates intonation patterns that speakers use to convey linguistic meaning. This human ability is unique among primates. Here, we used high-density cortical recordings directly from the human brain to determine the encoding of vocal pitch during natural speech. We found neural populations in bilateral dorsal laryngeal motor cortex (dLMC) that selectively encoded produced pitch but not non-laryngeal articulatory movements. This neural population controlled short pitch accents to express prosodic emphasis on a word in a sentence. Other larynx cortical representations controlling voicing and longer pitch phrase contours were found at separate sites. dLMC sites also encoded vocal pitch during a non-speech singing task. Finally, direct focal stimulation of dLMC evoked laryngeal movements and involuntary vocalization, confirming its causal role in feedforward control. Together, these results reveal the neural basis for the voluntary control of vocal pitch in human speech. VIDEO ABSTRACT.


Subjects
Larynx/physiology; Motor Cortex/physiology; Speech; Adolescent; Adult; Brain Mapping; Electrocorticography; Female; Humans; Male; Middle Aged; Models, Biological; Young Adult
2.
Nature ; 626(7999): 593-602, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38093008

ABSTRACT

Understanding the neural basis of speech perception requires that we study the human brain both at the scale of the fundamental computational unit of neurons and in their organization across the depth of cortex. Here we used high-density Neuropixels arrays to record from 685 neurons across cortical layers at nine sites in a high-level auditory region that is critical for speech, the superior temporal gyrus, while participants listened to spoken sentences. Single neurons encoded a wide range of speech sound cues, including features of consonants and vowels, relative vocal pitch, onsets, amplitude envelope and sequence statistics. Each cross-laminar recording exhibited dominant tuning to a primary speech feature while also containing a substantial proportion of neurons that encoded other features, contributing to heterogeneous selectivity. Spatially, neurons at similar cortical depths tended to encode similar speech features. Activity across all cortical layers was predictive of high-frequency field potentials (electrocorticography), providing a neuronal origin for macroelectrode recordings from the cortical surface. Together, these results establish single-neuron tuning across the cortical laminae as an important dimension of speech encoding in the human superior temporal gyrus.


Subjects
Auditory Cortex; Neurons; Speech Perception; Temporal Lobe; Humans; Acoustic Stimulation; Auditory Cortex/cytology; Auditory Cortex/physiology; Neurons/physiology; Phonetics; Speech; Speech Perception/physiology; Temporal Lobe/cytology; Temporal Lobe/physiology; Cues; Electrodes
3.
Proc Natl Acad Sci U S A ; 118(36)2021 09 07.
Article in English | MEDLINE | ID: mdl-34475209

ABSTRACT

Adults can learn to identify nonnative speech sounds with training, albeit with substantial variability in learning behavior. Increases in behavioral accuracy are associated with increased separability for sound representations in cortical speech areas. However, it remains unclear whether individual auditory neural populations all show the same types of changes with learning, or whether there are heterogeneous encoding patterns. Here, we used high-resolution direct neural recordings to examine local population response patterns, while native English listeners learned to recognize unfamiliar vocal pitch patterns in Mandarin Chinese tones. We found a distributed set of neural populations in bilateral superior temporal gyrus and ventrolateral frontal cortex, where the encoding of Mandarin tones changed throughout training as a function of trial-by-trial accuracy ("learning effect"), including both increases and decreases in the separability of tones. These populations were distinct from populations that showed changes as a function of exposure to the stimuli regardless of trial-by-trial accuracy. These learning effects were driven in part by more variable neural responses to repeated presentations of acoustically identical stimuli. Finally, learning effects could be predicted from speech-evoked activity even before training, suggesting that intrinsic properties of these populations make them amenable to behavior-related changes. Together, these results demonstrate that nonnative speech sound learning involves a wide array of changes in neural representations across a distributed set of brain regions.


Subjects
Frontal Lobe/physiology; Learning/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Brain/physiology; Evoked Potentials, Auditory/physiology; Female; Humans; Language; Male; Middle Aged; Phonetics; Pitch Perception/physiology; Speech/physiology; Speech Acoustics; Temporal Lobe/physiology
4.
Epilepsia ; 64(12): 3266-3278, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37753856

ABSTRACT

OBJECTIVE: Cognitive impairment often impacts quality of life in epilepsy even if seizures are controlled. Word-finding difficulty is particularly prevalent and often attributed to etiological (static, baseline) circuit alterations. We sought to determine whether interictal discharges convey significant superimposed contributions to word-finding difficulty in patients, and if so, through which cognitive mechanism(s). METHODS: Twenty-three patients undergoing intracranial monitoring for drug-resistant epilepsy participated in multiple tasks involving word production (auditory naming, short-term verbal free recall, repetition) to probe word-finding difficulty across different cognitive domains. We compared behavioral performance between trials with versus without interictal discharges across six major brain areas and adjusted for intersubject differences using mixed-effects models. We also evaluated for subjective word-finding difficulties through retrospective chart review. RESULTS: Subjective word-finding difficulty was reported by the majority (79%) of studied patients preoperatively. During intracranial recordings, interictal epileptiform discharges (IEDs) in the medial temporal lobe were associated with long-term lexicosemantic memory impairments as indexed by auditory naming (p = .009), in addition to their established impact on short-term verbal memory as indexed by free recall (p = .004). Interictal discharges involving the lateral temporal cortex and lateral frontal cortex were associated with delayed reaction time in the auditory naming task (p = .016 and p = .018), as well as phonological working memory impairments as indexed by repetition reaction time (p = .002). Effects of IEDs across anatomical regions were strongly dependent on their precise timing within the task. SIGNIFICANCE: IEDs appear to act through multiple cognitive mechanisms to form a convergent basis for the debilitating clinical word-finding difficulty reported by patients with epilepsy. 
This was particularly notable for medial temporal spikes, which are quite common in adult focal epilepsy. In parallel with the treatment of seizures, the modulation of interictal discharges through emerging pharmacological means and neurostimulation approaches may be an opportunity to help address devastating memory and language impairments in epilepsy.
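The study's mixed-effects models are beyond a short sketch, but the core adjustment they provide — comparing trials with versus without interictal discharges within each patient before aggregating, so that baseline intersubject differences cancel — can be illustrated with a simple paired contrast. All function names and reaction-time values below are hypothetical, not the paper's actual data or pipeline:

```python
import numpy as np

def within_subject_effect(rt_ied, rt_clean):
    """Per-subject mean reaction-time difference (IED trials minus clean trials),
    then a one-sample t-statistic on those differences across subjects.
    Differencing within subject removes each subject's baseline speed, the same
    role the random intercepts play in a mixed-effects model."""
    diffs = np.array([np.mean(a) - np.mean(b) for a, b in zip(rt_ied, rt_clean)])
    t = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(len(diffs)))
    return diffs, t

# hypothetical naming reaction times (seconds) for 4 subjects
rt_ied = [[1.9, 2.1], [2.4, 2.6], [1.5, 1.7], [2.0, 2.2]]
rt_clean = [[1.6, 1.8], [2.1, 2.3], [1.3, 1.5], [1.8, 2.0]]
diffs, t = within_subject_effect(rt_ied, rt_clean)
print(diffs)  # every subject is slower on IED trials
```

A full analysis would instead fit a mixed-effects model (trial-level outcomes, random effects per subject), which additionally handles unbalanced trial counts and covariates.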


Subjects
Epilepsy; Quality of Life; Adult; Humans; Retrospective Studies; Electroencephalography; Epilepsy/complications; Seizures/complications; Cognition/physiology
5.
J Neurosci ; 38(12): 2955-2966, 2018 03 21.
Article in English | MEDLINE | ID: mdl-29439164

ABSTRACT

During speech production, we make vocal tract movements with remarkable precision and speed. Our understanding of how the human brain achieves such proficient control is limited, in part due to the challenge of simultaneously acquiring high-resolution neural recordings and detailed vocal tract measurements. To overcome this challenge, we combined ultrasound and video monitoring of the supralaryngeal articulators (lips, jaw, and tongue) with electrocorticographic recordings from the cortical surface of 4 subjects (3 female, 1 male) to investigate how neural activity in the ventral sensory-motor cortex (vSMC) relates to measured articulator movement kinematics (position, speed, velocity, acceleration) during the production of English vowels. We found that high-gamma activity at many individual vSMC electrodes strongly encoded the kinematics of one or more articulators, but less so vowel formants and vowel identity. Neural population decoding methods further revealed the structure of kinematic features that distinguish vowels. Encoding of articulator kinematics was sparsely distributed across time and primarily occurred around vowel onset and offset. In contrast, encoding was low during the steady-state portion of the vowel, despite sustained neural activity at some electrodes. Significant representations were found for all kinematic parameters, but speed was the most robust. These findings, enabled by direct vocal tract monitoring, provide novel insights into how articulatory kinematic parameters are encoded in the vSMC during speech production.

SIGNIFICANCE STATEMENT: Speaking requires precise control and coordination of the vocal tract articulators (lips, jaw, and tongue). Despite the impressive proficiency with which humans move these articulators during speech production, our understanding of how the brain achieves such control is rudimentary, in part because the movements themselves are difficult to observe. By simultaneously measuring speech movements and the neural activity that gives rise to them, we demonstrate how neural activity in sensorimotor cortex produces complex, coordinated movements of the vocal tract.


Subjects
Jaw/physiology; Lip/physiology; Movement/physiology; Sensorimotor Cortex/physiology; Speech/physiology; Tongue/physiology; Adult; Biomechanical Phenomena; Female; Humans; Male
6.
Cogn Neuropsychol ; 36(3-4): 158-166, 2019.
Article in English | MEDLINE | ID: mdl-29786470

ABSTRACT

Music and speech are human-specific behaviours that share numerous properties, including the fine motor skills required to produce them. Given these similarities, previous work has suggested that music and speech may at least partially share neural substrates. To date, much of this work has focused on perception, and has not investigated the neural basis of production, particularly in trained musicians. Here, we report two rare cases of musicians undergoing neurosurgical procedures, where it was possible to directly stimulate the left hemisphere cortex during speech and piano/guitar music production tasks. We found that stimulation to left inferior frontal cortex, including pars opercularis and ventral pre-central gyrus, caused slowing and arrest for both speech and music, and note sequence errors for music. Stimulation to posterior superior temporal cortex only caused production errors during speech. These results demonstrate partially dissociable networks underlying speech and music production, with a shared substrate in frontal regions.


Subjects
Brain Mapping/methods; Music/psychology; Speech/physiology; Temporal Lobe/physiopathology; Adolescent; Adult; Humans; Male
7.
Cereb Cortex ; 28(12): 4222-4233, 2018 12 01.
Article in English | MEDLINE | ID: mdl-29088345

ABSTRACT

Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from an epileptic patient with proficient music ability in 2 conditions. First, the participant played 2 piano pieces on an electronic piano with the sound volume of the digital keyboard on. Second, the participant replayed the same piano pieces, but without auditory feedback, and the participant was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, thus allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery with substantial, but not complete overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception.
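The encoding-model approach described here — predicting high-gamma activity from the sound spectrogram — can be sketched as ridge regression over time-lagged spectrogram features, a standard way to estimate a spectrotemporal receptive field. This is a toy illustration with simulated data and hypothetical function names, not the study's actual pipeline or parameters:

```python
import numpy as np

def build_lagged_features(spectrogram, n_lags):
    """Stack time-lagged copies of the spectrogram (time x freq) so each row
    holds the recent spectrotemporal history of the stimulus."""
    n_t, n_f = spectrogram.shape
    X = np.zeros((n_t, n_lags * n_f))
    for lag in range(n_lags):
        X[lag:, lag * n_f:(lag + 1) * n_f] = spectrogram[:n_t - lag]
    return X

def fit_strf(spectrogram, high_gamma, n_lags=10, alpha=1.0):
    """Ridge regression mapping lagged spectrogram features to neural activity;
    the reshaped weights form a lags-by-frequencies receptive field."""
    X = build_lagged_features(spectrogram, n_lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    w = np.linalg.solve(XtX, X.T @ high_gamma)
    return w.reshape(n_lags, spectrogram.shape[1])

# toy demo: a neural trace driven by one frequency band at a short delay
rng = np.random.default_rng(0)
spec = rng.random((500, 8))
hg = np.roll(spec[:, 3], 2)  # responds to band 3, delayed by 2 frames
strf = fit_strf(spec, hg)
print(strf.shape)  # (10, 8); the peak weight sits at lag 2, band 3
```

The fitted receptive field recovers the planted dependency, which is the logic behind comparing frequency tuning between the perception and imagery conditions.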


Subjects
Auditory Perception/physiology; Cerebral Cortex/physiology; Gamma Rhythm; Imagination/physiology; Music; Neurons/physiology; Acoustic Stimulation; Brain Mapping/methods; Evoked Potentials, Auditory; Feedback, Sensory; Humans
8.
Neuroimage ; 153: 273-282, 2017 06.
Article in English | MEDLINE | ID: mdl-28396294

ABSTRACT

Direct intracranial recording of human brain activity is an important approach for deciphering neural mechanisms of cognition. Such recordings, usually made in patients with epilepsy undergoing inpatient monitoring for seizure localization, are limited in duration and depend on patients' tolerance for the challenges associated with recovering from brain surgery. Thus, typical intracranial recordings, similar to most non-invasive approaches in humans, provide snapshots of brain activity in acute, highly constrained settings, limiting opportunities to understand long timescale and natural, real-world phenomena. A new device for treating some forms of drug-resistant epilepsy, the NeuroPace RNS® System, includes a cranially-implanted neurostimulator and intracranial electrodes that continuously monitor brain activity and respond to incipient seizures with electrical counterstimulation. The RNS System can record epileptic brain activity over years, but whether it can record meaningful, behavior-related physiological responses has not been demonstrated. Here, in a human subject with electrodes implanted over high-level speech-auditory cortex (Wernicke's area; posterior superior temporal gyrus), we report that cortical evoked responses to spoken sentences are robust, selective to phonetic features, and stable over nearly 1.5 years. In a second subject with RNS System electrodes implanted over frontal cortex (Broca's area, posterior inferior frontal gyrus), we found that word production during a naming task reliably evokes cortical responses preceding speech onset. The spatiotemporal resolution, high signal-to-noise, and wireless nature of this system's intracranial recordings make it a powerful new approach to investigate the neural correlates of human cognition over long timescales in natural ambulatory settings.


Subjects
Electroencephalography/methods; Evoked Potentials; Speech Perception/physiology; Temporal Lobe/physiology; Adolescent; Adult; Electrodes, Implanted; Female; Gamma Rhythm; Humans; Implantable Neurostimulators; Telemetry; Wireless Technology
9.
Cereb Cortex ; 26(3): 1015-26, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25410427

ABSTRACT

One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience.


Subjects
Brain/growth & development; Brain/physiopathology; Deafness/physiopathology; Language Development; Sign Language; Adolescent; Brain Mapping; Critical Period, Psychological; Deafness/psychology; Deafness/rehabilitation; Female; Humans; Longitudinal Studies; Magnetoencephalography; Male; Repetition Priming/physiology; Semantics; Visual Perception/physiology
10.
J Neurosci ; 35(18): 7203-14, 2015 May 06.
Article in English | MEDLINE | ID: mdl-25948269

ABSTRACT

Sensory processing involves identification of stimulus features, but also integration with the surrounding sensory and cognitive context. Previous work in animals and humans has shown fine-scale sensitivity to context in the form of learned knowledge about the statistics of the sensory environment, including relative probabilities of discrete units in a stream of sequential auditory input. These statistics are a defining characteristic of one of the most important sequential signals humans encounter: speech. For speech, extensive exposure to a language tunes listeners to the statistics of sound sequences. To address how speech sequence statistics are neurally encoded, we used high-resolution direct cortical recordings from human lateral superior temporal cortex as subjects listened to words and nonwords with varying transition probabilities between sound segments. In addition to their sensitivity to acoustic features (including contextual features, such as coarticulation), we found that neural responses dynamically encoded the language-level probability of both preceding and upcoming speech sounds. Transition probability first negatively modulated neural responses, followed by positive modulation of neural responses, consistent with coordinated predictive and retrospective recognition processes, respectively. Furthermore, transition probability encoding was different for real English words compared with nonwords, providing evidence for online interactions with high-order linguistic knowledge. These results demonstrate that sensory processing of deeply learned stimuli involves integrating physical stimulus features with their contextual sequential structure. Despite not being consciously aware of phoneme sequence statistics, listeners use this information to process spoken input and to link low-level acoustic representations with linguistic information about word identity and meaning.
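The transition probabilities at the center of this design are, in their simplest form, bigram statistics over phoneme sequences: the probability of each speech sound given the preceding one, estimated from a lexicon. A minimal sketch, using a hypothetical toy lexicon and made-up phoneme labels rather than any real corpus:

```python
from collections import Counter, defaultdict

def transition_probs(words):
    """Estimate P(next phoneme | current phoneme) from phoneme sequences
    by counting adjacent pairs and normalizing per preceding phoneme."""
    pair_counts = defaultdict(Counter)
    for phonemes in words:
        for a, b in zip(phonemes, phonemes[1:]):
            pair_counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in pair_counts.items()}

# toy lexicon as phoneme sequences (illustrative transcriptions)
lexicon = [
    ["s", "t", "r", "ii", "t"],   # "street"
    ["s", "t", "aa", "r"],        # "star"
    ["s", "p", "ii", "k"],        # "speak"
]
probs = transition_probs(lexicon)
print(probs["s"])  # "t" follows "s" twice as often as "p" in this toy lexicon
```

In the study, probabilities like these (computed over a real lexicon) were used as regressors against the neural responses, separately for preceding and upcoming sounds.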


Subjects
Acoustic Stimulation/methods; Speech Perception/physiology; Speech/physiology; Temporal Lobe/physiology; Auditory Perception/physiology; Electrodes, Implanted; Evoked Potentials, Auditory/physiology; Female; Humans; Male; Retrospective Studies
11.
Cereb Cortex ; 24(10): 2772-83, 2014 Oct.
Article in English | MEDLINE | ID: mdl-23696277

ABSTRACT

The relation between the timing of language input and development of neural organization for language processing in adulthood has been difficult to tease apart because language is ubiquitous in the environment of nearly all infants. However, within the congenitally deaf population are individuals who do not experience language until after early childhood. Here, we investigated the neural underpinnings of American Sign Language (ASL) in 2 adolescents who had no sustained language input until they were approximately 14 years old. Using anatomically constrained magnetoencephalography, we found that recently learned signed words mainly activated right superior parietal, anterior occipital, and dorsolateral prefrontal areas in these 2 individuals. This spatiotemporal activity pattern was significantly different from the left fronto-temporal pattern observed in young deaf adults who acquired ASL from birth, and from that of hearing young adults learning ASL as a second language for a similar length of time as the cases. These results provide direct evidence that the timing of language experience over human development affects the organization of neural language processing.


Subjects
Cerebral Cortex/physiology; Language Development; Sign Language; Adolescent; Adult; Age Factors; Critical Period, Psychological; Deafness; Female; Functional Laterality; Humans; Learning/physiology; Magnetoencephalography; Male; Semantics; Young Adult
12.
Cereb Cortex ; 24(7): 1948-55, 2014 Jul.
Article in English | MEDLINE | ID: mdl-23448869

ABSTRACT

Recently, our laboratory has shown that the neural mechanisms for encoding lexico-semantic information in adults operate functionally by 12-18 months of age within left frontotemporal cortices (Travis et al., 2011. Spatiotemporal neural dynamics of word understanding in 12- to 18-month-old infants. Cereb Cortex. 8:1832-1839). However, there is minimal knowledge of the structural changes that occur within these and other cortical regions important for language development. To identify regional structural changes taking place during this important period in infant development, we examined age-related changes in tissue signal properties of gray matter (GM) and white matter (WM) intensity and contrast. T1-weighted surface-based measures were acquired from 12- to 19-month-old infants and analyzed using a general linear model. Significant age effects were observed for GM and WM intensity and contrast within bilateral inferior lateral and anteroventral temporal regions, dorsomedial frontal, and superior parietal cortices. Region of interest (ROI) analyses revealed that GM and WM intensity and contrast significantly increased with age within the same left lateral temporal regions shown to generate lexico-semantic activity in infants and adults. These findings suggest that neurophysiological processes supporting linguistic and cognitive behaviors may develop before cellular and structural maturation is complete within associative cortices. These results have important implications for understanding the neurobiological mechanisms relating structural to functional brain development.


Subjects
Aging; Cerebral Cortex/physiology; Comprehension/physiology; Language Development; Vocabulary; Brain Mapping; Female; Humans; Image Processing, Computer-Assisted; Infant; Magnetic Resonance Imaging; Male
13.
Cereb Cortex ; 24(10): 2679-93, 2014 Oct.
Article in English | MEDLINE | ID: mdl-23680841

ABSTRACT

How the brain extracts words from auditory signals is an unanswered question. We recorded approximately 150 single units and multi-units from the left anterior superior temporal gyrus of a patient during multiple auditory experiments. Against low background activity, 45% of units robustly fired to particular spoken words with little or no response to pure tones, noise-vocoded speech, or environmental sounds. Many units were tuned to complex but specific sets of phonemes, which were influenced by local context but invariant to speaker, and suppressed during self-produced speech. The firing of several units to specific visual letters was correlated with their response to the corresponding auditory phonemes, providing the first direct neural evidence for phonological recoding during reading. Maximal decoding of individual phoneme and word identities was attained using firing rates from approximately 5 neurons within 200 ms after word onset. Thus, neurons in the human superior temporal gyrus use sparse, spatially organized population encoding of complex acoustic-phonetic features to help recognize auditory and visual words.


Subjects
Neurons/physiology; Speech Perception/physiology; Temporal Lobe/physiology; Acoustic Stimulation; Adult; Humans; Male; Phonetics
14.
Cereb Cortex ; 23(10): 2370-9, 2013 Oct.
Article in English | MEDLINE | ID: mdl-22875868

ABSTRACT

We combined magnetoencephalography (MEG) with magnetic resonance imaging and electrocorticography to separate in anatomy and latency 2 fundamental stages underlying speech comprehension. The first acoustic-phonetic stage is selective for words relative to control stimuli individually matched on acoustic properties. It begins ∼60 ms after stimulus onset and is localized to middle superior temporal cortex. It was replicated in another experiment, but is strongly dissociated from the response to tones in the same subjects. Within the same task, semantic priming of the same words by a related picture modulates cortical processing in a broader network, but this does not begin until ∼217 ms. The earlier onset of acoustic-phonetic processing compared with lexico-semantic modulation was significant in each individual subject. The MEG source estimates were confirmed with intracranial local field potential and high gamma power responses acquired in 2 additional subjects performing the same task. These recordings further identified sites within superior temporal cortex that responded only to the acoustic-phonetic contrast at short latencies, or the lexico-semantic at long. The independence of the early acoustic-phonetic response from semantic context suggests a limited role for lexical feedback in early speech perception.


Subjects
Brain/physiology; Speech Perception/physiology; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Magnetoencephalography; Male; Young Adult
15.
Sci Adv ; 10(7): eadk0010, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38363839

ABSTRACT

Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception varies along several pitch-based dimensions: (i) the absolute pitch of notes, (ii) the difference in pitch between successive notes, and (iii) the statistical expectation of each note given prior context. How the brain represents these dimensions and whether their encoding is specialized for music remains unknown. We recorded high-density neurophysiological activity directly from the human auditory cortex while participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial map for representing distinct melodic dimensions. The same participants listened to spoken English, and we compared responses to music and speech. Cortical sites selective for music encoded expectation, while sites that encoded pitch and pitch-change in music used the same neural code to represent equivalent properties of speech. Findings reveal how the perception of melody recruits both music-specific and general-purpose sound representations.
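The third dimension — the statistical expectation of each note given prior context — can be approximated by surprisal under a simple bigram model of pitch sequences. This sketch uses a hypothetical toy corpus and add-one smoothing; the study's actual expectation model is likely richer than a bigram:

```python
import math
from collections import Counter, defaultdict

def note_surprisal(phrases, test_phrase):
    """Surprisal -log2 P(note | previous note), with add-one smoothing over
    the pitch alphabet: high values mark notes that defy expectation."""
    counts = defaultdict(Counter)
    alphabet = {n for p in phrases for n in p}
    for p in phrases:
        for a, b in zip(p, p[1:]):
            counts[a][b] += 1
    out = []
    for a, b in zip(test_phrase, test_phrase[1:]):
        total = sum(counts[a].values()) + len(alphabet)
        out.append(-math.log2((counts[a][b] + 1) / total))
    return out

# toy corpus of pitch sequences (MIDI note numbers, illustrative only)
corpus = [[60, 62, 64, 62, 60], [60, 62, 64, 65, 64]]
s = note_surprisal(corpus, [60, 62, 64, 62])
print(s)  # the frequent 60->62 transition is less surprising than 64->62
```

Quantities like these, computed note by note, are what "expectation-selective" cortical sites would track, in contrast to sites tracking absolute pitch or pitch change.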


Subjects
Auditory Cortex; Music; Humans; Pitch Perception/physiology; Auditory Cortex/physiology; Brain/physiology; Language
16.
bioRxiv ; 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38562883

ABSTRACT

Models of speech perception are centered around a hierarchy in which auditory representations in the thalamus propagate to primary auditory cortex, then to the lateral temporal cortex, and finally through dorsal and ventral pathways to sites in the frontal lobe. However, evidence for short latency speech responses and low-level spectrotemporal representations in frontal cortex raises the question of whether speech-evoked activity in frontal cortex strictly reflects downstream processing from lateral temporal cortex or whether there are direct parallel pathways from the thalamus or primary auditory cortex to the frontal lobe that supplement the traditional hierarchical architecture. Here, we used high-density direct cortical recordings, high-resolution diffusion tractography, and hemodynamic functional connectivity to evaluate for evidence of direct parallel inputs to frontal cortex from low-level areas. We found that neural populations in the frontal lobe show speech-evoked responses that are synchronous or occur earlier than responses in the lateral temporal cortex. These short latency frontal lobe neural populations encode spectrotemporal speech content indistinguishable from spectrotemporal encoding patterns observed in the lateral temporal lobe, suggesting parallel auditory speech representations reaching temporal and frontal cortex simultaneously. This is further supported by white matter tractography and functional connectivity patterns that connect the auditory nucleus of the thalamus (medial geniculate body) and the primary auditory cortex to the frontal lobe. Together, these results support the existence of a robust pathway of parallel inputs from low-level auditory areas to frontal lobe targets and illustrate long-range parallel architecture that works alongside the classical hierarchical speech network model.

17.
J Neurosci ; 32(28): 9700-5, 2012 Jul 11.
Article in English | MEDLINE | ID: mdl-22787055

ABSTRACT

Congenitally deaf individuals receive little or no auditory input, and when raised by deaf parents, they acquire sign as their native and primary language. We asked two questions regarding how the deaf brain in humans adapts to sensory deprivation: (1) is meaning extracted and integrated from signs using the same classical left hemisphere frontotemporal network used for speech in hearing individuals, and (2) in deafness, is superior temporal cortex encompassing primary and secondary auditory regions reorganized to receive and process visual sensory information at short latencies? Using MEG constrained by individual cortical anatomy obtained with MRI, we examined an early time window associated with sensory processing and a late time window associated with lexicosemantic integration. We found that sign in deaf individuals and speech in hearing individuals activate a highly similar left frontotemporal network (including superior temporal regions surrounding auditory cortex) during lexicosemantic processing, but only speech in hearing individuals activates auditory regions during sensory processing. Thus, neural systems dedicated to processing high-level linguistic information are used for processing language regardless of modality or hearing status, and we do not find evidence for rewiring of afferent connections from visual systems to auditory cortex.


Subjects
Brain Mapping; Deafness; Functional Laterality/physiology; Semantics; Sign Language; Temporal Lobe/physiopathology; Adolescent; Adult; Deafness/congenital; Deafness/pathology; Deafness/physiopathology; Evoked Potentials/physiology; Female; Humans; Magnetic Fields; Magnetic Resonance Imaging; Magnetoencephalography; Male; Photic Stimulation; Time Factors; Young Adult
18.
J Speech Lang Hear Res ; 66(10): 3825-3843, 2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37652065

ABSTRACT

PURPOSE: Subthreshold transcutaneous auricular vagus nerve stimulation (taVNS) synchronized with behavioral training can selectively enhance nonnative speech category learning in adults. Prior work has demonstrated that behavioral performance increases when taVNS is paired with easier-to-learn Mandarin tone categories in native English listeners, relative to when taVNS is paired with harder-to-learn Mandarin tone categories or without taVNS. Mechanistically, this temporally precise plasticity has been attributed to noradrenergic modulation. However, prior work did not specifically utilize methodologies that indexed noradrenergic modulation and, therefore, was unable to explicitly test this hypothesis. Our goal for this study was to use pupillometry to gain mechanistic insights into taVNS behavioral effects. METHOD: Thirty-eight participants learned to categorize Mandarin tones while pupillometry was recorded. In a double-blinded design, participants were divided into two taVNS groups that, as in the prior study, differed according to whether taVNS was paired with easier-to-learn tones or harder-to-learn tones. Learning performance and pupillary responses were measured using linear mixed-effects models. RESULTS: We found that taVNS did not have any tone-specific or group behavioral or pupillary effects. However, in an exploratory analysis, we observed that taVNS did lead to faster rates of learning on trials paired with stimulation, particularly for those who were stimulated at lower amplitudes. CONCLUSIONS: Our results suggest that pupillary responses may not be a reliable marker of locus coeruleus-norepinephrine system activity in humans. However, future research should systematically examine the effects of stimulation amplitude on both behavior and pupillary responses. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.24036666.
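The exploratory finding of faster learning on stimulated trials can be illustrated with a simplified slope estimate over trial blocks. The study itself fit linear mixed-effects models; the block accuracies, group labels, and plain OLS slope below are hypothetical stand-ins for that analysis:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical block-wise accuracies for taVNS-paired vs. unpaired trials.
blocks = [1, 2, 3, 4, 5]
paired_acc = [0.40, 0.55, 0.65, 0.75, 0.80]    # steeper learning curve
unpaired_acc = [0.40, 0.45, 0.50, 0.55, 0.60]  # shallower learning curve

paired_rate = ols_slope(blocks, paired_acc)      # accuracy gain per block
unpaired_rate = ols_slope(blocks, unpaired_acc)
```

A mixed-effects version would additionally model by-participant random intercepts and slopes, which is what licenses the trial-level inference reported in the abstract.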

19.
bioRxiv ; 2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37905047

ABSTRACT

Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception of melody varies along several pitch-based dimensions: (1) the absolute pitch of notes, (2) the difference in pitch between successive notes, and (3) the higher-order statistical expectation of each note conditioned on its prior context. While humans readily perceive melody, how these dimensions are collectively represented in the brain and whether their encoding is specialized for music remains unknown. Here, we recorded high-density neurophysiological activity directly from the surface of human auditory cortex while Western participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial code for representing distinct dimensions of melody. The same participants listened to spoken English, and we compared evoked responses to music and speech. Cortical sites selective for music were systematically driven by the encoding of expectation. In contrast, sites that encoded pitch and pitch-change used the same neural code to represent equivalent properties of speech. These findings reveal the multidimensional nature of melody encoding, consisting of both music-specific and domain-general sound representations in auditory cortex. Teaser: The human brain contains both general-purpose and music-specific neural populations for processing distinct attributes of melody.
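The three melody dimensions above (absolute pitch, pitch-change, and contextual expectation) can be sketched with a toy bigram model. The MIDI pitches, miniature corpus, and bigram surprisal measure below are illustrative assumptions, not the study's actual stimuli or statistical model:

```python
from collections import Counter
from math import log2

def melody_features(notes, corpus):
    """Per-note melody dimensions for a pitch sequence.

    notes: MIDI pitch numbers of a phrase.
    corpus: list of pitch sequences used to estimate bigram expectations.
    Returns (pitch, pitch_change, surprisal) tuples; the first note has
    no preceding context, so its change and surprisal are None.
    """
    # Bigram counts from the corpus, used to estimate P(next | previous).
    bigrams = Counter()
    prev_counts = Counter()
    for seq in corpus:
        for a, b in zip(seq, seq[1:]):
            bigrams[(a, b)] += 1
            prev_counts[a] += 1

    feats = []
    for i, n in enumerate(notes):
        if i == 0:
            feats.append((n, None, None))
            continue
        prev = notes[i - 1]
        change = n - prev  # interval from the previous note (dimension 2)
        p = bigrams[(prev, n)] / prev_counts[prev] if prev_counts[prev] else 0.0
        # Surprisal: high when a note violates the learned expectation (dimension 3).
        surprisal = -log2(p) if p > 0 else float("inf")
        feats.append((n, change, surprisal))
    return feats

# Toy corpus and phrase (hypothetical MIDI pitches).
corpus = [[60, 62, 64, 62, 60], [60, 62, 64, 65, 64]]
phrase = [60, 62, 64]
features = melody_features(phrase, corpus)
```

Models of melodic expectation used in this literature (e.g., IDyOM-style variable-order Markov models) generalize this bigram idea to longer contexts, but the three output columns correspond directly to the pitch, pitch-change, and expectation dimensions the abstract describes.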

20.
J Neurosurg Case Lessons ; 5(13)2023 Mar 27.
Article in English | MEDLINE | ID: mdl-37014023

ABSTRACT

BACKGROUND: Apraxia of speech is a disorder of speech-motor planning in which articulation is effortful and error-prone despite normal strength of the articulators. Phonological alexia and agraphia are disorders of reading and writing disproportionately affecting unfamiliar words. These disorders are almost always accompanied by aphasia. OBSERVATIONS: A 36-year-old woman underwent resection of a grade IV astrocytoma based in the left middle precentral gyrus, including a cortical site associated with speech arrest during electrocortical stimulation mapping. Following surgery, she exhibited moderate apraxia of speech and difficulty with reading and spelling, both of which improved but persisted 6 months after surgery. A battery of speech and language assessments was administered, revealing preserved comprehension, naming, cognition, and orofacial praxis, with largely isolated deficits in speech-motor planning and the spelling and reading of nonwords. LESSONS: This case describes a specific constellation of speech-motor and written language symptoms (apraxia of speech, phonological agraphia, and phonological alexia in the absence of aphasia), which the authors theorize may be attributable to disruption of a single process of "motor-phonological sequencing." The middle precentral gyrus may play an important role in the planning of motorically complex phonological sequences for production, independent of output modality.
