Results 1 - 2 of 2
1.
Cogn Neuropsychol; 36(3-4): 158-166, 2019.
Article in English | MEDLINE | ID: mdl-29786470

ABSTRACT

Music and speech are human-specific behaviours that share numerous properties, including the fine motor skills required to produce them. Given these similarities, previous work has suggested that music and speech may at least partially share neural substrates. To date, much of this work has focused on perception, and has not investigated the neural basis of production, particularly in trained musicians. Here, we report two rare cases of musicians undergoing neurosurgical procedures, where it was possible to directly stimulate the left hemisphere cortex during speech and piano/guitar music production tasks. We found that stimulation to left inferior frontal cortex, including pars opercularis and ventral pre-central gyrus, caused slowing and arrest for both speech and music, and note sequence errors for music. Stimulation to posterior superior temporal cortex only caused production errors during speech. These results demonstrate partially dissociable networks underlying speech and music production, with a shared substrate in frontal regions.


Subject(s)
Brain Mapping/methods, Music/psychology, Speech/physiology, Temporal Lobe/physiopathology, Adolescent, Adult, Humans, Male
2.
Cereb Cortex; 28(12): 4222-4233, 2018 Dec 1.
Article in English | MEDLINE | ID: mdl-29088345

ABSTRACT

Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from an epileptic patient with proficient musical ability in 2 conditions. First, the participant played 2 piano pieces on an electronic piano with the digital keyboard's audio output on. Second, the participant replayed the same piano pieces, but without auditory feedback, and was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, thus allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery with substantial, but not complete, overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception.
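The encoding-model approach described above can be illustrated with a minimal sketch. This is not the authors' actual pipeline; it assumes a common variant of spectrotemporal receptive field (STRF) estimation, in which time-lagged copies of a spectrogram are regressed onto a neural signal with ridge regression, here demonstrated on synthetic data where the "neural" signal is driven by one frequency band at a known lag:

```python
import numpy as np

def build_lagged_features(spectrogram, n_lags):
    """Stack time-lagged copies of a (time x freq) spectrogram so each
    row holds the recent spectrotemporal history of the stimulus."""
    n_t, n_f = spectrogram.shape
    X = np.zeros((n_t, n_lags * n_f))
    for lag in range(n_lags):
        X[lag:, lag * n_f:(lag + 1) * n_f] = spectrogram[:n_t - lag]
    return X

def fit_strf(spectrogram, neural, n_lags=8, alpha=1.0):
    """Ridge-regression STRF: weights mapping the lagged spectrogram
    to neural activity, reshaped to (lags x frequencies)."""
    X = build_lagged_features(spectrogram, n_lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ neural)
    return w.reshape(n_lags, spectrogram.shape[1])

# Synthetic demo: activity follows frequency band 5 delayed by 3 samples.
rng = np.random.default_rng(0)
spec = rng.standard_normal((2000, 16))
neural = np.roll(spec[:, 5], 3) + 0.1 * rng.standard_normal(2000)

strf = fit_strf(spec, neural)
peak = np.unravel_index(np.abs(strf).argmax(), strf.shape)
print(peak)
```

The fitted weight matrix peaks at the lag and frequency that actually drive the signal, which is how a measured STRF reveals a site's frequency tuning and temporal integration window; comparing such maps across the playing and imagery conditions is what allows the overlap described in the abstract to be quantified.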


Subject(s)
Auditory Perception/physiology, Cerebral Cortex/physiology, Gamma Rhythm, Imagination/physiology, Music, Neurons/physiology, Acoustic Stimulation, Brain Mapping/methods, Evoked Potentials, Auditory, Feedback, Sensory, Humans