Results 1 - 10 of 10
1.
J Cogn Neurosci ; 32(10): 2001-2012, 2020 10.
Article in English | MEDLINE | ID: mdl-32662731

ABSTRACT

A listener's interpretation of a given speech sound can vary probabilistically from moment to moment. Previous experience (i.e., the contexts in which one has encountered an ambiguous sound) can further influence the interpretation of speech, a phenomenon known as perceptual learning for speech. This study used multivoxel pattern analysis to query how neural patterns reflect perceptual learning, leveraging archival fMRI data from a lexically guided perceptual learning study conducted by Myers and Mesite [Myers, E. B., & Mesite, L. M. Neural systems underlying perceptual adjustment to non-standard speech tokens. Journal of Memory and Language, 76, 80-93, 2014]. In that study, participants first heard ambiguous /s/-/∫/ blends in either /s/-biased lexical contexts (epi_ode) or /∫/-biased contexts (refre_ing); subsequently, they performed a phonetic categorization task on tokens from an /asi/-/a∫i/ continuum. In the current work, a classifier was trained to distinguish between phonetic categorization trials in which participants heard unambiguous productions of /s/ and those in which they heard unambiguous productions of /∫/. The classifier was able to generalize this training to ambiguous tokens from the middle of the continuum on the basis of individual participants' trial-by-trial perception. We take these findings as evidence that perceptual learning for speech involves neural recalibration, such that the pattern of activation approximates the perceived category. Exploratory analyses showed that left parietal regions (supramarginal and angular gyri) and right temporal regions (superior, middle, and transverse temporal gyri) were most informative for categorization. Overall, our results inform an understanding of how moment-to-moment variability in speech perception is encoded in the brain.
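The core decoding step described above — training a classifier on trials with unambiguous /s/ and /ʃ/ productions, then testing whether it generalizes to ambiguous mid-continuum tokens according to each listener's trial-by-trial percept — can be sketched as follows. All data, dimensions, and effect sizes are synthetic assumptions for illustration, not the authors' pipeline:

```python
# Sketch of lexically guided perceptual-learning decoding on synthetic
# "voxel patterns": train on unambiguous /s/ vs /ʃ/ trials, then test
# whether the classifier's labels track perception of ambiguous tokens.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels = 50

# Hypothetical category templates; unambiguous trials cluster around them.
s_template = rng.normal(0, 1, n_voxels)
sh_template = rng.normal(0, 1, n_voxels)

def simulate_trials(template, n=40, noise=0.5):
    return template + rng.normal(0, noise, (n, n_voxels))

X_train = np.vstack([simulate_trials(s_template), simulate_trials(sh_template)])
y_train = np.array([0] * 40 + [1] * 40)          # 0 = /s/, 1 = /ʃ/

clf = LinearSVC().fit(X_train, y_train)

# "Ambiguous" trials: halfway between templates, nudged toward the
# category the listener reportedly perceived on that trial.
perceived = rng.integers(0, 2, 30)
midpoint = (s_template + sh_template) / 2
nudge = np.where(perceived[:, None] == 1,
                 sh_template - midpoint, s_template - midpoint)
X_ambig = midpoint + 0.3 * nudge + rng.normal(0, 0.5, (30, n_voxels))

# Generalization test: do predicted labels track trial-by-trial perception?
accuracy = (clf.predict(X_ambig) == perceived).mean()
print(f"generalization accuracy: {accuracy:.2f}")
```

Above-chance accuracy on the ambiguous trials is the signature the study reports: the activation pattern approximates the perceived category, not just the acoustics.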


Subjects
Speech Perception, Speech, Humans, Language, Learning, Phonetics
2.
Sci Rep ; 10(1): 9917, 2020 06 18.
Article in English | MEDLINE | ID: mdl-32555256

ABSTRACT

Predictions of our sensory environment facilitate perception across domains. During speech perception, formal and temporal predictions may be made for phonotactic probability and syllable stress patterns, respectively, contributing to the efficient processing of speech input. The current experiment employed a passive EEG oddball paradigm to probe the neurophysiological processes underlying temporal and formal predictions simultaneously. The component of interest, the mismatch negativity (MMN), is considered a marker for experience-dependent change detection, where its timing and amplitude are indicative of the perceptual system's sensitivity to presented stimuli. We hypothesized that more predictable stimuli (i.e., high phonotactic probability and first-syllable stress) would facilitate change detection, indexed by shorter peak latencies or greater peak amplitudes of the MMN. This hypothesis was confirmed for phonotactic probability: high phonotactic probability deviants elicited an earlier MMN than low phonotactic probability deviants. We did not observe a significant modulation of the MMN by variations in syllable stress. Our findings confirm that speech perception is shaped by formal and temporal predictability. This paradigm may be useful for investigating the contribution of implicit processing of statistical regularities during (a)typical language development.
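The MMN is conventionally quantified on the deviant-minus-standard difference wave: its most negative deflection within a post-stimulus search window gives the peak latency and amplitude that are compared across conditions. A minimal sketch with toy Gaussian-shaped ERPs (all waveforms and parameters assumed, not the study's data):

```python
# Toy MMN measurement: build synthetic standard and deviant ERPs, take the
# difference wave, and locate its negative peak in a 100-250 ms window.
import numpy as np

sfreq = 500                                   # sampling rate in Hz (assumed)
t = np.arange(0, 0.4, 1 / sfreq)              # 0-400 ms post-stimulus

def erp(latency, amplitude):
    """Toy ERP: a Gaussian deflection at the given latency (s), in µV."""
    return amplitude * np.exp(-((t - latency) ** 2) / (2 * 0.02 ** 2))

standard = erp(0.15, -1.0)
deviant = erp(0.15, -3.5)                     # e.g. high phonotactic probability
difference = deviant - standard               # the MMN lives here

# Peak = most negative point within the search window.
window = (t >= 0.10) & (t <= 0.25)
peak_idx = np.flatnonzero(window)[np.argmin(difference[window])]
peak_latency_ms = t[peak_idx] * 1000
peak_amplitude = difference[peak_idx]
print(f"MMN peak: {peak_latency_ms:.0f} ms, {peak_amplitude:.2f} µV")
```

Repeating this per condition yields the latencies and amplitudes whose differences (earlier MMN for high-probability deviants) carry the study's result.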


Subjects
Acoustic Stimulation/methods, Attention/physiology, Auditory Perception/physiology, Electroencephalography/methods, Evoked Potentials, Auditory, Speech Perception/physiology, Speech/physiology, Adolescent, Adult, Female, Humans, Male, Phonetics, Young Adult
3.
Sci Rep ; 10(1): 4529, 2020 03 11.
Article in English | MEDLINE | ID: mdl-32161310

ABSTRACT

Speaking involves coordination of multiple neuromotor systems, including respiration, phonation and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that cortical and sub-cortical regions beyond the primary motor cortex (M1) help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips and velum (i.e., alveolars versus bilabials, and nasals versus orals), and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, cerebellum and basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region for phonation, we found that a dorsal M1 region, linked to respiratory control, showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underscoring its importance in the coordination of speech. Our study confirms and extends current knowledge of the neural mechanisms underlying neuromotor speech control; these methods hold promise for the non-invasive study of neural dysfunction in motor-speech disorders.


Subjects
Brain Mapping, Cerebral Cortex/diagnostic imaging, Cerebral Cortex/physiology, Magnetic Resonance Imaging, Phonation, Speech, Adult, Connectome, Female, Humans, Image Processing, Computer-Assisted, Male, Models, Biological, Motor Cortex/physiology, Reproducibility of Results, Young Adult
4.
Front Hum Neurosci ; 14: 555054, 2020.
Article in English | MEDLINE | ID: mdl-33408621

ABSTRACT

About one third of patients with epilepsy have seizures refractory to medical treatment. Electrical stimulation mapping (ESM) is the gold standard for the identification of "eloquent" areas prior to resection of epileptogenic tissue. However, it is time-consuming and may cause undesired side effects. Broadband gamma activity (55-200 Hz) recorded with extraoperative electrocorticography (ECoG) during cognitive tasks may be an alternative to ESM but has not yet proven of definitive clinical value. Considering their role in cognition, the alpha (8-12 Hz) and beta (15-25 Hz) bands could further improve the identification of eloquent cortex. We compared gamma, alpha and beta activity, and their combinations, for the identification of eloquent cortical areas defined by ESM. Ten patients with intractable focal epilepsy (age: 35.9 ± 9.1 years, range: 22-48, 8 female, 9 right-handed) participated in a delayed-match-to-sample task in which syllable sounds were compared to visually presented letters. We used a generalized linear model (GLM) approach to find the optimal weighting of each band for predicting ESM-defined categories and estimated diagnostic ability by calculating the area under the receiver operating characteristic (ROC) curve. Gamma activity increased more in eloquent than in non-eloquent areas, whereas alpha and beta power decreased more in eloquent areas. The diagnostic ability of each band was close to 0.7 but depended on multiple factors, including the time period of the cognitive task, the location of the electrodes and the patient's degree of attention to the stimulus. We show that diagnostic ability can be increased by 3-5% when combining gamma and alpha, and by 7.5-11% when combining gamma and beta. We then show how ECoG power modulation from cognitive testing can be used to map the probability of eloquence in individual patients and how this probability map can be used in clinical settings to optimize ESM planning. We conclude that the combination of gamma and beta power modulation during cognitive testing can contribute to the identification of eloquent areas prior to ESM in patients with refractory focal epilepsy.
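The band-combination idea — fit a GLM on per-electrode power changes and score how well single bands versus combined bands separate ESM-positive from ESM-negative electrodes via ROC AUC — can be sketched as below. Electrode counts, effect sizes, and the in-sample scoring are illustrative assumptions, not the clinical pipeline:

```python
# GLM (logistic regression) weighting of band-power features, scored by
# ROC AUC, on synthetic electrode data: gamma increases and alpha/beta
# decrease in "eloquent" cortex, each band weakly diagnostic on its own.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_electrodes = 200
eloquent = rng.integers(0, 2, n_electrodes)     # ESM label per electrode

gamma = 0.5 * eloquent + rng.normal(0, 1, n_electrodes)
alpha = -0.5 * eloquent + rng.normal(0, 1, n_electrodes)
beta = -0.5 * eloquent + rng.normal(0, 1, n_electrodes)

def auc(features):
    """In-sample AUC of a GLM on the given band-power features."""
    X = np.column_stack(features)
    scores = LogisticRegression().fit(X, eloquent).predict_proba(X)[:, 1]
    return roc_auc_score(eloquent, scores)

auc_gamma = auc([gamma])
auc_gamma_beta = auc([gamma, beta])
print(f"gamma alone: {auc_gamma:.2f}, gamma+beta: {auc_gamma_beta:.2f}")
```

A real evaluation would cross-validate rather than score in-sample, but the structure — one GLM per feature set, AUC as the diagnostic-ability metric — mirrors the analysis the abstract describes.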

5.
eNeuro ; 5(2)2018.
Article in English | MEDLINE | ID: mdl-29610768

ABSTRACT

Sensorimotor integration, the translation between acoustic signals and motoric programs, may constitute a crucial mechanism for speech. During speech perception, the acoustic-motoric translations include the recruitment of cortical areas for the representation of speech articulatory features, such as place of articulation. Selective attention can shape the processing and performance of speech perception tasks. Whether and where sensorimotor integration takes place during attentive speech perception remains to be explored. Here, we investigate articulatory feature representations of spoken consonant-vowel (CV) syllables during two distinct tasks. Fourteen healthy humans attended to either the vowel or the consonant within a syllable in separate delayed-match-to-sample tasks. Single-trial fMRI blood oxygenation level-dependent (BOLD) responses from perception periods were analyzed using multivariate pattern classification and a searchlight approach to reveal neural activation patterns sensitive to the processing of place of articulation (i.e., bilabial/labiodental vs. alveolar). To isolate place of articulation representation from acoustic covariation, we applied a cross-decoding (generalization) procedure across distinct features of manner of articulation (i.e., stop, fricative, and nasal). We found evidence for the representation of place of articulation across tasks and in both tasks separately: for attention to vowels, generalization maps included bilateral clusters of superior and posterior temporal, insular, and frontal regions; for attention to consonants, generalization maps encompassed clusters in temporoparietal, insular, and frontal regions within the right hemisphere only. Our results specify the cortical representation of place of articulation features generalized across manner of articulation during attentive syllable perception, thus supporting sensorimotor integration during attentive speech perception and demonstrating the value of generalization.


Subjects
Attention/physiology, Brain Mapping/methods, Cerebral Cortex/physiology, Psycholinguistics, Speech Perception/physiology, Speech/physiology, Adult, Cerebral Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
6.
Sci Rep ; 7(1): 5143, 2017 07 11.
Article in English | MEDLINE | ID: mdl-28698606

ABSTRACT

Learning to read requires the formation of efficient neural associations between written and spoken language. Whether these associations influence the auditory cortical representation of speech remains unknown. Here we address this question by combining multivariate functional MRI analysis and a newly-developed 'text-based recalibration' paradigm. In this paradigm, the pairing of visual text and ambiguous speech sounds shifts (i.e. recalibrates) the perceptual interpretation of the ambiguous sounds in subsequent auditory-only trials. We show that it is possible to retrieve the text-induced perceptual interpretation from fMRI activity patterns in the posterior superior temporal cortex. Furthermore, this auditory cortical region showed significant functional connectivity with the inferior parietal lobe (IPL) during the pairing of text with ambiguous speech. Our findings indicate that reading-related audiovisual mappings can adjust the auditory cortical representation of speech in typically reading adults. Additionally, they suggest the involvement of the IPL in audiovisual and/or higher-order perceptual processes leading to this adjustment. When applied in typical and dyslexic readers of different ages, our text-based recalibration paradigm may reveal relevant aspects of perceptual learning and plasticity during successful and failing reading development.


Subjects
Auditory Cortex/physiology, Magnetic Resonance Imaging/methods, Reading, Speech Perception/physiology, Speech/physiology, Adult, Female, Humans, Male, Parietal Lobe/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Young Adult
7.
Dev Cogn Neurosci ; 23: 1-13, 2017 02.
Article in English | MEDLINE | ID: mdl-27919003

ABSTRACT

Reading is a complex cognitive skill subserved by a distributed network of visual and language-related regions. Disruptions of connectivity within this network have been associated with developmental dyslexia but their relation to individual differences in the severity of reading problems remains unclear. Here we investigate whether dysfunctional connectivity scales with the level of reading dysfluency by examining EEG recordings during visual word and false font processing in 9-year-old typically reading children (TR) and two groups of dyslexic children: severely dysfluent (SDD) and moderately dysfluent (MDD) dyslexics. Results indicated weaker occipital to inferior-temporal connectivity for words in both dyslexic groups relative to TRs. Furthermore, SDDs exhibited stronger connectivity from left central to right inferior-temporal and occipital sites for words relative to TRs, and for false fonts relative to both MDDs and TRs. Importantly, reading fluency was positively related with forward and negatively with backward connectivity. Our results suggest disrupted visual processing of words in both dyslexic groups, together with a compensatory recruitment of right posterior brain regions especially in the SDDs during word and false font processing. Functional connectivity in the brain's reading network may thus depend on the level of reading dysfluency beyond group differences between dyslexic and typical readers.


Subjects
Brain/physiopathology, Dyslexia/physiopathology, Nerve Net/physiopathology, Photic Stimulation/methods, Reading, Brain Mapping/methods, Child, Female, Humans, Language, Male, Random Allocation, Reaction Time/physiology
8.
J Neurosci ; 35(45): 15015-25, 2015 Nov 11.
Article in English | MEDLINE | ID: mdl-26558773

ABSTRACT

The brain's circuitry for perceiving and producing speech may show a notable level of overlap that is crucial for normal development and behavior. The extent to which sensorimotor integration plays a role in speech perception remains highly controversial, however. Methodological constraints related to experimental designs and analysis methods have so far prevented the disentanglement of neural responses to acoustic versus articulatory speech features. Using a passive listening paradigm and multivariate decoding of single-trial fMRI responses to spoken syllables, we investigated brain-based generalization of articulatory features (place and manner of articulation, and voicing) beyond their acoustic (surface) form in adult human listeners. For example, we trained a classifier to discriminate place of articulation within stop syllables (e.g., /pa/ vs /ta/) and tested whether this training generalizes to fricatives (e.g., /fa/ vs /sa/). This novel approach revealed generalization of place and manner of articulation at multiple cortical levels within the dorsal auditory pathway, including auditory, sensorimotor, motor, and somatosensory regions, suggesting the representation of sensorimotor information. Additionally, generalization of voicing included the right anterior superior temporal sulcus associated with the perception of human voices as well as somatosensory regions bilaterally. Our findings highlight the close connection between brain systems for speech perception and production, and in particular, indicate the availability of articulatory codes during passive speech perception. SIGNIFICANCE STATEMENT: Sensorimotor integration is central to verbal communication and provides a link between auditory signals of speech perception and motor programs of speech production. It remains highly controversial, however, to what extent the brain's speech perception system actively uses articulatory (motor), in addition to acoustic/phonetic, representations. In this study, we examine the role of articulatory representations during passive listening using carefully controlled stimuli (spoken syllables) in combination with multivariate fMRI decoding. Our approach enabled us to disentangle brain responses to acoustic and articulatory speech properties. In particular, it revealed articulatory-specific brain responses of speech at multiple cortical levels, including auditory, sensorimotor, and motor regions, suggesting the representation of sensorimotor information during passive speech perception.
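The cross-decoding logic — train place-of-articulation decoding on stop syllables, test on fricatives, so that transfer can only come from a code shared across manner — can be sketched on synthetic data. The feature dimensions and pattern construction below are assumptions for illustration:

```python
# Cross-decoding sketch: a shared "place" pattern plus manner-specific
# patterns. Training on stops and testing on fricatives succeeds only via
# the shared place code, not via manner-specific acoustics.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_voxels, n_trials = 60, 50

place_axis = rng.normal(0, 1, n_voxels)        # shared place-of-articulation code
stop_axis = rng.normal(0, 1, n_voxels)         # manner-specific patterns
fricative_axis = rng.normal(0, 1, n_voxels)

def trials(place, manner_axis):
    base = place * place_axis + manner_axis
    return base + rng.normal(0, 1.0, (n_trials, n_voxels))

# Train on stops: labial (place=-1, e.g. /pa/) vs alveolar (place=+1, e.g. /ta/).
X_train = np.vstack([trials(-1, stop_axis), trials(+1, stop_axis)])
# Test on fricatives with the same place contrast (e.g. /fa/ vs /sa/).
X_test = np.vstack([trials(-1, fricative_axis), trials(+1, fricative_axis)])
y = np.array([0] * n_trials + [1] * n_trials)

clf = LinearSVC().fit(X_train, y)
transfer_accuracy = (clf.predict(X_test) == y).mean()
print(f"cross-manner transfer accuracy: {transfer_accuracy:.2f}")
```

In the study this test is run per searchlight location, and above-chance transfer marks regions carrying an articulatory (manner-invariant) place code.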


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Pathways/physiology, Magnetic Resonance Imaging/methods, Speech Perception/physiology, Speech/physiology, Adult, Brain Mapping/methods, Female, Humans, Male
9.
Front Psychol ; 6: 71, 2015.
Article in English | MEDLINE | ID: mdl-25705197

ABSTRACT

Spoken word recognition and production require fast transformations between acoustic, phonological, and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different, words. Here we exploit this capacity of bilinguals to investigate input-invariant semantic representations in the brain. We acquired EEG data while Dutch subjects, highly proficient in English, listened to four monosyllabic and acoustically distinct animal words in both languages (e.g., "paard"-"horse"). Multivariate pattern analysis (MVPA) was applied to identify EEG response patterns that discriminate between individual words within one language (within-language discrimination) and generalize meaning across two languages (across-language generalization). Furthermore, employing two EEG feature selection approaches, we assessed the contribution of temporal and oscillatory EEG features to our classification results. MVPA revealed that within-language discrimination was possible in a broad time-window (~50-620 ms) after word onset, probably reflecting acoustic-phonetic and semantic-conceptual differences between the words. Most interestingly, significant across-language generalization was possible around 550-600 ms, suggesting the activation of common semantic-conceptual representations from the Dutch and English nouns. Both types of classification showed a strong contribution of oscillations below 12 Hz, indicating the importance of low-frequency oscillations in the neural representation of individual words and concepts. This study demonstrates the feasibility of MVPA to decode individual spoken words from EEG responses and to assess the spectro-temporal dynamics of their language-invariant semantic-conceptual representations. We discuss how this method and these results may be relevant for tracking the neural mechanisms underlying conceptual encoding in comprehension and production.
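The contrast between within-language discrimination and across-language generalization can be sketched on synthetic "EEG" features: each trial mixes a language-specific acoustic pattern with a semantic pattern shared across languages, so a classifier trained on Dutch trials can transfer to the English translations only through the shared part. All patterns and dimensions are assumptions for illustration:

```python
# Within-language vs across-language word decoding on synthetic features.
# Transfer from Dutch to English works via the shared semantic code.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_feat, n_trials = 40, 60
semantic = rng.normal(0, 1, (2, n_feat))               # shared concept codes
acoustic = {lang: rng.normal(0, 1, (2, n_feat)) for lang in ("nl", "en")}

def trials(lang):
    """Trials for two concepts: shared semantic + language-specific acoustic."""
    X = np.vstack([
        semantic[c] + acoustic[lang][c] + rng.normal(0, 1.0, (n_trials, n_feat))
        for c in (0, 1)
    ])
    y = np.array([0] * n_trials + [1] * n_trials)
    return X, y

X_nl, y_nl = trials("nl")
X_en, y_en = trials("en")

clf = LinearSVC().fit(X_nl, y_nl)
within = (clf.predict(trials("nl")[0]) == y_nl).mean()   # fresh Dutch trials
across = (clf.predict(X_en) == y_en).mean()              # English translations
print(f"within-language: {within:.2f}, across-language: {across:.2f}")
```

In the study, above-chance across-language accuracy in the 550-600 ms window is what implicates a common semantic-conceptual representation; running this comparison per time window or frequency band corresponds to the feature-selection analyses the abstract mentions.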

10.
Rev Port Cardiol ; 23(5): 671-81, 2004 May.
Article in English, Portuguese | MEDLINE | ID: mdl-15279452

ABSTRACT

Coronary artery anomalies, although less frequent than congenital anomalies of the heart chambers and valve morphology, should be considered across a wide age range and in both sexes, both as a possible etiology of myocardial ischemia, infarction, and sudden death, and in the planning of heart surgery for coronary revascularization, correction of congenital heart malformations, or valve replacement. Between January 1996 and June 2002, we reviewed our catheterization database and carried out a retrospective study of the 3660 angiographies performed in our cardiology department. The patients were referred for a positive ischemic test, acute coronary syndrome, and/or valvular heart disease. Among the 3660 angiographies we identified 25 patients (0.68%) with coronary artery anomalies, and we report the prevalence and types of these anomalies in the population studied. We also assessed the presence of coronary artery disease.


Subjects
Coronary Angiography, Coronary Vessel Anomalies/diagnostic imaging, Coronary Vessel Anomalies/epidemiology, Aged, Female, Humans, Male, Middle Aged, Retrospective Studies