Results 1 - 20 of 50
1.
J Neurosci ; 39(50): 10096-10103, 2019 Dec 11.
Article in English | MEDLINE | ID: mdl-31699888

ABSTRACT

We tested the popular, unproven theory that tinnitus is caused by resetting of auditory predictions toward a persistent low-intensity sound. Electroencephalographic mismatch negativity responses, which quantify the violation of sensory predictions, to unattended tinnitus-like sounds were greater in response to upward than downward intensity deviants in 26 unselected chronic tinnitus subjects with normal to severely impaired hearing, and in 15 acute tinnitus subjects, but not in 26 hearing- and age-matched controls (p < 0.001; receiver operating characteristic area under the curve, 0.77), or in 20 healthy and hearing-impaired controls presented with simulated tinnitus. The findings support a prediction resetting model of tinnitus generation, and may form the basis of a convenient tinnitus biomarker, which we name Intensity Mismatch Asymmetry; it is usable across species, is quick and tolerable, and requires no training.

SIGNIFICANCE STATEMENT: In current models, perception is based around the generation of internal predictions of the environment, which are tested and updated using evidence from the senses. Here, we test the theory that auditory phantom perception (tinnitus) occurs when a default auditory prediction is formed to explain spontaneous activity in the subcortical pathway, rather than ignoring it as noise. We find that chronic tinnitus patients show an abnormal pattern of evoked responses to unexpectedly loud and quiet sounds that both supports this hypothesis and provides fairly accurate classification of tinnitus status at the individual subject level. This approach to objectively demonstrating the predictions underlying pathological perceptual states may also have a much wider utility, for instance, in chronic pain.


Subjects
Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Hearing Loss/physiopathology, Tinnitus/physiopathology, Acoustic Stimulation, Adult, Aged, Electroencephalography, Female, Humans, Male, Middle Aged
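The classification accuracy reported above (ROC area under the curve, 0.77) can be illustrated with a small self-contained sketch. The asymmetry index values below are invented for illustration; only the group sizes (26 tinnitus vs 26 controls) follow the abstract.

```python
import random

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve, computed as the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count as half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

random.seed(0)
# Invented mismatch-asymmetry indices: tinnitus group shifted upward.
tinnitus = [random.gauss(1.0, 1.0) for _ in range(26)]
controls = [random.gauss(0.0, 1.0) for _ in range(26)]
print(round(roc_auc(tinnitus, controls), 2))
```

An AUC of 0.5 means the index carries no group information; 1.0 means perfect separation at the individual-subject level.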
2.
Hum Brain Mapp ; 36(2): 643-54, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25307551

ABSTRACT

A major assumption of brain-machine interface research is that patients with disconnected neural pathways can still volitionally recall precise motor commands that could be decoded for naturalistic prosthetic control. However, the disconnected condition of these patients also blocks kinaesthetic feedback from the periphery, which has been shown to regulate centrally generated output responsible for accurate motor control. Here, we tested how well motor commands are generated in the absence of kinaesthetic feedback by decoding hand movements from human scalp electroencephalography in three conditions: unimpaired movement, imagined movement, and movement attempted during temporary disconnection of peripheral afferent and efferent nerves by ischemic nerve block. Our results suggest that the recall of cortical motor commands is impoverished in the absence of kinaesthetic feedback, challenging the possibility of precise naturalistic cortical prosthetic control.


Subjects
Brain/physiology, Sensory Feedback/physiology, Motor Activity/physiology, Wrist/physiology, Electroencephalography, Humans, Imagination/physiology, Ischemia, Male, Nerve Block, Computer-Assisted Signal Processing
3.
Q J Exp Psychol (Hove) ; 77(5): 1125-1135, 2024 May.
Article in English | MEDLINE | ID: mdl-37710360

ABSTRACT

In a form priming experiment with a lexical decision task, we investigated whether the representational structure of lexical tone in lexical memory impacts spoken-word recognition in Mandarin. Target monosyllabic words were preceded by five types of primes: (1) the same real words (/lun4/-/lun4/), (2) real words with only tone contrasts (/lun2/-/lun4/), (3) unrelated real words (/pie3/-/lun4/), (4) pseudowords with only tone contrasts (*/lun3/-/lun4/), and (5) unrelated pseudowords (*/tai3/-/lun4/). We found a facilitation effect in target words with pseudoword primes that share the segmental syllable but contrast in tones (*/lun3/-/lun4/). Moreover, no evident form priming effect was observed in target words primed by real words with only tone contrasts (/lun2/-/lun4/). These results suggest that the recognition of a tone word is influenced by the representational level of tone accessed by the prime word. The distinctive priming patterns between real-word and pseudoword primes are best explained by the connectionist models of tone-word recognition, which assume a hierarchical representation of lexical tone.

4.
PLoS One ; 18(8): e0289062, 2023.
Article in English | MEDLINE | ID: mdl-37549154

ABSTRACT

We attempted to replicate a potential tinnitus biomarker in humans, the Intensity Mismatch Asymmetry, based on the Sensory Precision Integrative Model of Tinnitus. The design also incorporated several refinements, including tighter matching of participants for gender, and a control stimulus frequency of 1 kHz to investigate whether any differences between control and tinnitus groups are specific to the tinnitus frequency or domain-general. The expectation was that the MMN responses of the tinnitus and control groups would differ at the tinnitus frequency but not at the control frequency: the tinnitus group would have larger, more negative responses to upward deviants than to downward deviants, and the control group would show the opposite pattern or no effect of deviant direction. However, no significant group differences were found. There was a striking difference in response amplitude to control frequency stimuli compared with tinnitus frequency stimuli, which could be an intrinsic quality of responses to these frequencies or could reflect high-frequency hearing loss in the sample. Additionally, upward deviants elicited stronger MMN responses in both groups at the tinnitus frequency, but not at the control frequency. Factors contributing to these discrepant results at the tinnitus frequency could include hyperacusis, attention, and wider contextual effects of other frequencies used in the experiment (i.e., the control frequency in other blocks).


Subjects
Auditory Evoked Potentials, Tinnitus, Humans, Auditory Evoked Potentials/physiology, Acoustic Stimulation/methods, Electroencephalography/methods, Tinnitus/diagnosis, Attention/physiology
5.
Alzheimers Res Ther ; 14(1): 109, 2022 Aug 05.
Article in English | MEDLINE | ID: mdl-35932060

ABSTRACT

INTRODUCTION: Differentiating Lewy body dementia from other common dementia types is clinically difficult, and a considerable number of cases are identified only post-mortem. Consequently, there is a clear need for inexpensive and accurate diagnostic approaches for clinical use. Electroencephalography (EEG) is one potential candidate due to its relatively low cost and non-invasive nature. Previous studies examining the use of EEG as a dementia diagnostic have focussed on the eyes-closed (EC) resting state; however, eyes-open (EO) EEG may also be a useful adjunct to quantitative analysis due to clinical availability. METHODS: We extracted spectral properties from EEG signals recorded under research study protocols (1024 Hz sampling rate, 10-5 EEG layout). The data come from a total of 40 dementia patients, with an average age of 74.42, 75.81 and 73.88 years for Alzheimer's disease (AD), dementia with Lewy bodies (DLB) and Parkinson's disease dementia (PDD), respectively, and 15 healthy controls (HC) with an average age of 76.93 years. We used k-nearest neighbour, support vector machine and logistic regression machine learning models to differentiate between groups using spectral data from the delta, theta, high theta, alpha and beta EEG bands. RESULTS: We found that combining EC and EO resting-state EEG data significantly increased inter-group classification accuracy compared with methods not using EO data. Secondly, we observed a distinct increase in dominant frequency variance for HC between the EO and EC states, which was not observed within any dementia subgroup. For inter-group classification, we achieved a specificity of 0.87 and sensitivity of 0.92 for HC vs dementia classification, and a specificity of 0.75 and sensitivity of 0.91 for AD vs DLB classification, with a k-nearest neighbour model that outperformed the other machine learning methods.
CONCLUSIONS: Our findings indicate that combining EC and EO quantitative EEG features improves overall classification accuracy when classifying dementia types in older adults. In addition, we demonstrate that healthy controls display a definite change in dominant frequency variance between the EC and EO states. In future, a validation cohort should be used to further solidify these findings.


Subjects
Alzheimer Disease, Dementia, Lewy Body Disease, Parkinson Disease, Adult, Aged, Alzheimer Disease/diagnosis, Dementia/diagnosis, Electroencephalography/methods, Humans, Lewy Body Disease/diagnosis
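A minimal sketch of the k-nearest-neighbour approach described above — classifying subjects from EEG band-power features and scoring sensitivity/specificity. All feature values and labels here are invented toy data, not the study's.

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify feature vector x by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    dists = sorted((math.dist(x, t), lab) for t, lab in zip(train, labels))
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

def sensitivity_specificity(y_true, y_pred, positive="dementia"):
    """Sensitivity = true-positive rate; specificity = true-negative rate."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Toy demo with invented two-feature vectors (e.g. theta and alpha power).
train = [(0.9, 0.2), (1.0, 0.25), (0.2, 0.8), (0.25, 0.9)]
labels = ["dementia", "dementia", "HC", "HC"]
print(knn_predict(train, labels, (0.95, 0.22)))  # → dementia
```

In the study, each subject's feature vector would hold band powers from both the EC and EO states; the abstract's finding is that concatenating both states improves this kind of classification.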
6.
Neuroimage ; 54(3): 2267-77, 2011 Feb 01.
Article in English | MEDLINE | ID: mdl-20970510

ABSTRACT

During auditory perception, we are required to abstract information from complex temporal sequences such as those in music and speech. Here, we investigated how higher-order statistics modulate the neural responses to sound sequences, hypothesizing that these modulations are associated with higher levels of the peri-Sylvian auditory hierarchy. We devised second-order Markov sequences of pure tones with uniform first-order transition probabilities. Participants learned to discriminate these sequences from random ones. Magnetoencephalography was used to identify evoked fields in which second-order transition probabilities were encoded. We show that improbable tones evoked heightened neural responses after 200 ms post-tone onset during exposure at the learning stage or around 150 ms during the subsequent test stage, originating near the right temporoparietal junction. These signal changes reflected higher-order statistical learning, which can contribute to the perception of natural sounds with hierarchical structures. We propose that our results reflect hierarchical predictive representations, which can contribute to the experiences of speech and music.


Subjects
Auditory Perception/physiology, Acoustic Stimulation, Auditory Cortex/physiology, Discrimination Learning/physiology, Auditory Evoked Potentials/physiology, Forecasting, Humans, Learning/physiology, Magnetoencephalography, Markov Chains, Parietal Lobe/physiology, Temporal Lobe/physiology
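The second-order Markov tone sequences described above can be sketched as follows. The tone set and transition table are invented for illustration; the study's additional constraint that first-order transition probabilities be uniform is noted but not enforced by this toy table.

```python
import random

TONES = ["A", "B", "C"]  # stand-ins for three pure-tone frequencies

# P(next tone | previous two tones); values invented for illustration.
# Pairs missing from the table fall back to a uniform distribution.
TABLE = {
    ("A", "B"): {"A": 0.1, "B": 0.1, "C": 0.8},
    ("B", "C"): {"A": 0.8, "B": 0.1, "C": 0.1},
}

def next_tone(prev2, rng):
    probs = TABLE.get(prev2, {t: 1.0 / len(TONES) for t in TONES})
    r = rng.random()
    cum = 0.0
    for tone, p in probs.items():
        cum += p
        if r < cum:
            return tone
    return TONES[-1]  # guard against floating-point rounding

def generate(length, seed=0):
    rng = random.Random(seed)
    seq = [rng.choice(TONES), rng.choice(TONES)]
    while len(seq) < length:
        seq.append(next_tone((seq[-2], seq[-1]), rng))
    return seq

print("".join(generate(20)))
```

Because the dependency spans two tones back, such sequences cannot be distinguished from random ones by pairwise transition counts alone — which is what makes them a probe of higher-order statistical learning.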
7.
Cogn Emot ; 25(4): 599-611, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21547763

ABSTRACT

Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlation of 43 acoustic parameters with individual emotional dimensions revealed that each emotional dimension was associated with a number of vocal cues. Common patterns of cues were found with emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.


Subjects
Cues (Psychology), Emotions, Laughter/psychology, Speech Perception, Acoustic Stimulation, Adult, Arousal, Female, Humans, Male, Neuropsychological Tests
8.
Neuroimage ; 53(4): 1264-71, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20600991

ABSTRACT

Laughter is highly relevant for social interaction in human beings and non-human primates. In humans as well as in non-human primates, laughter can be induced by tickling. Human laughter, however, has further diversified and encompasses emotional laughter types with various communicative functions, e.g. joyful and taunting laughter. Here, we evaluated whether this evolutionary diversification of ecological functions is associated with distinct cerebral responses underlying laughter perception. Functional MRI revealed a double dissociation of cerebral responses during perception of tickling laughter and emotional laughter (joy and taunt), with higher activations in the anterior rostral medial frontal cortex (arMFC) when emotional laughter was perceived, and stronger responses in the right superior temporal gyrus (STG) during appreciation of tickling laughter. Enhanced activation of the arMFC for emotional laughter presumably reflects increasing demands on social cognition processes arising from the greater social salience of these laughter types. The activation increase in the STG for tickling laughter may be linked to the higher acoustic complexity of this laughter type. The observed dissociation of cerebral responses for emotional and tickling laughter was independent of task-directed focusing of attention. These findings support the postulated diversification of human laughter in the course of evolution from an unequivocal play signal to laughter with distinct emotional contents subserving complex social functions.


Subjects
Auditory Perception/physiology, Brain Mapping, Brain/physiology, Laughter/physiology, Adult, Brain/anatomy & histology, Emotions/physiology, Female, Humans, Computer-Assisted Image Interpretation, Magnetic Resonance Imaging, Male
9.
Elife ; 9, 2020 Feb 12.
Article in English | MEDLINE | ID: mdl-32048994

ABSTRACT

MRI experiments have revealed how throat singers from Tuva produce their characteristic sound.


Subjects
Singing, Pharynx, Sound, Speech Acoustics
10.
Emotion ; 9(3): 397-405, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19485617

ABSTRACT

Although laughter is important in human social interaction, its role as a communicative signal is poorly understood. Because laughter is expressed in various emotional contexts, the question arises as to whether different emotions are communicated. In the present study, participants had to appraise 4 types of laughter sounds (joy, tickling, taunting, schadenfreude) either by classifying them according to the underlying emotion or by rating them according to different emotional dimensions. The authors found that emotions in laughter (a) can be classified into different emotional categories, and (b) can have distinctive profiles on W. Wundt's (1905) emotional dimensions. This shows that laughter is a multifaceted social behavior that can adopt various emotional connotations. The findings support the postulated function of laughter in establishing group structure, whereby laughter is used either to include or to exclude individuals from group coherence.


Subjects
Emotions/physiology, Laughter/physiology, Social Behavior, Humans, Interpersonal Relations, Nonverbal Communication
11.
Cereb Cortex ; 18(3): 541-52, 2008 Mar.
Article in English | MEDLINE | ID: mdl-17591598

ABSTRACT

Speech contains prosodic cues such as pauses between different phrases of a sentence. These intonational phrase boundaries (IPBs) elicit a specific component in event-related brain potential studies, the so-called closure positive shift. The aim of the present functional magnetic resonance imaging study is to identify the neural correlates of this prosody-related component in sentences containing segmental and prosodic information (natural speech) and hummed sentences only containing prosodic information. Sentences with 2 IPBs both in normal and hummed speech activated the middle superior temporal gyrus, the rolandic operculum, and the gyrus of Heschl more strongly than sentences with 1 IPB. The results from a region of interest analysis of auditory cortex and auditory association areas suggest that the posterior rolandic operculum, in particular, supports the processing of prosodic information. A comparison of natural speech and hummed sentences revealed a number of left-hemispheric areas within the temporal lobe as well as in the frontal and parietal lobe that were activated more strongly for natural speech than for hummed sentences. These areas constitute the neural network for the processing of natural speech. The finding that no area was activated more strongly for hummed sentences compared with natural speech suggests that prosody is an integrated part of natural speech.


Subjects
Acoustic Stimulation/methods, Magnetic Resonance Imaging/methods, Speech Perception/physiology, Speech/physiology, Adult, Auditory Cortex/physiology, Cues (Psychology), Female, Humans, Male
12.
J Acoust Soc Am ; 126(1): 354-66, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19603892

ABSTRACT

Although listeners are able to decode the underlying emotions embedded in acoustical laughter sounds, little is known about the acoustical cues that differentiate between the emotions. This study investigated the acoustical correlates of laughter expressing four different emotions: joy, tickling, taunting, and schadenfreude. Analysis of 43 acoustic parameters showed that the four emotions could be accurately discriminated on the basis of a small parameter set. Vowel quality contributed only minimally to emotional differentiation whereas prosodic parameters were more effective. Emotions are expressed by similar prosodic parameters in both laughter and speech.


Subjects
Emotions, Laughter, Acoustics, Analysis of Variance, Female, Happiness, Humans, Male, Phonetics, Sound Spectrography, Young Adult
13.
Elife ; 8, 2019 Apr 08.
Article in English | MEDLINE | ID: mdl-30958267

ABSTRACT

What determines how we move in the world? Motor neuroscience often focusses either on intrinsic rhythmical properties of motor circuits or extrinsic sensorimotor feedback loops. Here we show that the interplay of both intrinsic and extrinsic dynamics is required to explain the intermittency observed in continuous tracking movements. Using spatiotemporal perturbations in humans, we demonstrate that apparently discrete submovements made 2-3 times per second reflect constructive interference between motor errors and continuous feedback corrections that are filtered by intrinsic circuitry in the motor system. Local field potentials in monkey motor cortex revealed characteristic signatures of a Kalman filter, giving rise to both low-frequency cortical cycles during movement, and delta oscillations during sleep. We interpret these results within the framework of optimal feedback control, and suggest that the intrinsic rhythmicity of motor cortical networks reflects an internal model of external dynamics, which is used for state estimation during feedback-guided movement. Editorial note: This article has been through an editorial process in which the authors decide how to respond to the issues raised during peer review. The Reviewing Editor's assessment is that all the issues have been addressed (see decision letter).


Subjects
Motor Activity, Motor Cortex/physiology, Movement, Nerve Net/physiology, Adult, Animals, Female, Humans, Macaca mulatta, Male, Neurologic Models, Young Adult
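A one-dimensional Kalman filter — the kind of state estimator the abstract proposes motor cortical networks implement for feedback-guided movement — can be sketched minimally. All dynamics and noise parameters below are invented for illustration.

```python
def kalman_step(x_est, p_est, z, a=1.0, q=0.01, h=1.0, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    x_est, p_est : prior state estimate and its variance
    z            : new sensory measurement
    a, q         : state transition and process-noise variance (internal model)
    h, r         : observation gain and measurement-noise variance
    """
    # Predict: propagate the estimate through the internal model of dynamics.
    x_pred = a * x_est
    p_pred = a * p_est * a + q
    # Update: blend the prediction with sensory feedback via the Kalman gain.
    k = p_pred * h / (h * p_pred * h + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1 - k * h) * p_pred
    return x_new, p_new

# Track a constant true position of 1.0 from noisy measurements.
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0, 0.95]:
    x, p = kalman_step(x, p, z)
print(round(x, 2))
```

The "constructive interference" the abstract describes arises when such feedback corrections, filtered through the internal model, recombine with ongoing motor errors at the circuit's intrinsic rhythm.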
14.
Front Psychol ; 10: 681, 2019.
Article in English | MEDLINE | ID: mdl-30984081

ABSTRACT

Two outstanding questions in spoken-language comprehension concern (1) the interplay of phonological grammar (legal vs. illegal sound sequences), phonotactic frequency (high- vs. low-frequency sound sequences) and lexicality (words vs. other sound sequences) in a meaningful context, and (2) how the properties of phonological sequences determine their inclusion or exclusion from lexical-semantic processing. In the present study, we used a picture-sound priming paradigm to examine the ERP responses of adult listeners to grammatically illegal sound sequences, to grammatically legal sound sequences (pseudowords) with low- vs. high-frequency, and to real words that were either congruent or incongruent to the picture context. Results showed less negative N1-P2 responses for illegal sequences and low-frequency pseudowords (with differences in topography), but not high-frequency ones. Low-frequency pseudowords also showed an increased P3 component. However, just like illegal sequences, neither low- nor high-frequency pseudowords differed from congruent words in the N400. Thus, phonotactic frequency had an impact before, but not during lexical-semantic processing. Our results also suggest that phonological grammar, phonotactic frequency and lexicality may follow each other in this order during word processing.

15.
Brain Res ; 1220: 179-90, 2008 Jul 18.
Article in English | MEDLINE | ID: mdl-18096139

ABSTRACT

In the present study we investigated the functional organization of sublexical auditory perception with specific respect to auditory spectro-temporal processing in speech and non-speech sounds. Participants discriminated verbal and nonverbal auditory stimuli according to either spectral or temporal acoustic features in the context of a sparse event-related functional magnetic resonance imaging (fMRI) study. Based on recent models of speech processing, we hypothesized that auditory segmental processing, as is required in the discrimination of speech and non-speech sound according to its temporal features, will lead to a specific involvement of a left-hemispheric dorsal processing network comprising the posterior portion of the inferior frontal cortex and the inferior parietal lobe. In agreement with our hypothesis results revealed significant responses in the posterior part of the inferior frontal gyrus and the parietal operculum of the left hemisphere when participants had to discriminate speech and non-speech stimuli based on subtle temporal acoustic features. In contrast, when participants had to discriminate speech and non-speech stimuli on the basis of changes in the frequency content, we observed bilateral activations along the middle temporal gyrus and superior temporal sulcus. The results of the present study demonstrate an involvement of the dorsal pathway in the segmental sublexical analysis of speech sounds as well as in the segmental acoustic analysis of non-speech sounds with analogous spectro-temporal characteristics.


Subjects
Auditory Cortex/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Brain Mapping, Speech Perception/physiology, Acoustic Stimulation/methods, Analysis of Variance, Auditory Cortex/blood supply, Auditory Pathways/blood supply, Functional Laterality, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging/methods, Oxygen/blood, Psycholinguistics, Speech/physiology
16.
Schizophr Bull ; 34(5): 962-73, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18544550

ABSTRACT

Cognitive disruption in schizophrenia is associated with altered patterns of spatiotemporal interaction associated with multiple electroencephalogram (EEG) frequency bands in cortex. In particular, changes in the generation of gamma (30-80 Hz) and beta2 (20-29 Hz) rhythms correlate with observed deficits in communication between different cortical areas. Aspects of these changes can be reproduced in animal models, most notably those involving acute or chronic reduction in glutamatergic synaptic communication mediated by N-methyl D-aspartate (NMDA) receptors. In vitro electrophysiological and immunocytochemical approaches afforded by such animal models continue to reveal a great deal about the mechanisms underlying EEG rhythm generation and are beginning to uncover which basic molecular, cellular, and network phenomena may underlie their disruption in schizophrenia. Here we briefly review the evidence for changes in gamma-aminobutyric acidergic (GABAergic) and glutamatergic function and address the problem of region specificity of changes with quantitative comparisons of effects of ketamine on gamma and beta2 rhythms in vitro. We conclude, from available evidence, that many observed changes in markers for GABAergic function in schizophrenia may be secondary to deficits in NMDA receptor-mediated excitatory synaptic activity. Furthermore, the broad range of changes in cortical dynamics seen in schizophrenia -- with contrasting effects seen in different brain regions and for different frequency bands -- may be more directly attributable to underlying deficits in glutamatergic neuronal communication rather than GABAergic inhibition alone.


Subjects
Electroencephalography, N-Methyl-D-Aspartate Receptors/physiology, Schizophrenia/diagnosis, Schizophrenia/physiopathology, Humans, GABA-A Receptors/physiology, Signal Transduction
17.
Brain Lang ; 104(2): 159-69, 2008 Feb.
Article in English | MEDLINE | ID: mdl-17428526

ABSTRACT

The current study on German investigates Event-Related brain Potentials (ERPs) for the perception of sentences with intonations which are infrequent (i.e. vocatives) or inadequate in daily conversation. These ERPs are compared to the processing correlates for sentences in which the syntax-to-prosody relations are congruent and used frequently during communication. Results show that perceiving an adequate but infrequent prosodic structure does not result in the same brain responses as encountering an inadequate prosodic pattern. While an early negative-going ERP followed by an N400 were observed for both the infrequent and the inadequate syntax-to-prosody association, only the inadequate intonation also elicits a P600.


Subjects
Comprehension/physiology, Auditory Evoked Potentials/physiology, Speech Perception/physiology, Adult, Brain Mapping, Female, Germany, Humans, Male, Psycholinguistics
18.
Front Psychol ; 9: 737, 2018.
Article in English | MEDLINE | ID: mdl-29867690

ABSTRACT

Music and speech both communicate emotional meanings in addition to their domain-specific contents. But it is not clear whether and how the two kinds of emotional meanings are linked. The present study is focused on exploring the emotional connotations of musical timbre of isolated instrument sounds through the perspective of emotional speech prosody. The stimuli were isolated instrument sounds and emotional speech prosody categorized by listeners into anger, happiness and sadness, respectively. We first analyzed the timbral features of the stimuli, which showed that relations between the three emotions were relatively consistent in those features for speech and music. The results further echo the size-code hypothesis in which different sound timbre indicates different body size projections. Then we conducted an ERP experiment using a priming paradigm with isolated instrument sounds as primes and emotional speech prosody as targets. The results showed that emotionally incongruent instrument-speech pairs triggered a larger N400 response than emotionally congruent pairs. Taken together, this is the first study to provide evidence that the timbre of simple and isolated musical instrument sounds can convey emotion in a way similar to emotional speech prosody.

19.
J Neurosci ; 26(34): 8647-52, 2006 Aug 23.
Article in English | MEDLINE | ID: mdl-16928852

ABSTRACT

In the auditory modality, music and speech have high informational and emotional value for human beings. However, the degree of functional specialization of the cortical and subcortical areas encoding music and speech sounds is not yet known. We investigated the functional specialization of the human auditory system in processing music and speech using functional magnetic resonance imaging. During recordings, the subjects were presented with saxophone sounds and pseudowords /ba:ba/ with comparable acoustical content. Our data show that the areas encoding music and speech sounds differ in the temporal and frontal lobes. Moreover, slight variations in sound pitch and duration activated thalamic structures differentially; however, this was the case with speech sounds only, with no such effect evidenced for music sounds. Thus, our data reveal a functional specialization of the human brain in accurately representing sound information in both cortical and subcortical areas. They indicate that not only the sound category (speech/music) but also the sound parameter (pitch/duration) can be selectively encoded.


Subjects
Auditory Perception/physiology, Brain/physiology, Magnetic Resonance Imaging, Music, Nerve Net/physiology, Speech Perception/physiology, Adult, Auditory Cortex/physiology, Discrimination (Psychology), Female, Humans, Male, Phonetics, Pitch Perception/physiology, Thalamus/physiology, Time Perception/physiology
20.
Cognition ; 104(3): 565-90, 2007 Sep.
Article in English | MEDLINE | ID: mdl-16989798

ABSTRACT

Several recent studies have shown that focus structural representations influence syntactic processing during reading, while other studies have shown that implicit prosody plays an important role in the understanding of written language. Up until now, the relationship between these two processes has been mostly disregarded. The present study disentangles the roles of focus structure and accent placement in reading by reporting event-related brain potential (ERP) data on the processing of contrastive ellipses. The results reveal a positive-going waveform (350-1300 ms) that correlates with focus structural processing and a negativity (450-650 ms) interpreted as the correlate of implicit prosodic processing. The results suggest that the assignment of focus as well as accent placement are obligatory processes during reading.


Subjects
Brain/physiology, Evoked Potentials/physiology, Language, Reading, Speech Perception, Humans, Semantics, Speech Production Measurement