Results 1 - 20 of 50
1.
J Neurosci ; 39(50): 10096-10103, 2019 12 11.
Article in English | MEDLINE | ID: mdl-31699888

ABSTRACT

We tested the popular, unproven theory that tinnitus is caused by resetting of auditory predictions toward a persistent low-intensity sound. Electroencephalographic mismatch negativity responses to unattended tinnitus-like sounds, which quantify the violation of sensory predictions, were greater in response to upward than downward intensity deviants in 26 unselected chronic tinnitus subjects with normal to severely impaired hearing, and in 15 acute tinnitus subjects, but not in 26 hearing- and age-matched controls (p < 0.001; receiver operating characteristic area under the curve, 0.77), or in 20 healthy and hearing-impaired controls presented with simulated tinnitus. The findings support a prediction resetting model of tinnitus generation, and may form the basis of a convenient tinnitus biomarker, which we name the Intensity Mismatch Asymmetry: it is usable across species, quick and tolerable, and requires no training.

SIGNIFICANCE STATEMENT In current models, perception is based around the generation of internal predictions of the environment, which are tested and updated using evidence from the senses. Here, we test the theory that auditory phantom perception (tinnitus) occurs when a default auditory prediction is formed to explain spontaneous activity in the subcortical pathway, rather than ignoring it as noise. We find that chronic tinnitus patients show an abnormal pattern of evoked responses to unexpectedly loud and quiet sounds that both supports this hypothesis and provides fairly accurate classification of tinnitus status at the individual subject level. This approach to objectively demonstrating the predictions underlying pathological perceptual states may also have much wider utility, for instance, in chronic pain.
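The reported classifier performance (area under the ROC curve, 0.77) can be illustrated with a minimal sketch. The group sizes, asymmetry scores, and effect size below are hypothetical placeholders rather than the study's data; the AUC is computed via the rank-sum identity AUC = P(score_tinnitus > score_control), counting ties as one half.

```python
import random

def roc_auc(positives, negatives):
    """Area under the ROC curve via the Mann-Whitney identity:
    the probability that a random positive outscores a random negative."""
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

# Hypothetical asymmetry index per subject: MMN(up-deviant) - MMN(down-deviant),
# with the tinnitus group shifted toward larger asymmetries (illustration only).
random.seed(0)
tinnitus = [random.gauss(1.0, 1.0) for _ in range(26)]
controls = [random.gauss(0.0, 1.0) for _ in range(26)]
auc = roc_auc(tinnitus, controls)
print(round(auc, 2))
```

A perfectly separating score would give an AUC of 1.0; chance level is 0.5.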


Subject(s)
Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Hearing Loss/physiopathology, Tinnitus/physiopathology, Acoustic Stimulation, Adult, Aged, Electroencephalography, Female, Humans, Male, Middle Aged
2.
Hum Brain Mapp ; 36(2): 643-54, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25307551

ABSTRACT

A major assumption of brain-machine interface research is that patients with disconnected neural pathways can still volitionally recall precise motor commands that could be decoded for naturalistic prosthetic control. However, the disconnected condition of these patients also blocks kinaesthetic feedback from the periphery, which has been shown to regulate centrally generated output responsible for accurate motor control. Here, we tested how well motor commands are generated in the absence of kinaesthetic feedback by decoding hand movements from human scalp electroencephalography in three conditions: unimpaired movement, imagined movement, and movement attempted during temporary disconnection of peripheral afferent and efferent nerves by ischemic nerve block. Our results suggest that the recall of cortical motor commands is impoverished in the absence of kinaesthetic feedback, challenging the possibility of precise naturalistic cortical prosthetic control.


Subject(s)
Brain/physiology, Feedback, Sensory/physiology, Motor Activity/physiology, Wrist/physiology, Electroencephalography, Humans, Imagination/physiology, Ischemia, Male, Nerve Block, Signal Processing, Computer-Assisted
3.
Q J Exp Psychol (Hove) ; 77(5): 1125-1135, 2024 May.
Article in English | MEDLINE | ID: mdl-37710360

ABSTRACT

In a form priming experiment with a lexical decision task, we investigated whether the representational structure of lexical tone in lexical memory impacts spoken-word recognition in Mandarin. Target monosyllabic words were preceded by five types of primes: (1) the same real words (/lun4/-/lun4/), (2) real words with only tone contrasts (/lun2/-/lun4/), (3) unrelated real words (/pie3/-/lun4/), (4) pseudowords with only tone contrasts (*/lun3/-/lun4/), and (5) unrelated pseudowords (*/tai3/-/lun4/). We found a facilitation effect in target words with pseudoword primes that share the segmental syllable but contrast in tones (*/lun3/-/lun4/). Moreover, no evident form priming effect was observed in target words primed by real words with only tone contrasts (/lun2/-/lun4/). These results suggest that the recognition of a tone word is influenced by the representational level of tone accessed by the prime word. The distinctive priming patterns between real-word and pseudoword primes are best explained by the connectionist models of tone-word recognition, which assume a hierarchical representation of lexical tone.

4.
PLoS One ; 18(8): e0289062, 2023.
Article in English | MEDLINE | ID: mdl-37549154

ABSTRACT

We attempted to replicate a potential tinnitus biomarker in humans based on the Sensory Precision Integrative Model of Tinnitus, called the Intensity Mismatch Asymmetry. The design also included a few refinements, such as tighter matching of participants by gender and a control stimulus frequency of 1 kHz, to investigate whether any differences between control and tinnitus groups are specific to the tinnitus frequency or domain-general. The expectation was that there would be asymmetry in the MMN responses between tinnitus and control groups at the tinnitus frequency, but not at the control frequency: the tinnitus group would have larger, more negative responses to upward deviants than to downward deviants, and the control group would show the opposite pattern or no deviant-direction effect. However, no significant group differences were found. There was a striking difference in response amplitude to control-frequency stimuli compared with tinnitus-frequency stimuli, which could be an intrinsic property of responses at these frequencies or could reflect high-frequency hearing loss in the sample. Additionally, upward deviants elicited stronger MMN responses in both groups at the tinnitus frequency, but not at the control frequency. Factors contributing to these discrepant results at the tinnitus frequency could include hyperacusis, attention, and wider contextual effects of the other frequencies used in the experiment (i.e., the control frequency in other blocks).
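The kind of stimulus design described above, a stream of standards with rare upward and downward intensity deviants, can be sketched as follows. The deviant probability, sound levels, and step size are assumed illustrative values, not the study's actual protocol parameters.

```python
import random

def oddball_sequence(n_trials, deviant_prob=0.1, standard_db=70,
                     deviant_step_db=10, seed=1):
    """Generate a two-direction intensity oddball sequence: mostly standard
    tones, with rare upward (+step) or downward (-step) intensity deviants,
    each direction occurring with probability deviant_prob / 2."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        r = rng.random()
        if r < deviant_prob / 2:
            trials.append(("up", standard_db + deviant_step_db))
        elif r < deviant_prob:
            trials.append(("down", standard_db - deviant_step_db))
        else:
            trials.append(("standard", standard_db))
    return trials

seq = oddball_sequence(1000)
counts = {}
for label, _ in seq:
    counts[label] = counts.get(label, 0) + 1
print(counts)
```

MMN analyses would then average the EEG epochs time-locked to the "up" and "down" trials separately and contrast them with the standards.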


Subject(s)
Evoked Potentials, Auditory, Tinnitus, Humans, Evoked Potentials, Auditory/physiology, Acoustic Stimulation/methods, Electroencephalography/methods, Tinnitus/diagnosis, Attention/physiology
5.
Alzheimers Res Ther ; 14(1): 109, 2022 08 05.
Article in English | MEDLINE | ID: mdl-35932060

ABSTRACT

INTRODUCTION: The differentiation of Lewy body dementia from other common dementia types is clinically difficult, and a considerable number of cases are only identified post-mortem. Consequently, there is a clear need for inexpensive and accurate diagnostic approaches for clinical use. Electroencephalography (EEG) is one potential candidate due to its relatively low cost and non-invasive nature. Previous studies examining the use of EEG as a dementia diagnostic have focussed on the eyes-closed (EC) resting state; however, eyes-open (EO) EEG may also be a useful adjunct to quantitative analysis, given its clinical availability. METHODS: We extracted spectral properties from EEG signals recorded under research study protocols (1024 Hz sampling rate, 10-5 EEG layout). The data stem from a total of 40 dementia patients, with an average age of 74.42, 75.81 and 73.88 years for Alzheimer's disease (AD), dementia with Lewy bodies (DLB) and Parkinson's disease dementia (PDD), respectively, and 15 healthy controls (HC) with an average age of 76.93 years. We used k-nearest neighbour, support vector machine and logistic regression classifiers to differentiate between groups using spectral data from the delta, theta, high theta, alpha and beta EEG bands. RESULTS: We found that combining EC and EO resting-state EEG data significantly increased inter-group classification accuracy compared with methods not using EO data. Secondly, we observed a distinct increase in dominant frequency variance between the EO and EC states for HC, which was not observed within any dementia subgroup. For inter-group classification, we achieved a specificity of 0.87 and sensitivity of 0.92 for HC vs dementia classification, and 0.75 specificity and 0.91 sensitivity for AD vs DLB classification, with a k-nearest neighbour model that outperformed the other machine learning methods.
CONCLUSIONS: The findings of our study indicate that combining EC and EO quantitative EEG features improves overall classification accuracy when classifying dementia types in older adults. In addition, we demonstrate that healthy controls display a definite change in dominant frequency variance between the EC and EO states. In future, a validation cohort should be used to further solidify these findings.
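The classification step described in METHODS, band-power features fed to a k-nearest neighbour model, can be sketched as below. The five-band feature vectors and labels are invented for illustration and are not the study's data; a real pipeline would derive them from EC and EO resting-state spectra.

```python
import math

# Hypothetical band-power feature vectors (delta, theta, high theta, alpha,
# beta) per subject, invented for illustration only.
train = [
    ([0.9, 0.7, 0.4, 0.2, 0.1], "dementia"),
    ([0.8, 0.6, 0.5, 0.3, 0.1], "dementia"),
    ([0.3, 0.3, 0.4, 0.8, 0.4], "control"),
    ([0.2, 0.4, 0.3, 0.9, 0.5], "control"),
]

def knn_predict(x, train, k=3):
    """Classify x by majority vote among its k nearest training points,
    using Euclidean distance over the band-power features."""
    dists = sorted((math.dist(x, feats), label) for feats, label in train)
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

print(knn_predict([0.85, 0.65, 0.45, 0.25, 0.1], train))  # prints "dementia"
```

With so few points this is only a shape sketch; the study's sensitivity/specificity figures come from a proper train/test protocol over real recordings.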


Subject(s)
Alzheimer Disease, Dementia, Lewy Body Disease, Parkinson Disease, Adult, Aged, Alzheimer Disease/diagnosis, Dementia/diagnosis, Electroencephalography/methods, Humans, Lewy Body Disease/diagnosis
6.
Neuroimage ; 54(3): 2267-77, 2011 Feb 01.
Article in English | MEDLINE | ID: mdl-20970510

ABSTRACT

During auditory perception, we are required to abstract information from complex temporal sequences such as those in music and speech. Here, we investigated how higher-order statistics modulate the neural responses to sound sequences, hypothesizing that these modulations are associated with higher levels of the peri-Sylvian auditory hierarchy. We devised second-order Markov sequences of pure tones with uniform first-order transition probabilities. Participants learned to discriminate these sequences from random ones. Magnetoencephalography was used to identify evoked fields in which second-order transition probabilities were encoded. We show that improbable tones evoked heightened neural responses from 200 ms after tone onset during exposure at the learning stage, and from around 150 ms during the subsequent test stage, originating near the right temporoparietal junction. These signal changes reflected higher-order statistical learning, which can contribute to the perception of natural sounds with hierarchical structures. We propose that our results reflect hierarchical predictive representations, which can contribute to the experiences of speech and music.
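The stimulus construction, second-order Markov sequences in which the next tone depends on the previous *pair* of tones, can be sketched as below. The three-tone alphabet and the 0.8/0.1 transition rule are invented for illustration; balancing the table so that first-order transition probabilities are exactly uniform, as in the study, would need additional constraints not shown here.

```python
import random

tones = ["A", "B", "C"]

def second_order_step(prev2, prev1, rng):
    """Draw the next tone given the previous pair (second-order structure):
    one continuation is highly probable (p = 0.8), the others rare.
    Illustrative rule, not the study's actual transition table."""
    likely = tones[(tones.index(prev2) + tones.index(prev1)) % len(tones)]
    others = [t for t in tones if t != likely]
    return likely if rng.random() < 0.8 else rng.choice(others)

def generate(n, seed=0):
    """Generate an n-tone sequence from the second-order rule above."""
    rng = random.Random(seed)
    seq = [rng.choice(tones), rng.choice(tones)]
    while len(seq) < n:
        seq.append(second_order_step(seq[-2], seq[-1], rng))
    return seq

seq = generate(500)
print(seq[:10])
```

"Improbable tones" in the study then correspond to the rare (p = 0.1) continuations, which can only be detected by a listener who has learned the pairwise context.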


Subject(s)
Auditory Perception/physiology, Acoustic Stimulation, Auditory Cortex/physiology, Discrimination Learning/physiology, Evoked Potentials, Auditory/physiology, Forecasting, Humans, Learning/physiology, Magnetoencephalography, Markov Chains, Parietal Lobe/physiology, Temporal Lobe/physiology
7.
Cogn Emot ; 25(4): 599-611, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21547763

ABSTRACT

Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlation of 43 acoustic parameters with individual emotional dimensions revealed that each emotional dimension was associated with a number of vocal cues. Common patterns of cues were found with emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.


Subject(s)
Cues, Emotions, Laughter/psychology, Speech Perception, Acoustic Stimulation, Adult, Arousal, Female, Humans, Male, Neuropsychological Tests
8.
Neuroimage ; 53(4): 1264-71, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20600991

ABSTRACT

Laughter is highly relevant for social interaction in human beings and non-human primates. In humans as well as in non-human primates, laughter can be induced by tickling. Human laughter, however, has further diversified and encompasses emotional laughter types with various communicative functions, e.g. joyful and taunting laughter. Here, we evaluated whether this evolutionary diversification of ecological functions is associated with distinct cerebral responses underlying laughter perception. Functional MRI revealed a double dissociation of cerebral responses during perception of tickling laughter and emotional laughter (joy and taunt), with higher activations in the anterior rostral medial frontal cortex (arMFC) when emotional laughter was perceived, and stronger responses in the right superior temporal gyrus (STG) during appreciation of tickling laughter. Enhanced activation of the arMFC for emotional laughter presumably reflects increasing demands on social cognition processes arising from the greater social salience of these laughter types. The activation increase in the STG for tickling laughter may be linked to the higher acoustic complexity of this laughter type. The observed dissociation of cerebral responses for emotional and tickling laughter was independent of task-directed focusing of attention. These findings support the postulated diversification of human laughter in the course of evolution from an unequivocal play signal to laughter with distinct emotional contents subserving complex social functions.


Subject(s)
Auditory Perception/physiology, Brain Mapping, Brain/physiology, Laughter/physiology, Adult, Brain/anatomy & histology, Emotions/physiology, Female, Humans, Image Interpretation, Computer-Assisted, Magnetic Resonance Imaging, Male
9.
Elife ; 9, 2020 02 12.
Article in English | MEDLINE | ID: mdl-32048994

ABSTRACT

MRI experiments have revealed how throat singers from Tuva produce their characteristic sound.


Subject(s)
Singing, Pharynx, Sound, Speech Acoustics
10.
Emotion ; 9(3): 397-405, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19485617

ABSTRACT

Although laughter is important in human social interaction, its role as a communicative signal is poorly understood. Because laughter is expressed in various emotional contexts, the question arises as to whether different emotions are communicated. In the present study, participants had to appraise 4 types of laughter sounds (joy, tickling, taunting, schadenfreude) either by classifying them according to the underlying emotion or by rating them according to different emotional dimensions. The authors found that emotions in laughter (a) can be classified into different emotional categories, and (b) can have distinctive profiles on W. Wundt's (1905) emotional dimensions. This shows that laughter is a multifaceted social behavior that can adopt various emotional connotations. The findings support the postulated function of laughter in establishing group structure, whereby laughter is used either to include or to exclude individuals from group coherence.


Subject(s)
Emotions/physiology, Laughter/physiology, Social Behavior, Humans, Interpersonal Relations, Nonverbal Communication
11.
Cereb Cortex ; 18(3): 541-52, 2008 Mar.
Article in English | MEDLINE | ID: mdl-17591598

ABSTRACT

Speech contains prosodic cues such as pauses between different phrases of a sentence. These intonational phrase boundaries (IPBs) elicit a specific component in event-related brain potential studies, the so-called closure positive shift. The aim of the present functional magnetic resonance imaging study was to identify the neural correlates of this prosody-related component in sentences containing both segmental and prosodic information (natural speech) and in hummed sentences containing only prosodic information. Sentences with 2 IPBs, both in normal and hummed speech, activated the middle superior temporal gyrus, the rolandic operculum, and Heschl's gyrus more strongly than sentences with 1 IPB. The results of a region-of-interest analysis of auditory cortex and auditory association areas suggest that the posterior rolandic operculum, in particular, supports the processing of prosodic information. A comparison of natural speech and hummed sentences revealed a number of left-hemispheric areas within the temporal lobe, as well as in the frontal and parietal lobes, that were activated more strongly for natural speech than for hummed sentences. These areas constitute the neural network for the processing of natural speech. The finding that no area was activated more strongly for hummed sentences than for natural speech suggests that prosody is an integrated part of natural speech.


Subject(s)
Acoustic Stimulation/methods, Magnetic Resonance Imaging/methods, Speech Perception/physiology, Speech/physiology, Adult, Auditory Cortex/physiology, Cues, Female, Humans, Male
12.
J Acoust Soc Am ; 126(1): 354-66, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19603892

ABSTRACT

Although listeners are able to decode the underlying emotions embedded in acoustical laughter sounds, little is known about the acoustical cues that differentiate between the emotions. This study investigated the acoustical correlates of laughter expressing four different emotions: joy, tickling, taunting, and schadenfreude. Analysis of 43 acoustic parameters showed that the four emotions could be accurately discriminated on the basis of a small parameter set. Vowel quality contributed only minimally to emotional differentiation whereas prosodic parameters were more effective. Emotions are expressed by similar prosodic parameters in both laughter and speech.


Subject(s)
Emotions, Laughter, Acoustics, Analysis of Variance, Female, Happiness, Humans, Male, Phonetics, Sound Spectrography, Young Adult
13.
Elife ; 8, 2019 04 08.
Article in English | MEDLINE | ID: mdl-30958267

ABSTRACT

What determines how we move in the world? Motor neuroscience often focusses either on intrinsic rhythmical properties of motor circuits or extrinsic sensorimotor feedback loops. Here we show that the interplay of both intrinsic and extrinsic dynamics is required to explain the intermittency observed in continuous tracking movements. Using spatiotemporal perturbations in humans, we demonstrate that apparently discrete submovements made 2-3 times per second reflect constructive interference between motor errors and continuous feedback corrections that are filtered by intrinsic circuitry in the motor system. Local field potentials in monkey motor cortex revealed characteristic signatures of a Kalman filter, giving rise to both low-frequency cortical cycles during movement, and delta oscillations during sleep. We interpret these results within the framework of optimal feedback control, and suggest that the intrinsic rhythmicity of motor cortical networks reflects an internal model of external dynamics, which is used for state estimation during feedback-guided movement. Editorial note: This article has been through an editorial process in which the authors decide how to respond to the issues raised during peer review. The Reviewing Editor's assessment is that all the issues have been addressed (see decision letter).
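The Kalman-filter signature mentioned above can be made concrete with a minimal scalar example: the standard predict/update recursion blending an internal model's prediction with noisy observations. This is an illustrative sketch of the filter itself, not the monkey LFP analysis; the dynamics coefficient and noise variances are assumed values.

```python
import random

def kalman_1d(observations, a=0.95, q=0.1, r=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + process noise (variance q),
    observed as y_t = x_t + measurement noise (variance r). Returns the
    filtered state estimates."""
    x_hat, p = 0.0, 1.0  # initial state estimate and its variance
    estimates = []
    for y in observations:
        # Predict: propagate the estimate through the internal model.
        x_pred = a * x_hat
        p_pred = a * a * p + q
        # Update: correct the prediction with the new observation,
        # weighted by the Kalman gain.
        k = p_pred / (p_pred + r)
        x_hat = x_pred + k * (y - x_pred)
        p = (1 - k) * p_pred
        estimates.append(x_hat)
    return estimates

# Track a constant latent value through noisy observations.
random.seed(2)
truth = 5.0
noisy = [truth + random.gauss(0.0, 1.0) for _ in range(200)]
est = kalman_1d(noisy, a=1.0, q=0.01, r=1.0)
```

The interplay the authors describe maps onto this recursion: the prediction step is the intrinsic internal model, and the gain-weighted correction is the extrinsic feedback loop.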


Subject(s)
Motor Activity, Motor Cortex/physiology, Movement, Nerve Net/physiology, Adult, Animals, Female, Humans, Macaca mulatta, Male, Models, Neurological, Young Adult
14.
Front Psychol ; 10: 681, 2019.
Article in English | MEDLINE | ID: mdl-30984081

ABSTRACT

Two outstanding questions in spoken-language comprehension concern (1) the interplay of phonological grammar (legal vs. illegal sound sequences), phonotactic frequency (high- vs. low-frequency sound sequences) and lexicality (words vs. other sound sequences) in a meaningful context, and (2) how the properties of phonological sequences determine their inclusion or exclusion from lexical-semantic processing. In the present study, we used a picture-sound priming paradigm to examine the ERP responses of adult listeners to grammatically illegal sound sequences, to grammatically legal sound sequences (pseudowords) with low- vs. high-frequency, and to real words that were either congruent or incongruent to the picture context. Results showed less negative N1-P2 responses for illegal sequences and low-frequency pseudowords (with differences in topography), but not high-frequency ones. Low-frequency pseudowords also showed an increased P3 component. However, just like illegal sequences, neither low- nor high-frequency pseudowords differed from congruent words in the N400. Thus, phonotactic frequency had an impact before, but not during lexical-semantic processing. Our results also suggest that phonological grammar, phonotactic frequency and lexicality may follow each other in this order during word processing.

15.
Brain Res ; 1220: 179-90, 2008 Jul 18.
Article in English | MEDLINE | ID: mdl-18096139

ABSTRACT

In the present study we investigated the functional organization of sublexical auditory perception with specific respect to auditory spectro-temporal processing in speech and non-speech sounds. Participants discriminated verbal and nonverbal auditory stimuli according to either spectral or temporal acoustic features in the context of a sparse event-related functional magnetic resonance imaging (fMRI) study. Based on recent models of speech processing, we hypothesized that auditory segmental processing, as is required in the discrimination of speech and non-speech sound according to its temporal features, will lead to a specific involvement of a left-hemispheric dorsal processing network comprising the posterior portion of the inferior frontal cortex and the inferior parietal lobe. In agreement with our hypothesis results revealed significant responses in the posterior part of the inferior frontal gyrus and the parietal operculum of the left hemisphere when participants had to discriminate speech and non-speech stimuli based on subtle temporal acoustic features. In contrast, when participants had to discriminate speech and non-speech stimuli on the basis of changes in the frequency content, we observed bilateral activations along the middle temporal gyrus and superior temporal sulcus. The results of the present study demonstrate an involvement of the dorsal pathway in the segmental sublexical analysis of speech sounds as well as in the segmental acoustic analysis of non-speech sounds with analogous spectro-temporal characteristics.


Subject(s)
Auditory Cortex/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Brain Mapping, Speech Perception/physiology, Acoustic Stimulation/methods, Analysis of Variance, Auditory Cortex/blood supply, Auditory Pathways/blood supply, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging/methods, Oxygen/blood, Psycholinguistics, Speech/physiology
16.
Schizophr Bull ; 34(5): 962-73, 2008 Sep.
Article in English | MEDLINE | ID: mdl-18544550

ABSTRACT

Cognitive disruption in schizophrenia is associated with altered patterns of spatiotemporal interaction across multiple electroencephalogram (EEG) frequency bands in cortex. In particular, changes in the generation of gamma (30-80 Hz) and beta2 (20-29 Hz) rhythms correlate with observed deficits in communication between different cortical areas. Aspects of these changes can be reproduced in animal models, most notably those involving acute or chronic reduction in glutamatergic synaptic communication mediated by N-methyl-D-aspartate (NMDA) receptors. In vitro electrophysiological and immunocytochemical approaches afforded by such animal models continue to reveal a great deal about the mechanisms underlying EEG rhythm generation and are beginning to uncover which basic molecular, cellular, and network phenomena may underlie their disruption in schizophrenia. Here we briefly review the evidence for changes in gamma-aminobutyric acidergic (GABAergic) and glutamatergic function, and address the problem of region specificity of changes with quantitative comparisons of the effects of ketamine on gamma and beta2 rhythms in vitro. We conclude, from the available evidence, that many observed changes in markers of GABAergic function in schizophrenia may be secondary to deficits in NMDA receptor-mediated excitatory synaptic activity. Furthermore, the broad range of changes in cortical dynamics seen in schizophrenia, with contrasting effects in different brain regions and for different frequency bands, may be more directly attributable to underlying deficits in glutamatergic neuronal communication than to GABAergic inhibition alone.


Subject(s)
Electroencephalography, Receptors, N-Methyl-D-Aspartate/physiology, Schizophrenia/diagnosis, Schizophrenia/physiopathology, Humans, Receptors, GABA-A/physiology, Signal Transduction
17.
Brain Lang ; 104(2): 159-69, 2008 Feb.
Article in English | MEDLINE | ID: mdl-17428526

ABSTRACT

The current study on German investigates Event-Related brain Potentials (ERPs) for the perception of sentences with intonations which are infrequent (i.e. vocatives) or inadequate in daily conversation. These ERPs are compared to the processing correlates for sentences in which the syntax-to-prosody relations are congruent and used frequently during communication. Results show that perceiving an adequate but infrequent prosodic structure does not result in the same brain responses as encountering an inadequate prosodic pattern. While an early negative-going ERP followed by an N400 were observed for both the infrequent and the inadequate syntax-to-prosody association, only the inadequate intonation also elicits a P600.


Subject(s)
Comprehension/physiology, Evoked Potentials, Auditory/physiology, Speech Perception/physiology, Adult, Brain Mapping, Female, Germany, Humans, Male, Psycholinguistics
18.
Front Psychol ; 9: 737, 2018.
Article in English | MEDLINE | ID: mdl-29867690

ABSTRACT

Music and speech both communicate emotional meanings in addition to their domain-specific contents. But it is not clear whether and how the two kinds of emotional meanings are linked. The present study explores the emotional connotations of the musical timbre of isolated instrument sounds from the perspective of emotional speech prosody. The stimuli were isolated instrument sounds and emotional speech prosody categorized by listeners as expressing anger, happiness, or sadness. We first analyzed the timbral features of the stimuli, which showed that the relations between the three emotions were relatively consistent across those features for speech and music. The results further echo the size-code hypothesis, in which different sound timbres indicate different projected body sizes. We then conducted an ERP experiment using a priming paradigm, with isolated instrument sounds as primes and emotional speech prosody as targets. The results showed that emotionally incongruent instrument-speech pairs triggered a larger N400 response than emotionally congruent pairs. Taken together, this is the first study to provide evidence that the timbre of simple, isolated musical instrument sounds can convey emotion in a way similar to emotional speech prosody.

19.
J Neurosci ; 26(34): 8647-52, 2006 Aug 23.
Article in English | MEDLINE | ID: mdl-16928852

ABSTRACT

In the auditory modality, music and speech have high informational and emotional value for human beings. However, the degree of functional specialization of the cortical and subcortical areas encoding music and speech sounds is not yet known. We investigated the functional specialization of the human auditory system in processing music and speech using functional magnetic resonance imaging. During recordings, the subjects were presented with saxophone sounds and pseudowords /ba:ba/ with comparable acoustical content. Our data show that the areas encoding music and speech sounds differ in the temporal and frontal lobes. Moreover, slight variations in sound pitch and duration activated thalamic structures differentially; however, this was the case for speech sounds only, with no such effect evident for music sounds. Thus, our data reveal a functional specialization of the human brain in accurately representing sound information in both cortical and subcortical areas. They indicate that not only the sound category (speech/music) but also the sound parameter (pitch/duration) can be selectively encoded.


Subject(s)
Auditory Perception/physiology, Brain/physiology, Magnetic Resonance Imaging, Music, Nerve Net/physiology, Speech Perception/physiology, Adult, Auditory Cortex/physiology, Discrimination, Psychology, Female, Humans, Male, Phonetics, Pitch Perception/physiology, Thalamus/physiology, Time Perception/physiology
20.
Cognition ; 104(3): 565-90, 2007 Sep.
Article in English | MEDLINE | ID: mdl-16989798

ABSTRACT

Several recent studies have shown that focus structural representations influence syntactic processing during reading, while other studies have shown that implicit prosody plays an important role in the understanding of written language. Up until now, the relationship between these two processes has been mostly disregarded. The present study disentangles the roles of focus structure and accent placement in reading by reporting event-related brain potential (ERP) data on the processing of contrastive ellipses. The results reveal a positive-going waveform (350-1300 ms) that correlates with focus structural processing and a negativity (450-650 ms) interpreted as the correlate of implicit prosodic processing. The results suggest that the assignment of focus as well as accent placement are obligatory processes during reading.


Subject(s)
Brain/physiology, Evoked Potentials/physiology, Language, Reading, Speech Perception, Humans, Semantics, Speech Production Measurement