Results 1 - 20 of 50
1.
Q J Exp Psychol (Hove) ; 77(5): 1125-1135, 2024 May.
Article in English | MEDLINE | ID: mdl-37710360

ABSTRACT

In a form priming experiment with a lexical decision task, we investigated whether the representational structure of lexical tone in lexical memory impacts spoken-word recognition in Mandarin. Target monosyllabic words were preceded by five types of primes: (1) the same real words (/lun4/-/lun4/), (2) real words with only tone contrasts (/lun2/-/lun4/), (3) unrelated real words (/pie3/-/lun4/), (4) pseudowords with only tone contrasts (*/lun3/-/lun4/), and (5) unrelated pseudowords (*/tai3/-/lun4/). We found a facilitation effect in target words with pseudoword primes that share the segmental syllable but contrast in tones (*/lun3/-/lun4/). Moreover, no evident form priming effect was observed in target words primed by real words with only tone contrasts (/lun2/-/lun4/). These results suggest that the recognition of a tone word is influenced by the representational level of tone accessed by the prime word. The distinctive priming patterns between real-word and pseudoword primes are best explained by the connectionist models of tone-word recognition, which assume a hierarchical representation of lexical tone.

2.
PLoS One ; 18(8): e0289062, 2023.
Article in English | MEDLINE | ID: mdl-37549154

ABSTRACT

We attempted to replicate a potential human tinnitus biomarker based on the Sensory Precision Integrative Model of Tinnitus, the Intensity Mismatch Asymmetry. Several design refinements were also introduced, including tighter matching of participants for gender and a control stimulus frequency of 1 kHz, to investigate whether any differences between the control and tinnitus groups are specific to the tinnitus frequency or are domain-general. The expectation was that the mismatch negativity (MMN) responses of the tinnitus and control groups would differ at the tinnitus frequency but not at the control frequency: the tinnitus group would have larger, more negative responses to upward deviants than to downward deviants, whereas the control group would show the opposite pattern or no deviant-direction effect. However, no significant group differences were found. There was a striking difference in response amplitude to control-frequency stimuli compared with tinnitus-frequency stimuli, which could be an intrinsic property of responses at these frequencies or could reflect high-frequency hearing loss in the sample. Additionally, upward deviants elicited stronger MMN responses in both groups at the tinnitus frequency, but not at the control frequency. Factors contributing to these discrepant results at the tinnitus frequency could include hyperacusis, attention, and wider contextual effects of the other frequencies used in the experiment (i.e., the control frequency in other blocks).


Subject(s)
Evoked Potentials, Auditory; Tinnitus; Humans; Evoked Potentials, Auditory/physiology; Acoustic Stimulation/methods; Electroencephalography/methods; Tinnitus/diagnosis; Attention/physiology
3.
Alzheimers Res Ther ; 14(1): 109, 2022 08 05.
Article in English | MEDLINE | ID: mdl-35932060

ABSTRACT

INTRODUCTION: Differentiating Lewy body dementia from other common dementia types clinically is difficult, and a considerable number of cases are only identified post-mortem. Consequently, there is a clear need for inexpensive and accurate diagnostic approaches for clinical use. Electroencephalography (EEG) is one potential candidate owing to its relatively low cost and non-invasive nature. Previous studies examining the use of EEG as a dementia diagnostic have focussed on the eyes-closed (EC) resting state; however, eyes-open (EO) EEG may also be a useful adjunct to quantitative analysis due to its clinical availability. METHODS: We extracted spectral properties from EEG signals recorded under research study protocols (1024 Hz sampling rate, 10:5 EEG layout). The data stem from a total of 40 dementia patients, with an average age of 74.42, 75.81 and 73.88 years for Alzheimer's disease (AD), dementia with Lewy bodies (DLB) and Parkinson's disease dementia (PDD), respectively, and 15 healthy controls (HC) with an average age of 76.93 years. We used k-nearest neighbour, support vector machine and logistic regression machine learning models to differentiate between groups using spectral data from the delta, theta, high-theta, alpha and beta EEG bands. RESULTS: We found that combining EC and EO resting-state EEG data significantly increased inter-group classification accuracy compared with methods not using EO data. Secondly, we observed a distinct increase in dominant-frequency variance between the EO and EC states for HC, which was not observed within any dementia subgroup. For inter-group classification, a k-nearest neighbour model, which outperformed the other machine learning methods, achieved a specificity of 0.87 and a sensitivity of 0.92 for HC vs dementia classification, and a specificity of 0.75 and a sensitivity of 0.91 for AD vs DLB classification.
CONCLUSIONS: Our findings indicate that combining EC and EO quantitative EEG features improves overall classification accuracy when classifying dementia types in older adults. In addition, we demonstrate that healthy controls display a definite change in dominant-frequency variance between the EC and EO states. In future, a validation cohort should be used to further solidify these findings.
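The band-power-plus-classifier pipeline described in the METHODS can be sketched as follows. This is a toy illustration on synthetic single-channel data, not the study's actual pipeline: the band edges (in particular the theta/high-theta split), the epoch length, and the labels are assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical band edges; the study's exact theta/high-theta split is an assumption.
BANDS = {"delta": (1, 4), "theta": (4, 6), "high_theta": (6, 8),
         "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=1024):
    """Mean spectral power per band for one channel (eeg: 1-D array)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return [psd[(freqs >= lo) & (freqs < hi)].mean()
            for lo, hi in BANDS.values()]

# Toy data: 10 "recordings" of 4 s each, with random group labels.
rng = np.random.default_rng(0)
X = np.array([band_powers(rng.standard_normal(4096)) for _ in range(10)])
y = rng.integers(0, 2, size=10)  # 0 = control, 1 = dementia (toy labels)

# k-nearest-neighbour classification on the five band-power features.
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
pred = clf.predict(X)
```

In the study's combined condition, features from both EC and EO recordings would be concatenated per subject before classification.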


Subject(s)
Alzheimer Disease; Dementia; Lewy Body Disease; Parkinson Disease; Adult; Aged; Alzheimer Disease/diagnosis; Dementia/diagnosis; Electroencephalography/methods; Humans; Lewy Body Disease/diagnosis
4.
Elife ; 9, 2020 02 12.
Article in English | MEDLINE | ID: mdl-32048994

ABSTRACT

MRI experiments have revealed how throat singers from Tuva produce their characteristic sound.


Subject(s)
Singing; Pharynx; Sound; Speech Acoustics
5.
J Neurosci ; 39(50): 10096-10103, 2019 12 11.
Article in English | MEDLINE | ID: mdl-31699888

ABSTRACT

We tested the popular but unproven theory that tinnitus is caused by a resetting of auditory predictions toward a persistent low-intensity sound. Electroencephalographic mismatch negativity responses, which quantify the violation of sensory predictions, to unattended tinnitus-like sounds were greater in response to upward than to downward intensity deviants in 26 unselected chronic tinnitus subjects with normal to severely impaired hearing, and in 15 acute tinnitus subjects, but not in 26 hearing- and age-matched controls (p < 0.001; receiver operating characteristic area under the curve, 0.77), or in 20 healthy and hearing-impaired controls presented with simulated tinnitus. The findings support a prediction-resetting model of tinnitus generation and may form the basis of a convenient tinnitus biomarker, which we name the Intensity Mismatch Asymmetry: it is usable across species, quick and tolerable, and requires no training.

SIGNIFICANCE STATEMENT: In current models, perception is based on the generation of internal predictions of the environment, which are tested and updated using evidence from the senses. Here, we test the theory that auditory phantom perception (tinnitus) occurs when a default auditory prediction is formed to explain spontaneous activity in the subcortical pathway, rather than ignoring it as noise. We find that chronic tinnitus patients show an abnormal pattern of evoked responses to unexpectedly loud and quiet sounds that both supports this hypothesis and provides fairly accurate classification of tinnitus status at the individual-subject level. This approach to objectively demonstrating the predictions underlying pathological perceptual states may also have much wider utility, for instance in chronic pain.


Subject(s)
Auditory Perception/physiology; Evoked Potentials, Auditory/physiology; Hearing Loss/physiopathology; Tinnitus/physiopathology; Acoustic Stimulation; Adult; Aged; Electroencephalography; Female; Humans; Male; Middle Aged
6.
Elife ; 8, 2019 04 08.
Article in English | MEDLINE | ID: mdl-30958267

ABSTRACT

What determines how we move in the world? Motor neuroscience often focusses either on intrinsic rhythmical properties of motor circuits or extrinsic sensorimotor feedback loops. Here we show that the interplay of both intrinsic and extrinsic dynamics is required to explain the intermittency observed in continuous tracking movements. Using spatiotemporal perturbations in humans, we demonstrate that apparently discrete submovements made 2-3 times per second reflect constructive interference between motor errors and continuous feedback corrections that are filtered by intrinsic circuitry in the motor system. Local field potentials in monkey motor cortex revealed characteristic signatures of a Kalman filter, giving rise to both low-frequency cortical cycles during movement, and delta oscillations during sleep. We interpret these results within the framework of optimal feedback control, and suggest that the intrinsic rhythmicity of motor cortical networks reflects an internal model of external dynamics, which is used for state estimation during feedback-guided movement. Editorial note: This article has been through an editorial process in which the authors decide how to respond to the issues raised during peer review. The Reviewing Editor's assessment is that all the issues have been addressed (see decision letter).


Asunto(s)
Actividad Motora , Corteza Motora/fisiología , Movimiento , Red Nerviosa/fisiología , Adulto , Animales , Femenino , Humanos , Macaca mulatta , Masculino , Modelos Neurológicos , Adulto Joven
7.
Front Psychol ; 10: 681, 2019.
Article in English | MEDLINE | ID: mdl-30984081

ABSTRACT

Two outstanding questions in spoken-language comprehension concern (1) the interplay of phonological grammar (legal vs. illegal sound sequences), phonotactic frequency (high- vs. low-frequency sound sequences) and lexicality (words vs. other sound sequences) in a meaningful context, and (2) how the properties of phonological sequences determine their inclusion in or exclusion from lexical-semantic processing. In the present study, we used a picture-sound priming paradigm to examine the ERP responses of adult listeners to grammatically illegal sound sequences, to grammatically legal sound sequences (pseudowords) of low vs. high phonotactic frequency, and to real words that were either congruent or incongruent with the picture context. Results showed less negative N1-P2 responses for illegal sequences and low-frequency pseudowords (with differences in topography), but not for high-frequency ones. Low-frequency pseudowords also showed an increased P3 component. However, just like illegal sequences, neither low- nor high-frequency pseudowords differed from congruent words in the N400. Thus, phonotactic frequency had an impact before, but not during, lexical-semantic processing. Our results also suggest that phonological grammar, phonotactic frequency and lexicality may follow each other in this order during word processing.

8.
Front Psychol ; 9: 737, 2018.
Article in English | MEDLINE | ID: mdl-29867690

ABSTRACT

Music and speech both communicate emotional meanings in addition to their domain-specific contents, but it is not clear whether and how the two kinds of emotional meanings are linked. The present study explores the emotional connotations of the musical timbre of isolated instrument sounds through the perspective of emotional speech prosody. The stimuli were isolated instrument sounds and emotional speech prosody, each categorized by listeners as expressing anger, happiness or sadness. We first analyzed the timbral features of the stimuli, which showed that the relations between the three emotions were relatively consistent across those features for speech and music. The results further echo the size-code hypothesis, in which different sound timbres indicate different body-size projections. We then conducted an ERP experiment using a priming paradigm with isolated instrument sounds as primes and emotional speech prosody as targets. The results showed that emotionally incongruent instrument-speech pairs triggered a larger N400 response than emotionally congruent pairs. Taken together, this is the first study to provide evidence that the timbre of simple, isolated musical instrument sounds can convey emotion in a way similar to emotional speech prosody.

9.
Brain Lang ; 148: 74-80, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25529405

ABSTRACT

Electroencephalography (EEG) has identified human brain potentials elicited by Artificial Grammar (AG) learning paradigms, which present participants with rule-based sequences of stimuli. Nonhuman animals are sensitive to certain AGs; therefore, evaluating which EEG Event Related Potentials (ERPs) are associated with AG learning in nonhuman animals could identify evolutionarily conserved processes. We recorded EEG potentials during an auditory AG learning experiment in two Rhesus macaques. The animals were first exposed to sequences of nonsense words generated by the AG. Then surface-based ERPs were recorded in response to sequences that were 'consistent' with the AG and 'violation' sequences containing illegal transitions. The AG violations strongly modulated an early component, potentially homologous to the Mismatch Negativity (mMMN), a P200 and a late frontal positivity (P500). The macaque P500 is similar in polarity and time of occurrence to a late EEG positivity reported in human AG learning studies but might differ in functional role.
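As a concrete illustration of the paradigm, an artificial grammar can be implemented as a table of legal transitions between nonsense-word labels: 'consistent' sequences follow the table, while 'violation' sequences contain an illegal transition. The grammar below is invented for this sketch and is not the one used in the study.

```python
import random

# Hypothetical artificial grammar: a map from each state to its legal successors.
# "START" and "END" are sentinel states; the other labels stand in for nonsense words.
LEGAL = {
    "START": ["A", "D"],
    "A": ["C", "F"],
    "C": ["F", "G"],
    "D": ["C"],
    "F": ["C", "G", "END"],
    "G": ["F", "END"],
}

def generate_sequence(rng=None):
    """Random walk through the grammar from START to END; returns the word labels."""
    rng = rng or random.Random(0)
    seq, state = [], "START"
    while True:
        state = rng.choice(LEGAL[state])
        if state == "END":
            return seq
        seq.append(state)

def is_consistent(seq):
    """True if every transition (including start and end) is legal under the grammar."""
    states = ["START"] + list(seq)
    for prev, cur in zip(states, states[1:]):
        if cur not in LEGAL.get(prev, ()):
            return False
    return "END" in LEGAL[states[-1]]

consistent = generate_sequence()  # a grammar-consistent sequence
```

Violation stimuli would then be built by swapping in a transition absent from the table, e.g. ["D", "F"], which `is_consistent` rejects.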


Subject(s)
Brain/physiology; Electroencephalography; Evoked Potentials/physiology; Learning/physiology; Linguistics; Macaca mulatta/physiology; Acoustic Stimulation; Animals; Brain Mapping; Female; Humans; Male; Speech Perception/physiology
10.
Hum Brain Mapp ; 36(2): 643-54, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25307551

ABSTRACT

A major assumption of brain-machine interface research is that patients with disconnected neural pathways can still volitionally recall precise motor commands that could be decoded for naturalistic prosthetic control. However, the disconnected condition of these patients also blocks kinaesthetic feedback from the periphery, which has been shown to regulate centrally generated output responsible for accurate motor control. Here, we tested how well motor commands are generated in the absence of kinaesthetic feedback by decoding hand movements from human scalp electroencephalography in three conditions: unimpaired movement, imagined movement, and movement attempted during temporary disconnection of peripheral afferent and efferent nerves by ischemic nerve block. Our results suggest that the recall of cortical motor commands is impoverished in the absence of kinaesthetic feedback, challenging the possibility of precise naturalistic cortical prosthetic control.


Subject(s)
Brain/physiology; Feedback, Sensory/physiology; Motor Activity/physiology; Wrist/physiology; Electroencephalography; Humans; Imagination/physiology; Ischemia; Male; Nerve Block; Signal Processing, Computer-Assisted
11.
Brain Lang ; 139: 10-22, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25463813

ABSTRACT

Although several experiments reported rapid cortical plasticity induced by passive exposure to novel segmental patterns, few studies have devoted attention to the neural dynamics during the rapid learning of novel tonal word-forms in tonal languages, such as Chinese. In the current study, native speakers of Mandarin Chinese were exposed to acoustically matched real and novel segment-tone patterns. By recording their Mismatch Negativity (MMN) responses (an ERP indicator of long-term memory traces for spoken words), we found enhanced MMNs to the novel word-forms over the left-hemispheric region in the late exposure phase relative to the early exposure phase. In contrast, no significant changes were identified in MMN responses to the real word during familiarisation. Our results suggest a rapid Hebbian learning mechanism in the human neocortex which develops long-term memory traces for a novel segment-tone pattern by establishing new associations between the segmental and tonal representations.


Subject(s)
Language; Learning/physiology; Neocortex/physiology; Adult; China; Evoked Potentials/physiology; Female; Humans; Memory, Long-Term/physiology; Speech/physiology
12.
Brain Lang ; 136: 19-30, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25113242

ABSTRACT

This study investigates the influence of rhythmic expectancies on language processing. It is assumed that language rhythm involves an alternation of strong and weak beats within a linguistic domain. Hence, in some contexts rhythmically induced stress shifts occur in order to comply with the Rhythm Rule. In English, this rule operates to prevent clashes of stressed adjacent syllables or lapses of adjacent unstressed syllables. While previous studies investigated effects on speech production and perception, this study focuses on brain responses to structures either obeying or deviating from this rule. Event-related potentials show that rhythmic regularity is relevant for language processing: rhythmic deviations evoked different ERP components reflecting the deviance from rhythmic expectancies. An N400 effect found for shifted items reflects higher costs in lexical processing due to stress deviation. The overall results disentangle lexical and rhythmical influences on language processing and complement the findings of previous studies on rhythmical processing.


Subject(s)
Evoked Potentials/physiology; Phonetics; Semantics; Speech Acoustics; Speech Perception/physiology; Adult; Brain/physiology; Electroencephalography/methods; Female; Humans; Language; Male; Periodicity; Young Adult
13.
PLoS One ; 8(5): e63441, 2013.
Article in English | MEDLINE | ID: mdl-23667619

ABSTRACT

Laughter is an ancient signal of social communication among humans and non-human primates. Laughter types with complex social functions (e.g., taunt and joy) presumably evolved from the unequivocal and reflex-like social bonding signal of tickling laughter already present in non-human primates. Here, we investigated the modulations of cerebral connectivity associated with different laughter types as well as the effects of attention shifts between implicit and explicit processing of social information conveyed by laughter using functional magnetic resonance imaging (fMRI). Complex social laughter types and tickling laughter were found to modulate connectivity in two distinguishable but partially overlapping parts of the laughter perception network irrespective of task instructions. Connectivity changes, presumably related to the higher acoustic complexity of tickling laughter, occurred between areas in the prefrontal cortex and the auditory association cortex, potentially reflecting higher demands on acoustic analysis associated with increased information load on auditory attention, working memory, evaluation and response selection processes. In contrast, the higher degree of socio-relational information in complex social laughter types was linked to increases of connectivity between auditory association cortices, the right dorsolateral prefrontal cortex and brain areas associated with mentalizing as well as areas in the visual associative cortex. These modulations might reflect automatic analysis of acoustic features, attention direction to informative aspects of the laughter signal and the retention of those in working memory during evaluation processes. These processes may be associated with visual imagery supporting the formation of inferences on the intentions of our social counterparts. Here, the right dorsolateral prefrontal cortex appears as a network node potentially linking the functions of auditory and visual associative sensory cortices with those of the mentalizing-associated anterior mediofrontal cortex during the decoding of social information in laughter.


Subject(s)
Auditory Cortex/metabolism; Laughter/physiology; Mental Processes/physiology; Nerve Net/physiology; Prefrontal Cortex/physiology; Social Behavior; Acoustic Stimulation; Adult; Brain Mapping; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging
14.
Brain Lang ; 121(3): 267-72, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22480626

ABSTRACT

The phonological trace of perceived words starts fading away in short-term memory after a few seconds. Spoken utterances are usually 2-3 s long, possibly to allow the listener to parse the words into coherent prosodic phrases while they still have a clear representation. Results from this brain potential study suggest that even during silent reading, words are organized into 2-3 s long 'implicit' prosodic phrases. Participants read the same sentences word by word at different presentation rates. Clause-final words occurring at multiples of 2-3 s from sentence onset yielded increased positivity, irrespective of presentation rate. The effect was interpreted as a closure positive shift (CPS), reflecting the insertion of implicit prosodic phrase boundaries every 2-3 s. Additionally, in participants with low working memory span, clauses over 3 s long produced a negativity, possibly indicating increased working memory load.


Subject(s)
Brain/physiology; Memory, Short-Term/physiology; Reading; Electroencephalography; Evoked Potentials; Humans; Time
16.
Cogn Emot ; 25(4): 599-611, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21547763

ABSTRACT

Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlation of 43 acoustic parameters with individual emotional dimensions revealed that each emotional dimension was associated with a number of vocal cues. Common patterns of cues were found with emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.


Subject(s)
Cues; Emotions; Laughter/psychology; Speech Perception; Acoustic Stimulation; Adult; Arousal; Female; Humans; Male; Neuropsychological Tests
17.
J Voice ; 25(1): 32-7, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20381307

ABSTRACT

Although laughter is an important aspect of nonverbal vocalization, its acoustic properties are still not fully understood. Extreme articulation during laughter production, such as wide jaw opening, suggests that laughter can have very high first formant (F1) frequencies. We measured the fundamental frequency and formant frequencies of the vowels produced in the vocalic segments of laughter. Vocalic segments showed higher average F1 frequencies than those previously reported, and individual values could be as high as 1100 Hz for male speakers and 1500 Hz for female speakers. To our knowledge, these are the highest F1 frequencies reported to date for human vocalizations, exceeding even the F1 frequencies reported for trained soprano singers. These exceptionally high F1 values are likely based on the extreme positions adopted by the vocal tract during laughter, in combination with physiological constraints accompanying the production of a "pressed" voice.


Subject(s)
Larynx/physiology; Laughter; Phonation; Speech Acoustics; Female; Humans; Larynx/anatomy & histology; Male; Sex Factors; Signal Processing, Computer-Assisted; Sound Spectrography; Time Factors
18.
Neuroimage ; 54(3): 2267-77, 2011 Feb 01.
Article in English | MEDLINE | ID: mdl-20970510

ABSTRACT

During auditory perception, we are required to abstract information from complex temporal sequences such as those in music and speech. Here, we investigated how higher-order statistics modulate the neural responses to sound sequences, hypothesizing that these modulations are associated with higher levels of the peri-Sylvian auditory hierarchy. We devised second-order Markov sequences of pure tones with uniform first-order transition probabilities. Participants learned to discriminate these sequences from random ones. Magnetoencephalography was used to identify evoked fields in which second-order transition probabilities were encoded. We show that improbable tones evoked heightened neural responses from 200 ms after tone onset during exposure at the learning stage, or from around 150 ms during the subsequent test stage, originating near the right temporoparietal junction. These signal changes reflected higher-order statistical learning, which can contribute to the perception of natural sounds with hierarchical structures. We propose that our results reflect hierarchical predictive representations, which can contribute to the experiences of speech and music.
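A second-order Markov tone sequence of the kind described can be sampled as follows: the next tone depends on the previous two. This is an illustrative sketch only; the tone frequencies and transition table are hypothetical, and the study's additional constraint that first-order transition statistics be uniform is not enforced here.

```python
import numpy as np

TONES = [440, 494, 554, 622]  # Hz, hypothetical pure-tone set
n = len(TONES)
rng = np.random.default_rng(1)

# P[i, j] is the distribution of the next tone given the previous two tones (i, j).
# Each row is drawn from a Dirichlet distribution, so it sums to 1.
P = rng.dirichlet(np.ones(n), size=(n, n))  # shape (n, n, n)

def generate(length=20):
    """Sample a tone-frequency sequence of the given length from the Markov model."""
    seq = [int(rng.integers(n)), int(rng.integers(n))]  # two seed tones
    while len(seq) < length:
        seq.append(int(rng.choice(n, p=P[seq[-2], seq[-1]])))
    return [TONES[i] for i in seq]

seq = generate(20)
```

The "random" comparison sequences would instead draw every tone uniformly, discarding the second-order structure.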


Subject(s)
Auditory Perception/physiology; Acoustic Stimulation; Auditory Cortex/physiology; Discrimination Learning/physiology; Evoked Potentials, Auditory/physiology; Forecasting; Humans; Learning/physiology; Magnetoencephalography; Markov Chains; Parietal Lobe/physiology; Temporal Lobe/physiology
19.
Neuroimage ; 53(4): 1264-71, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20600991

ABSTRACT

Laughter is highly relevant for social interaction in human beings and non-human primates. In humans as well as in non-human primates, laughter can be induced by tickling. Human laughter, however, has further diversified and encompasses emotional laughter types with various communicative functions, e.g., joyful and taunting laughter. Here, we evaluated whether this evolutionary diversification of ecological functions is associated with distinct cerebral responses underlying laughter perception. Functional MRI revealed a double dissociation of cerebral responses during the perception of tickling laughter and emotional laughter (joy and taunt), with higher activations in the anterior rostral medial frontal cortex (arMFC) when emotional laughter was perceived, and stronger responses in the right superior temporal gyrus (STG) during the appreciation of tickling laughter. The enhanced activation of the arMFC for emotional laughter presumably reflects increasing demands on social cognition processes arising from the greater social salience of these laughter types. The activation increase in the STG for tickling laughter may be linked to the higher acoustic complexity of this laughter type. The observed dissociation of cerebral responses for emotional and tickling laughter was independent of task-directed focusing of attention. These findings support the postulated diversification of human laughter in the course of evolution from an unequivocal play signal to laughter with distinct emotional contents subserving complex social functions.


Subject(s)
Auditory Perception/physiology; Brain Mapping; Brain/physiology; Laughter/physiology; Adult; Brain/anatomy & histology; Emotions/physiology; Female; Humans; Image Interpretation, Computer-Assisted; Magnetic Resonance Imaging; Male
20.
J Acoust Soc Am ; 126(1): 354-66, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19603892

ABSTRACT

Although listeners are able to decode the underlying emotions embedded in acoustical laughter sounds, little is known about the acoustical cues that differentiate between the emotions. This study investigated the acoustical correlates of laughter expressing four different emotions: joy, tickling, taunting, and schadenfreude. Analysis of 43 acoustic parameters showed that the four emotions could be accurately discriminated on the basis of a small parameter set. Vowel quality contributed only minimally to emotional differentiation whereas prosodic parameters were more effective. Emotions are expressed by similar prosodic parameters in both laughter and speech.


Subject(s)
Emotions; Laughter; Acoustics; Analysis of Variance; Female; Happiness; Humans; Male; Phonetics; Sound Spectrography; Young Adult