Results 1 - 9 of 9
2.
Brain Cogn; 166: 105954, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36657242

ABSTRACT

This study aimed to investigate the psychophysiological markers of imagery processes through EEG/ERP recordings. Visual and auditory stimuli representing 10 different semantic categories were shown to 30 healthy participants. After a given interval, and prompted by a light signal, participants were asked to activate a mental image corresponding to the semantic category while synchronized electrical potentials were recorded. Electrophysiological markers of imagination were thus recorded, for the first time, in the absence of sensory stimulation. The following peaks were identified at specific scalp sites and latencies during the imagination of infants (centroparietal positivity, CPP, and late CPP), human faces (anterior negativity, AN), animals (anterior positivity, AP), music (P300-like), speech (N400-like), affective vocalizations (P2-like) and sensory modality (visual vs. auditory; PN300). Overall, the perception and imagery conditions shared some common electrocortical markers, but during imagery the category-dependent modulation of ERPs occurred at longer latencies and more anterior sites than in the perceptual condition. These ERP markers could be valuable tools for BCI systems (pattern recognition, classification, or AI algorithms) applied to patients affected by disorders of consciousness (e.g., in a vegetative or comatose state) or to locked-in patients (e.g., spinal injury or ALS patients).
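As a rough illustration of the BCI application the abstract points to, the sketch below decodes imagery categories from single-trial ERP features. It is a minimal sketch under stated assumptions: the arrays are random placeholders standing in for measured amplitudes, and the shrinkage-LDA classifier is one common baseline for ERP decoding, not the authors' actual method.

```python
# Hypothetical ERP-decoding sketch; all data are random placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels = 300, 126
# Placeholder features: e.g., mean amplitude per channel in a component
# window (CPP, AN, PN300, ...), one row per imagery trial.
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 10, size=n_trials)  # 10 semantic categories

# Shrinkage LDA is a standard baseline for single-trial ERP classification.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.10)")
```

On real data, above-chance accuracy on held-out trials would be the minimal evidence that a marker is usable by a BCI.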


Subject(s)
Electroencephalography, Evoked Potentials, Animals, Humans, Male, Female, Imagination/physiology, Auditory Perception
3.
Front Behav Neurosci; 16: 1025870, 2022.
Article in English | MEDLINE | ID: mdl-36523756

ABSTRACT

Objective: Most BCI systems that enable communication with patients with locked-in syndrome are based on electroencephalogram (EEG) frequency analysis (e.g., linked to motor imagery) or P300 detection. Only recently has the use of event-related brain potentials (ERPs) received much attention, especially for face or music recognition, but neuro-engineering research into this new approach has not yet been carried out. The aim of this study was to provide a variety of reliable ERP markers of visual and auditory perception for the development of new and more complex mind-reading systems for reconstructing mental content from brain activity. Methods: A total of 30 participants were shown 280 color pictures (adult, infant, and animal faces; human bodies; written words; checkerboards; and objects) and 120 auditory files (speech, music, and affective vocalizations). The paradigm did not involve target selection, to avoid artifactual waves linked to decision-making and response preparation (e.g., P300 and motor potentials) masking the neural signature of semantic representation. Overall, 12,000 ERP waveforms × 126 electrode channels (1,512,000 waveforms in total) were processed and artifact-rejected. Results: Clear and distinct category-dependent markers of perceptual and cognitive processing were identified through statistical analyses, some of them novel to the literature. The results are discussed in view of current knowledge of ERP functional properties and with respect to machine learning classification methods previously applied to similar data. Conclusion: Statistical analyses discriminated the perceptual categories eliciting the various electrical potentials with a high level of accuracy (p ≤ 0.01). Therefore, the ERP markers identified in this study could be significant tools for optimizing BCI systems [pattern recognition or artificial intelligence (AI) algorithms] applied to EEG/ERP signals.
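The abstract notes that roughly 1.5 million waveforms were processed and artifact-rejected. The sketch below shows one common form such a step can take, a peak-to-peak amplitude criterion applied per epoch and channel; the threshold, array shapes, and simulated data are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical peak-to-peak artifact rejection over EEG epochs.
import numpy as np

def reject_artifacts(epochs: np.ndarray, threshold_uv: float = 100.0) -> np.ndarray:
    """Keep epochs whose peak-to-peak amplitude stays below threshold on
    every channel. epochs: (n_epochs, n_channels, n_times), in microvolts."""
    ptp = epochs.max(axis=2) - epochs.min(axis=2)  # (n_epochs, n_channels)
    keep = (ptp < threshold_uv).all(axis=1)        # any bad channel rejects the epoch
    return epochs[keep]

rng = np.random.default_rng(1)
epochs = rng.normal(scale=20.0, size=(400, 126, 512))  # simulated 126-channel data
clean = reject_artifacts(epochs)
print(f"kept {clean.shape[0]} of {epochs.shape[0]} epochs")
```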

4.
Eur J Neurosci; 51(9): 1987-2007, 2020 May.
Article in English | MEDLINE | ID: mdl-31837173

ABSTRACT

The neural mechanisms involved in the processing of vocalizations and music were compared, in order to observe possible similarities in the encoding of their emotional content. Positive and negative emotional vocalizations (e.g., laughing, crying) and violin musical stimuli digitally extracted from them were used as stimuli; they shared the melodic profile and the main pitch/frequency characteristics. Participants listened to the vocalizations or music while detecting rare auditory targets (bird tweeting or piano arpeggios). EEG was recorded from 128 sites. The P2, N400 and late positivity (LP) components of the ERPs were analysed. The P2 peak was earlier in response to vocalizations, and P2 amplitude was larger to positive than to negative stimuli. The N400 was greater to negative than to positive stimuli. The LP was greater to vocalizations than to music and to positive than to negative stimuli. Source modelling using swLORETA suggested that, among the N400 generators, the left middle temporal gyrus and the right uncus responded to both music and vocalizations, and more strongly to negative than to positive stimuli. The right parahippocampal region of the limbic lobe and the right cingulate cortex were active during music listening, while the left superior temporal cortex responded only to human vocalizations. Negative stimuli always activated the right middle temporal gyrus, whereas positively valenced stimuli always activated the inferior frontal cortex. The processing of emotional vocalizations and music thus seemed to involve common neural mechanisms. Notation derived from the acoustic signals showed that emotionally negative stimuli tended to be in a minor key and positive stimuli in a major key, shedding some light on the brain's ability to understand music.


Subject(s)
Music, Acoustic Stimulation, Auditory Perception, Brain Mapping, Electroencephalography, Emotions, Evoked Potentials, Female, Humans, Male
5.
Neuroscience; 385: 215-226, 2018 Aug 10.
Article in English | MEDLINE | ID: mdl-29932985

ABSTRACT

In this study, the timing of electromagnetic signals recorded during incongruent and congruent audiovisual (AV) stimulation was examined in 14 healthy Italian volunteers. In a previous study (Proverbio et al., 2016), we investigated the McGurk effect in the Italian language and identified which visual and auditory inputs produced the most compelling illusory effects (e.g., bilabial phonemes presented acoustically and paired with non-labials, especially alveolar-nasal and velar-occlusive phonemes). Here, EEG was recorded from 128 scalp sites while participants watched a female and a male actor uttering 288 syllables (each lasting approximately 600 ms) selected on the basis of the previous investigation, and responded to rare targets (/re/, /ri/, /ro/, /ru/). In half of the cases the AV information was incongruent, except for the targets, which were always congruent. A phonological mismatch negativity (pMMN) to incongruent AV stimuli was identified 500 ms after voice onset time; this automatic response indexed the detection of an incongruity between the labial and the phonetic information. swLORETA (standardized weighted low-resolution electromagnetic tomography) analysis applied to the incongruent-minus-congruent difference voltage in the same time window revealed that the strongest sources of this activity were the right superior temporal gyrus (STG) and the superior frontal gyrus, which supports their involvement in AV integration.
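A mismatch component like this pMMN is conventionally isolated as a difference wave: average the congruent and incongruent epochs, subtract, and search a post-onset window for the most negative deflection. The sketch below assumes a 512 Hz sampling rate, a 400-600 ms window, and simulated single-channel data; none of these are the study's actual parameters.

```python
# Hypothetical difference-wave peak search for an MMN-like component.
import numpy as np

def difference_wave_peak(incong, cong, sfreq=512.0, window=(0.4, 0.6)):
    """incong, cong: (n_epochs, n_times) epochs time-locked to voice onset.
    Returns latency (s) and amplitude of the most negative point of the
    incongruent-minus-congruent difference wave within the window."""
    diff = incong.mean(axis=0) - cong.mean(axis=0)
    start, stop = int(window[0] * sfreq), int(window[1] * sfreq)
    segment = diff[start:stop]
    idx = int(segment.argmin())  # most negative deflection (MMN-like)
    return (start + idx) / sfreq, segment[idx]

rng = np.random.default_rng(2)
cong = rng.normal(size=(100, 512))
incong = rng.normal(size=(100, 512)) - 0.5  # simulated extra negativity
latency, amplitude = difference_wave_peak(incong, cong)
print(f"peak at {latency * 1000:.0f} ms, amplitude {amplitude:.2f}")
```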


Subject(s)
Auditory Perception/physiology, Brain/physiology, Evoked Potentials/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Electroencephalography, Female, Healthy Volunteers, Humans, Male, Middle Aged, Photic Stimulation, Young Adult
6.
Brain Res; 1691: 15-25, 2018 Jul 15.
Article in English | MEDLINE | ID: mdl-29684337

ABSTRACT

The literature has shown that playing a musical instrument is associated with the formation of multimodal audio-visuomotor representations that are strongly instrument-specific. Here, we investigated the effect of increased motor practice on perceptual sensitivity in 32 professional musicians of comparable expertise but with different amounts of instrumental practice on the piano (10,000 vs. 3,000 estimated hours). Stimuli consisted of images of pianists' hands and piano arpeggio sounds. In half of the cases the piano fingering and the piano sounds were congruent, while in the other half they were incongruent. ERPs were recorded from 128 sites while the musicians performed a congruent vs. incongruent discrimination task. A fronto-central error-related negativity (ERN), mainly generated within the anterior cingulate cortex, was observed in response to incongruent videos only in the pianists. The non-pianist musicians were able to carry out the task (albeit with worse performance) but exhibited a smaller response-related N400 to incongruent stimuli. Source reconstruction applied to the ERP responses to incongruent stimuli indicated a less automatic mechanism for detecting sensory-motor deviance in non-pianists, with a greater reliance on visual rather than acoustic features. Overall, the data suggest a profound difference between the two populations of musicians and advise against treating as "expert" populations those who have undertaken only a few weeks or months of training in a new discipline.


Subject(s)
Brain/physiology, Music, Perception/physiology, Psychological Practice, Professional Competence, Psychomotor Performance/physiology, Acoustic Stimulation, Adult, Brain/diagnostic imaging, Comprehension/physiology, Electroencephalography, Evoked Potentials, Female, Humans, Magnetic Resonance Imaging, Male, Photic Stimulation, Time Factors, Young Adult
7.
Eur J Neurosci; 44(6): 2340-56, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27421883

ABSTRACT

The extent to which musical expertise influences the auditory processing of harmonicity was investigated by recording event-related potentials (ERPs). Thirty-four participants (18 musicians and 16 controls) were asked to listen to hundreds of chords differing in their degree of consonance, their complexity (from two to six component sounds) and their range (the distance between two adjacent pitches, from quartertones to more than 18 semitone steps). The task consisted of detecting rare targets. An early auditory N1 was observed that was modulated by chord dissonance in both groups; according to the swLORETA source reconstruction, this response was generated in the right middle temporal gyrus (MTG) for consonant chords but in the left MTG for dissonant chords. An anterior negativity (N2) was enhanced only in musicians in response to chords featuring quartertones, suggesting a greater pitch sensitivity for simultaneous pure tones in the skilled brain. The P300 was affected by the frequency range only in musicians, who also showed a greater sensitivity to sound complexity. At the level of the N2 (250-350 ms), a strong specialization of the left temporal cortex for processing quartertones was observed in musicians, whereas this activity was observed on the right side in controls. Additionally, in controls, widespread activity of the right limbic area was associated with listening to close frequencies causing disturbing beats, possibly suggesting a negative aesthetic appreciation of these stimuli. Overall, the data show a finer, more sharply tuned neural representation of pitch intervals in musicians, linked to a marked specialization of their left temporal cortex (BA21/38).
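As a small worked example of the interval sizes at stake: in equal temperament a semitone multiplies frequency by 2^(1/12) and a quartertone by 2^(1/24), so near A4 two simultaneous pure tones a quartertone apart differ by only about 13 Hz, close enough to produce the beats mentioned above. The A4 = 440 Hz reference is a standard assumption.

```python
# Equal-temperament interval arithmetic around A4 = 440 Hz.
base = 440.0                         # A4 reference pitch (Hz)
semitone = base * 2 ** (1 / 12)      # ~466.16 Hz
quartertone = base * 2 ** (1 / 24)   # ~452.89 Hz

print(f"semitone step:    {semitone - base:6.2f} Hz")
print(f"quartertone step: {quartertone - base:6.2f} Hz")
# For simultaneous pure tones, the beat rate equals the frequency difference:
print(f"beat rate at a quartertone: {quartertone - base:.1f} Hz")
```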


Subject(s)
Auditory Perception/physiology, Brain/physiology, Auditory Evoked Potentials/physiology, Sound, Acoustic Stimulation/methods, Adult, Electroencephalography/methods, Evoked Potentials, Female, Humans, Male, Music, Young Adult
8.
Sci Rep; 5: 15219, 2015 Oct 15.
Article in English | MEDLINE | ID: mdl-26469712

ABSTRACT

The aim of this study was to investigate how background auditory processing can affect other perceptual and cognitive processes as a function of stimulus content, style and emotional nature. Previous studies have offered contrasting evidence, and it has recently been shown that listening to music negatively affected concurrent mental processing in the elderly but not in young adults. To investigate this matter further, the effect of listening to music vs. listening to the sound of rain or silence was examined by administering an old/new face memory task (involving 448 unknown faces) to a group of 54 non-musician university students. Heart rate and diastolic and systolic blood pressure were measured during an explicit face-study session that was followed by a memory test. The results indicated that recall of faces was more efficient and faster under conditions of silence or when participants were listening to emotionally touching music. Whereas an auditory background of rain or joyful music interfered with memory encoding, listening to emotionally touching music improved memory and significantly increased heart rate. It is hypothesized that touching music can modify the visual perception of faces by binding facial properties to the auditory, emotionally charged information, which may result in deeper memory encoding.


Subject(s)
Episodic Memory, Music, Acoustic Stimulation, Adolescent, Adult, Blood Pressure/physiology, Face, Female, Heart Rate/physiology, Humans, Male, Reaction Time, Visual Perception, Young Adult
9.
Sci Rep; 1: 54, 2011.
Article in English | MEDLINE | ID: mdl-22355573

ABSTRACT

As the makers of silent movies knew well, it is not necessary to provide an actual auditory stimulus to activate the sensation of the sounds typically associated with what we are viewing. Thus, one could almost hear the neigh of Rodolfo Valentino's horse, even though the film was silent. Evidence is provided here that the mere sight of a photograph associated with a sound can activate the associative auditory cortex. High-density ERPs were recorded in 15 participants while they viewed hundreds of perceptually matched images that were (or were not) associated with a given sound. Images associated with sounds were discriminated from those that were not as early as 110 ms. swLORETA reconstructions showed common activation of ventral-stream areas for both types of stimuli, while the associative temporal cortex was activated at the earliest stage only for sound-associated stimuli. The primary auditory cortex (BA41) was also activated by sound-associated images after approximately 200 ms.


Subject(s)
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Photic Stimulation/methods, Reaction Time/physiology, Visual Perception/physiology, Adult, Brain Mapping, Female, Humans, Male, Young Adult