Results 1-20 of 1,493
1.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38199864

ABSTRACT

During communication in real-life settings, our brain often needs to integrate auditory and visual information and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging and magnetoencephalography to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing nonlinear signal interactions, was enhanced in the left frontotemporal and frontal regions. Focusing on the left inferior frontal gyrus, this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.
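
The tagging arithmetic behind this design is easy to illustrate: when two tagged signals interact nonlinearly, power appears at intermodulation frequencies f1 ± f2, here 65 − 58 = 7 Hz and 65 + 58 = 123 Hz for the attended pairing. Below is a minimal sketch with synthetic data (numpy only; sampling rate, interaction strength and noise level are illustrative assumptions, not the study's parameters):

    import numpy as np

    fs = 1000                            # sampling rate in Hz (assumed)
    t = np.arange(0, 10, 1 / fs)         # 10 s of data -> 0.1 Hz resolution
    audio = np.sin(2 * np.pi * 58 * t)   # auditory tag
    visual = np.sin(2 * np.pi * 65 * t)  # attended visual tag
    # A multiplicative term mimics a nonlinear audiovisual interaction;
    # sin(a)*sin(b) contains components at the difference and sum frequencies.
    sensor = audio + visual + 0.2 * audio * visual + 0.1 * np.random.randn(t.size)

    power = np.abs(np.fft.rfft(sensor)) ** 2
    freqs = np.fft.rfftfreq(sensor.size, 1 / fs)
    for f in (58, 65, 7, 123):           # tags and intermodulation products
        print(f"{f:3d} Hz power: {power[np.argmin(np.abs(freqs - f))]:.1f}")

Running this shows clear peaks at 7 and 123 Hz only when the multiplicative term is present, which is the logic behind using intermodulation power as an index of signal interaction.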


Subjects
Brain, Speech Perception, Humans, Male, Female, Brain/physiology, Visual Perception/physiology, Magnetoencephalography, Speech/physiology, Attention/physiology, Speech Perception/physiology, Acoustic Stimulation, Photic Stimulation
2.
Cereb Cortex ; 34(13): 84-93, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38696598

ABSTRACT

Multimodal integration is crucial for human interaction, in particular for social communication, which relies on integrating information from various sensory modalities. Recently a third visual pathway specialized in social perception was proposed, which includes the right superior temporal sulcus (STS) playing a key role in processing socially relevant cues and high-level social perception. Importantly, it has also recently been proposed that the left STS contributes to audiovisual integration of speech processing. In this article, we propose that brain areas along the right STS that support multimodal integration for social perception and cognition can be considered homologs to those in the left, language-dominant hemisphere, sustaining multimodal integration of speech and semantic concepts fundamental for social communication. Emphasizing the significance of the left STS in multimodal integration and associated processes such as multimodal attention to socially relevant stimuli, we underscore its potential relevance in comprehending neurodevelopmental conditions characterized by challenges in social communication such as autism spectrum disorder (ASD). Further research into this left lateral processing stream holds the promise of enhancing our understanding of social communication in both typical development and ASD, which may lead to more effective interventions that could improve the quality of life for individuals with atypical neurodevelopment.


Subjects
Social Cognition, Speech Perception, Temporal Lobe, Humans, Temporal Lobe/physiology, Temporal Lobe/physiopathology, Speech Perception/physiology, Social Perception, Autistic Disorder/physiopathology, Autistic Disorder/psychology, Functional Laterality/physiology
3.
Cereb Cortex ; 34(8)2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39183363

ABSTRACT

Numerous studies on perceptual training exist; however, most have focused on the precision of temporal audiovisual perception, while fewer have addressed enhancing audiovisual integration (AVI) ability. To investigate these issues, continuous 5-day audiovisual perceptual training was applied, during which electroencephalography was recorded in response to auditory-only (A), visual-only (V) and audiovisual (AV) stimuli before and after training. The results showed that perceptual sensitivity was greater for the training group than for the control group and greater in the posttest than in the pretest. The response to the AV stimulus was significantly faster in the posttest than in the pretest for the older training group, whereas for the younger training group the posttest gain was significant for the A and V stimuli. Electroencephalography analysis found higher P3 AVI amplitudes [AV-(A + V)] in the posttest than in the pretest for the training group, which were subsequently reflected by an increased alpha (8-12 Hz) oscillatory response and strengthened global functional connectivity (weighted phase lag index). Furthermore, these facilitations were greater for the older training group than for the younger training group. These results confirm that the age-related compensatory mechanism for AVI may be strengthened as audiovisual perceptual training progresses, making such training an effective candidate for cognitive intervention in older adults.
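
The connectivity measure named here, the weighted phase lag index (wPLI), is defined from the imaginary part of the cross-spectrum between two signals. A minimal numpy/scipy sketch of that definition (the epoch layout and the lagged test data are illustrative assumptions, not the paper's pipeline):

    import numpy as np
    from scipy.signal import hilbert

    def wpli(x, y):
        # x, y: band-limited epochs of shape (n_trials, n_samples),
        # e.g. alpha-filtered (8-12 Hz). Returns a value in [0, 1].
        # Only the imaginary cross-spectrum enters, so zero-lag
        # (volume-conducted) coupling contributes nothing.
        cross = hilbert(x) * np.conj(hilbert(y))
        imag = cross.imag
        return np.abs(imag.mean()) / (np.abs(imag).mean() + 1e-12)

    rng = np.random.default_rng(0)
    a = rng.standard_normal((50, 500))
    b = np.roll(a, 5, axis=1) + rng.standard_normal((50, 500))  # lagged copy
    print(f"wPLI: {wpli(a, b):.2f}")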


Subjects
Acoustic Stimulation, Alpha Rhythm, Auditory Perception, Photic Stimulation, Visual Perception, Humans, Male, Female, Visual Perception/physiology, Auditory Perception/physiology, Aged, Alpha Rhythm/physiology, Photic Stimulation/methods, Electroencephalography, Middle Aged, Aging/physiology, Young Adult, Brain/physiology, Adult
4.
J Neurosci ; 43(25): 4697-4708, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37221094

ABSTRACT

Previous work has demonstrated that performance in an auditory selective attention task can be enhanced or impaired, depending on whether a task-irrelevant visual stimulus is temporally coherent with a target auditory stream or with a competing distractor. However, it remains unclear how audiovisual (AV) temporal coherence and auditory selective attention interact at the neurophysiological level. Here, we measured neural activity using EEG while human participants (men and women) performed an auditory selective attention task, detecting deviants in a target audio stream. The amplitude envelope of the two competing auditory streams changed independently, while the radius of a visual disk was manipulated to control the AV coherence. Analysis of the neural responses to the sound envelope demonstrated that auditory responses were enhanced largely independently of the attentional condition: both target and masker stream responses were enhanced when temporally coherent with the visual stimulus. In contrast, attention enhanced the event-related response evoked by the transient deviants, largely independently of AV coherence. These results provide evidence for dissociable neural signatures of bottom-up (coherence) and top-down (attention) effects in AV object formation.

SIGNIFICANCE STATEMENT Temporal coherence between auditory stimuli and task-irrelevant visual stimuli can enhance behavioral performance in auditory selective attention tasks. However, how audiovisual temporal coherence and attention interact at the neural level has not been established. Here, we measured EEG during a behavioral task designed to independently manipulate audiovisual coherence and auditory selective attention. While some auditory features (sound envelope) could be coherent with visual stimuli, other features (timbre) were independent of visual stimuli. We find that audiovisual integration can be observed independently of attention for sound envelopes temporally coherent with visual stimuli, while the neural responses to unexpected timbre changes are most strongly modulated by attention. Our results provide evidence for dissociable neural mechanisms of bottom-up (coherence) and top-down (attention) effects on audiovisual object formation.
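
Analyses of "neural responses to the sound envelope" presuppose extracting a slow amplitude envelope from each auditory stream. One common recipe is the magnitude of the analytic signal followed by a low-pass filter; the sketch below is a generic version of that step (cutoff, filter order and the toy stimulus are assumptions, not the paper's settings):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def amplitude_envelope(audio, fs, cutoff=8.0):
        # Magnitude of the analytic signal, low-pass filtered to keep the
        # slow modulations that envelope-tracking analyses regress EEG on.
        env = np.abs(hilbert(audio))
        b, a = butter(4, cutoff / (fs / 2), btype="low")
        return filtfilt(b, a, env)

    fs = 16000
    t = np.arange(0, 2, 1 / fs)
    stream = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 3 * t))  # 3 Hz AM noise
    print(amplitude_envelope(stream, fs)[:5])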


Subjects
Auditory Perception, Evoked Potentials, Male, Humans, Female, Evoked Potentials/physiology, Auditory Perception/physiology, Attention/physiology, Sound, Acoustic Stimulation, Visual Perception/physiology, Photic Stimulation
5.
J Neurosci ; 43(23): 4352-4364, 2023 06 07.
Article in English | MEDLINE | ID: mdl-37160365

ABSTRACT

Cognitive demand is thought to modulate two often used, but rarely combined, measures: pupil size and neural α (8-12 Hz) oscillatory power. However, it is unclear whether these two measures capture cognitive demand in a similar way under complex audiovisual-task conditions. Here we recorded pupil size and neural α power (using electroencephalography), while human participants of both sexes concurrently performed a visual multiple object-tracking task and an auditory gap detection task. Difficulties of the two tasks were manipulated independent of each other. Participants' performance decreased in accuracy and speed with increasing cognitive demand. Pupil size increased with increasing difficulty for both the auditory and the visual task. In contrast, α power showed diverging neural dynamics: parietal α power decreased with increasing difficulty in the visual task, but not with increasing difficulty in the auditory task. Furthermore, independent of task difficulty, within-participant trial-by-trial fluctuations in pupil size were negatively correlated with α power. Difficulty-induced changes in pupil size and α power, however, did not correlate, which is consistent with their different cognitive-demand sensitivities. Overall, the current study demonstrates that the dynamics of the neurophysiological indices of cognitive demand and associated effort are multifaceted and potentially modality-dependent under complex audiovisual-task conditions.

SIGNIFICANCE STATEMENT Pupil size and oscillatory α power are associated with cognitive demand and effort, but their relative sensitivity under complex audiovisual-task conditions is unclear, as is the extent to which they share underlying mechanisms. Using an audiovisual dual-task paradigm, we show that pupil size increases with increasing cognitive demands for both audition and vision. In contrast, changes in oscillatory α power depend on the respective task demands: parietal α power decreases with visual demand but not with auditory task demand. Hence, pupil size and α power show different sensitivity to cognitive demands, perhaps suggesting partly different underlying neural mechanisms.
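
The reported trial-by-trial pupil-alpha coupling amounts to correlating two single-trial quantities. A minimal sketch (band edges, filter choices and the Spearman correlation are illustrative assumptions; real data would replace the random arrays):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert
    from scipy.stats import spearmanr

    def alpha_power_per_trial(eeg, fs, band=(8.0, 12.0)):
        # Mean alpha power per trial; eeg has shape (n_trials, n_samples).
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        power = np.abs(hilbert(filtfilt(b, a, eeg, axis=-1), axis=-1)) ** 2
        return power.mean(axis=-1)

    fs = 250
    rng = np.random.default_rng(1)
    eeg = rng.standard_normal((100, 2 * fs))   # 100 trials of 2 s
    pupil = rng.standard_normal(100)           # per-trial mean pupil size
    rho, p = spearmanr(alpha_power_per_trial(eeg, fs), pupil)
    print(f"trial-by-trial pupil-alpha correlation: rho={rho:.2f}, p={p:.3f}")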


Subjects
Auditory Perception, Pupil, Male, Female, Humans, Pupil/physiology, Auditory Perception/physiology, Electroencephalography, Psychomotor Performance/physiology, Cognition
6.
Neuroimage ; 285: 120483, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38048921

ABSTRACT

The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in both real and virtual reality (VR) environments. Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations, provides opportunities for the development of new rehabilitative interventions. This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent 30 min of daily VR training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm that is commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as mean reaction time (RT) reduction in VR, significantly improves. In separate tests in a controlled laboratory environment, the behavioural performance gains in the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter demonstrating a faster response time supported by the presence of audio cues. The behavioural learning effect also transferred to two additional tasks: a visual search task and an involuntary visual task. Our fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe and the cerebellum. These functional changes were observed only for the trained, multisensory task and not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated with behavioural performance improvements. This study demonstrates that incorporating spatial auditory cues into voluntary visual training in VR leads to augmented brain activation changes in multisensory integration, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.


Subjects
Magnetic Resonance Imaging, Virtual Reality, Humans, Learning, Brain/physiology, Visual Perception, Blindness, Auditory Perception
7.
J Neurophysiol ; 131(6): 1311-1327, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38718414

ABSTRACT

Tinnitus is the perception of a continuous sound in the absence of an external source. Although the role of the auditory system is well investigated, there is a gap in how multisensory signals are integrated to produce a single percept in tinnitus. Here, we train participants to learn a new sensory environment by associating a cue with a target signal that varies in perceptual threshold. In the test phase, we present only the cue to see whether the person perceives an illusion of the target signal. We perform two separate experiments to observe the behavioral and electrophysiological responses during the learning and test phases in 1) healthy young adults and 2) people with continuous subjective tinnitus and matched control subjects. In both parts of the study, the percentage of false alarms was negatively correlated with the 75% detection threshold. Additionally, the perception of an illusion was accompanied by an increased evoked response potential over frontal regions of the brain. Furthermore, patients with tinnitus showed no significant difference in behavioral or evoked responses in the auditory paradigm, whereas in the visual paradigm they were more likely to report false alarms, along with increased evoked activity during the learning and test phases. This emphasizes the importance of the integrity of sensory pathways in multisensory integration and how this process may be disrupted in people with tinnitus. The present study also provides preliminary evidence that tinnitus patients may build stronger perceptual models, a hypothesis that future studies with larger populations will need to confirm.

NEW & NOTEWORTHY Tinnitus is the continuous phantom perception of a ringing in the ears. Recently, it has been suggested that tinnitus may be a maladaptive inference of the brain in response to auditory anomalies, whether they are detected or undetected by an audiogram. The present study presents empirical evidence for this hypothesis by inducing an illusion in a sensory domain that is damaged (auditory) and one that is intact (visual). It also presents novel information about how people with tinnitus process multisensory stimuli in the audio-visual domain.
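
The 75% detection threshold that the false-alarm rate was correlated against is typically read off a fitted psychometric function. A minimal sketch with made-up detection data (a logistic fit is one standard choice; the paper's exact procedure may differ):

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, x0, k):
        # Detection probability as a function of signal level.
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))

    levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])       # signal levels
    p_yes = np.array([0.05, 0.20, 0.45, 0.70, 0.90, 0.97])  # hit rates

    (x0, k), _ = curve_fit(logistic, levels, p_yes, p0=[3.5, 1.0])
    # Invert the fitted curve at p = 0.75: x = x0 + ln(0.75 / 0.25) / k.
    print(f"75% detection threshold: {x0 + np.log(3.0) / k:.2f}")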


Subjects
Auditory Perception, Bayes Theorem, Illusions, Tinnitus, Humans, Tinnitus/physiopathology, Pilot Projects, Male, Female, Adult, Auditory Perception/physiology, Illusions/physiology, Visual Perception/physiology, Young Adult, Electroencephalography, Acoustic Stimulation, Cues
8.
Eur J Neurosci ; 59(7): 1770-1788, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38230578

ABSTRACT

Studies on multisensory perception often focus on simplistic conditions in which one single stimulus is presented per modality. Yet, in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own relative spatio-temporal alignment to the sound but also the spatio-temporal alignment of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, calling for the need to extend established models of multisensory causal inference to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.
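
One way to quantify each visual stimulus's contribution to the bias, in the spirit of this design, is to regress single-trial sound-localization errors on the two audio-visual spatial offsets; the fitted weights then index how strongly each stimulus pulls the localization response. A toy sketch with simulated data (the weights and noise level are invented for illustration; this is not the authors' analysis code):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 400
    dv1 = rng.uniform(-20, 20, n)   # spatial offset of visual stimulus 1 (deg)
    dv2 = rng.uniform(-20, 20, n)   # spatial offset of visual stimulus 2 (deg)
    # Simulated localization error: a stronger pull toward stimulus 1.
    error = 0.4 * dv1 + 0.1 * dv2 + 2.0 * rng.standard_normal(n)

    X = np.column_stack([dv1, dv2, np.ones(n)])  # design matrix with intercept
    coef, *_ = np.linalg.lstsq(X, error, rcond=None)
    print(f"bias weight toward v1: {coef[0]:.2f}, toward v2: {coef[1]:.2f}")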


Subjects
Auditory Perception, Sound Localization, Acoustic Stimulation, Photic Stimulation, Visual Perception, Humans
9.
Eur J Neurosci ; 59(8): 1918-1932, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37990611

ABSTRACT

The unconscious integration of vocal and facial cues during speech perception facilitates face-to-face communication. Recent studies have provided substantial behavioural evidence concerning impairments in audiovisual (AV) speech perception in schizophrenia. However, the specific neurophysiological mechanism underlying these deficits remains unknown. Here, we investigated activities and connectivities centered on the auditory cortex during AV speech perception in schizophrenia. Using magnetoencephalography, we recorded and analysed event-related fields in response to auditory (A: voice), visual (V: face) and AV (voice-face) stimuli in 23 schizophrenia patients (13 males) and 22 healthy controls (13 males). The functional connectivity associated with the subadditive response to AV stimulus (i.e., [AV] < [A] + [V]) was also compared between the two groups. Within the healthy control group, [AV] activity was smaller than the sum of [A] and [V] at latencies of approximately 100 ms in the posterior ramus of the lateral sulcus in only the left hemisphere, demonstrating a subadditive N1m effect. Conversely, the schizophrenia group did not show such a subadditive response. Furthermore, weaker functional connectivity from the posterior ramus of the lateral sulcus of the left hemisphere to the fusiform gyrus of the right hemisphere was observed in schizophrenia. Notably, this weakened connectivity was associated with the severity of negative symptoms. These results demonstrate abnormalities in connectivity between speech- and face-related cortical areas in schizophrenia. This aberrant subadditive response and connectivity deficits for integrating speech and facial information may be the neural basis of social communication dysfunctions in schizophrenia.


Subjects
Auditory Cortex, Schizophrenia, Speech Perception, Male, Humans, Speech Perception/physiology, Magnetoencephalography, Speech/physiology, Visual Perception/physiology, Auditory Perception/physiology, Acoustic Stimulation/methods
10.
Eur J Neurosci ; 59(12): 3203-3223, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38637993

ABSTRACT

Social communication draws on several cognitive functions such as perception, emotion recognition and attention. The association of audio-visual information is essential to the processing of species-specific communication signals. In this study, we use functional magnetic resonance imaging in order to identify the subcortical areas involved in the cross-modal association of visual and auditory information based on their common social meaning. We identified three subcortical regions involved in audio-visual processing of species-specific communicative signals: the dorsolateral amygdala, the claustrum and the pulvinar. These regions responded to visual, auditory congruent and audio-visual stimulations. However, none of them was significantly activated when the auditory stimuli were semantically incongruent with the visual context, thus showing an influence of visual context on auditory processing. For example, positive vocalizations (coos) activated the three subcortical regions when presented in the context of a positive facial expression (lipsmacks) but not when presented in the context of a negative facial expression (aggressive faces). In addition, the medial pulvinar and the amygdala showed multisensory integration, such that audiovisual stimuli resulted in activations that were significantly higher than those observed for the highest unimodal response. Last, the pulvinar responded in a task-dependent manner, along a specific spatial sensory gradient. We propose that the dorsolateral amygdala, the claustrum and the pulvinar belong to a multisensory network that modulates the perception of visual socioemotional information and vocalizations as a function of the relevance of the stimuli in the social context.

SIGNIFICANCE STATEMENT: Understanding and correctly associating socioemotional information across sensory modalities, such that happy faces predict laughter and escape scenes predict screams, is essential when living in complex social groups. Using functional magnetic resonance imaging in the awake macaque, we identify three subcortical structures (dorsolateral amygdala, claustrum and pulvinar) that only respond to auditory information that matches the ongoing visual socioemotional context, such as hearing positively valenced coo calls while seeing monkeys engaged in positively valenced mutual grooming. We additionally describe task-dependent activations in the pulvinar, organized along a specific spatial sensory gradient, supporting its role as a network regulator.


Subjects
Amygdala, Auditory Perception, Claustrum, Magnetic Resonance Imaging, Pulvinar, Visual Perception, Pulvinar/physiology, Amygdala/physiology, Amygdala/diagnostic imaging, Male, Animals, Auditory Perception/physiology, Claustrum/physiology, Visual Perception/physiology, Female, Facial Expression, Macaca, Photic Stimulation/methods, Brain Mapping, Acoustic Stimulation, Animal Vocalization/physiology, Social Perception
11.
Eur J Neurosci ; 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39188179

ABSTRACT

While infants' sensitivity to visual speech cues and the benefit of these cues have been well-established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1-1.75 and 2.5-3.5 Hz respectively in our stimuli). First, overall, SBC was compared to surrogate data, and then, differences in SBC in the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.
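
Speech-brain coherence of the kind described can be sketched with a magnitude-squared coherence spectrum, averaged within the stress and syllabic bands and contrasted with a circularly shifted surrogate. The snippet below is a simplified stand-in for the paper's SBC pipeline (all signal parameters are illustrative):

    import numpy as np
    from scipy.signal import coherence

    fs = 500
    rng = np.random.default_rng(3)
    envelope = rng.standard_normal(60 * fs)              # speech envelope, 60 s
    eeg = 0.3 * envelope + rng.standard_normal(60 * fs)  # one EEG channel

    def band_coherence(x, y, lo, hi):
        f, coh = coherence(x, y, fs=fs, nperseg=4 * fs)  # 0.25 Hz resolution
        return coh[(f >= lo) & (f <= hi)].mean()

    surrogate = np.roll(envelope, 7 * fs)  # breaks the true speech-EEG alignment
    for name, (lo, hi) in [("stress (1-1.75 Hz)", (1.0, 1.75)),
                           ("syllabic (2.5-3.5 Hz)", (2.5, 3.5))]:
        print(name, band_coherence(envelope, eeg, lo, hi),
              "| surrogate:", band_coherence(surrogate, eeg, lo, hi))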

12.
Hum Brain Mapp ; 45(11): e26797, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39041175

ABSTRACT

Speech comprehension is crucial for human social interaction, relying on the integration of auditory and visual cues across various levels of representation. While research has extensively studied multisensory integration (MSI) using idealised, well-controlled stimuli, there is a need to understand this process in response to complex, naturalistic stimuli encountered in everyday life. This study investigated behavioural and neural MSI in neurotypical adults experiencing audio-visual speech within a naturalistic, social context. Our novel paradigm incorporated a broader social situational context, complete words, and speech-supporting iconic gestures, allowing for context-based pragmatics and semantic priors. We investigated MSI in the presence of unimodal (auditory or visual) or complementary, bimodal speech signals. During audio-visual speech trials, compared to unimodal trials, participants more accurately recognised spoken words and showed a more pronounced suppression of alpha power, an indicator of heightened integration load. Importantly, on the neural level, these effects surpassed mere summation of unimodal responses, suggesting non-linear MSI mechanisms. Overall, our findings demonstrate that typically developing adults integrate audio-visual speech and gesture information to facilitate speech comprehension in noisy environments, highlighting the importance of studying MSI in ecologically valid contexts.


Subjects
Gestures, Speech Perception, Humans, Female, Male, Speech Perception/physiology, Young Adult, Adult, Visual Perception/physiology, Electroencephalography, Comprehension/physiology, Acoustic Stimulation, Speech/physiology, Brain/physiology, Photic Stimulation/methods
13.
Hum Brain Mapp ; 45(12): e70009, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39185690

ABSTRACT

Attention and crossmodal interactions are closely linked through a complex interplay at different stages of sensory processing. Within the context of motion perception, previous research revealed that attentional demands alter audiovisual interactions in the temporal domain. In the present study, we aimed to understand the neurophysiological correlates of these attentional modulations. We utilized an audiovisual motion paradigm that elicits auditory time interval effects on perceived visual speed. The audiovisual interactions in the temporal domain were quantified by changes in perceived visual speed across different auditory time intervals. We manipulated attentional demands in the visual field by having a secondary task on a stationary object (i.e., single- vs. dual-task conditions). When the attentional demands were high (i.e., dual-task condition), there was a significant decrease in the effects of auditory time interval on perceived visual speed, suggesting a reduction in audiovisual interactions. Moreover, we found significant differences in both early and late neural activities elicited by visual stimuli across task conditions (single vs. dual), reflecting an overall increase in attentional demands in the visual field. Consistent with the changes in perceived visual speed, the audiovisual interactions in neural signals declined in the late positive component range. Compared with the findings from previous studies using different paradigms, our findings support the view that attentional modulations of crossmodal interactions are not unitary and depend on task-specific components. They also have important implications for motion processing and speed estimation in daily life situations where sensory relevance and attentional demands constantly change.


Subjects
Attention, Auditory Perception, Electroencephalography, Photic Stimulation, Visual Fields, Humans, Attention/physiology, Male, Female, Young Adult, Adult, Auditory Perception/physiology, Visual Fields/physiology, Photic Stimulation/methods, Motion Perception/physiology, Acoustic Stimulation, Visual Perception/physiology, Brain Mapping, Brain/physiology
14.
BMC Neurosci ; 25(1): 40, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39192193

ABSTRACT

BACKGROUND: Using event-related potentials (ERPs), we aimed to investigate the neural mechanisms of audiovisual integration during a letter identification task on the left and right sides. Unimodal (A, V) and bimodal (AV) stimuli were presented on either side, with ERPs from unimodal (A, V) stimuli on the same side compared with those from simultaneous bimodal (AV) stimuli. Non-zero AV-(A + V) difference waveforms indicated audiovisual integration on the left/right side.

RESULTS: When spatially coherent AV stimuli were presented on the right side, two significant ERP components appeared in the integration difference wave: the N134 and N262, present in the first 300 ms of the AV-(A + V) difference wave, indicated significant audiovisual integration effects. However, when these stimuli were presented on the left side, there were no significant integration components. This left/right difference in audiovisual integration may stem from the hemispheric asymmetry of language processing.

CONCLUSIONS: Audiovisual letter information presented on the right side was easier to integrate, process, and represent. Additionally, for spatially non-coherent AV stimuli, only one significant integrative component, peaking at 140 ms in the parietal cortex, reflected audiovisual multisensory integration, which could be attributed to integrative neural processes that depend on the spatial congruity of the auditory and visual stimuli.
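
The additive-model contrast used here is a direct subtraction of averaged ERPs; anything reliably non-zero in AV - (A + V) is taken as an integration effect. A minimal sketch (random placeholder arrays stand in for real averages; the ±20 ms windows around the reported N134 and N262 peaks are hypothetical):

    import numpy as np

    fs = 500
    times = np.arange(-0.1, 0.5, 1 / fs)  # seconds relative to stimulus onset
    rng = np.random.default_rng(4)
    erp_a, erp_v, erp_av = (rng.standard_normal((32, times.size)) for _ in range(3))

    # Non-zero AV - (A + V) indicates a nonlinear audiovisual interaction.
    diff = erp_av - (erp_a + erp_v)

    for name, peak in [("N134", 0.134), ("N262", 0.262)]:
        window = (times >= peak - 0.02) & (times <= peak + 0.02)
        print(f"{name} mean difference amplitude: {diff[:, window].mean():.3f}")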


Subjects
Acoustic Stimulation, Auditory Perception, Electroencephalography, Evoked Potentials, Functional Laterality, Photic Stimulation, Visual Perception, Humans, Male, Female, Young Adult, Auditory Perception/physiology, Functional Laterality/physiology, Visual Perception/physiology, Photic Stimulation/methods, Adult, Acoustic Stimulation/methods, Evoked Potentials/physiology, Brain/physiology, Reaction Time/physiology
15.
Dev Sci ; 27(2): e13436, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37551932

ABSTRACT

The environment in which infants learn language is multimodal and rich with social cues. Yet, the effects of such cues, such as eye contact, on early speech perception have not been closely examined. This study assessed the role of ostensive speech, signalled through the speaker's eye gaze direction, on infants' word segmentation abilities. A familiarisation-then-test paradigm was used while electroencephalography (EEG) was recorded. Ten-month-old Dutch-learning infants were familiarised with audio-visual stories in which a speaker recited four sentences with one repeated target word. The speaker addressed them either with direct or with averted gaze while speaking. In the test phase following each story, infants heard familiar and novel words presented via audio-only. Infants' familiarity with the words was assessed using event-related potentials (ERPs). As predicted, infants showed a negative-going ERP familiarity effect to the isolated familiarised words relative to the novel words over the left-frontal region of interest during the test phase. While the word familiarity effect did not differ as a function of the speaker's gaze over the left-frontal region of interest, there was also a (not predicted) positive-going early ERP familiarity effect over right fronto-central and central electrodes in the direct gaze condition only. This study provides electrophysiological evidence that infants can segment words from audio-visual speech, regardless of the ostensiveness of the speaker's communication. However, the speaker's gaze direction seems to influence the processing of familiar words.

RESEARCH HIGHLIGHTS: We examined 10-month-old infants' ERP word familiarity response using audio-visual stories, in which a speaker addressed infants with direct or averted gaze while speaking. Ten-month-old infants can segment and recognise familiar words from audio-visual speech, indicated by their negative-going ERP response to familiar, relative to novel, words. This negative-going ERP word familiarity effect was present for isolated words over left-frontal electrodes regardless of whether the speaker offered eye contact while speaking. An additional positivity in response to familiar words was observed for direct gaze only, over right fronto-central and central electrodes.


Subjects
Speech Perception, Speech, Infant, Humans, Speech/physiology, Ocular Fixation, Language, Evoked Potentials/physiology, Speech Perception/physiology
16.
Dev Sci ; 27(1): e13431, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37403418

ABSTRACT

As reading is inherently a multisensory, audiovisual (AV) process where visual symbols (i.e., letters) are connected to speech sounds, the question has been raised whether individuals with reading difficulties, like children with developmental dyslexia (DD), have broader impairments in multisensory processing. This question has been posed before, yet it remains unanswered due to (a) the complexity and contentious etiology of DD along with (b) lack of consensus on developmentally appropriate AV processing tasks. We created an ecologically valid task for measuring multisensory AV processing by leveraging the natural phenomenon that speech perception improves when listeners are provided visual information from mouth movements (particularly when the auditory signal is degraded). We designed this AV processing task with low cognitive and linguistic demands such that children with and without DD would have equal unimodal (auditory and visual) performance. We then collected data in a group of 135 children (age 6.5-15) with an AV speech perception task to answer the following questions: (1) How do AV speech perception benefits manifest in children, with and without DD? (2) Do children all use the same perceptual weights to create AV speech perception benefits, and (3) what is the role of phonological processing in AV speech perception? We show that children with and without DD have equal AV speech perception benefits on this task, but that children with DD rely less on auditory processing in more difficult listening situations to create these benefits and weigh both incoming information streams differently. Lastly, any reported differences in speech perception in children with DD might be better explained by differences in phonological processing than differences in reading skills.

RESEARCH HIGHLIGHTS: Children with versus without developmental dyslexia have equal audiovisual speech perception benefits, regardless of their phonological awareness or reading skills. Children with developmental dyslexia rely less on auditory performance to create audiovisual speech perception benefits. Individual differences in speech perception in children might be better explained by differences in phonological processing than differences in reading skills.


Subjects
Dyslexia, Speech Perception, Child, Humans, Adolescent, Dyslexia/psychology, Reading, Phonetics, Awareness
17.
Brain Topogr ; 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38990422

ABSTRACT

Shooting is a precision sport that is strongly influenced by mental state, and the brain's neural activity during the preparation stage directly affects shooting performance. To explore the neural mechanisms of the pistol-shooting preparation stage under restricted audiovisual conditions, and to reveal the intrinsic relationship between brain activity and shooting behaviour, electroencephalography (EEG) signals and seven behavioural indicators, including shooting performance, gun-holding stability and firing stability, were recorded from 30 shooters, each of whom shot under three conditions: normal, dim, and noisy. Using EEG microstates combined with standardized low-resolution brain electromagnetic tomography (sLORETA) source analysis, we investigated how microstate characteristics differed between the audiovisual-restricted and normal conditions, and how those characteristics related to the behavioural indicators during the shooting preparation stage. The results showed that microstates 1, 2 and 4 corresponded to the canonical microstates A, B and D, respectively, whereas microstate 3 was a study-specific template localized to the occipital lobe, whose putative function is to generate the 'vision for action'. The dim condition significantly reduced shooting performance, whereas the noisy condition had less effect. Under restricted audiovisual conditions, microstate characteristics differed significantly from the normal condition: the parameters of microstate 4 decreased significantly while those of microstate 3 increased significantly. The dim condition demanded more shooting skill, and microstate characteristics were significantly related to the shooting behaviour indicators. We conclude that, to shoot well, shooters should increase attention and concentrate on aligning the sight with the centre of the target, with slightly different emphases across the three conditions, and that microstates more important for completing the task vary less in their characteristics over time. Consistent with previous studies, increased visual attention prior to shooting is detrimental to shooting performance, and task completion is highly positively correlated with microstate D. These results further reveal the neural mechanisms of the shooting preparation stage, and the extracted neural markers can serve as functional indicators for monitoring brain state during pistol-shooting preparation.
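
The microstate computation itself follows a standard recipe: topographies sampled at global field power (GFP) peaks are clustered into a few template maps, and the resulting label sequence yields parameters such as coverage and duration. The sketch below is a simplified approximation (plain k-means on normalized maps; canonical pipelines use a polarity-invariant modified k-means, which this does not implement):

    import numpy as np
    from scipy.signal import find_peaks
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    eeg = rng.standard_normal((32, 5000))  # channels x samples (placeholder)

    gfp = eeg.std(axis=0)                  # global field power per sample
    peaks, _ = find_peaks(gfp)             # moments of quasi-stable topography

    maps = eeg[:, peaks].T                 # one topography per GFP peak
    maps /= np.linalg.norm(maps, axis=1, keepdims=True)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(maps)
    print(np.bincount(labels) / labels.size)  # coverage of the four templates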

18.
Brain Cogn ; 178: 106180, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38815526

ABSTRACT

Our ability to merge information from different senses into a unified percept is a crucial perceptual process for efficient interaction with our multisensory environment. Yet, the developmental process by which the brain implements multisensory integration (MSI) remains poorly known. This cross-sectional study aims to characterize the developmental pattern of responses to audiovisual events in 131 individuals aged from 3 months to 30 years. Electroencephalography (EEG) was recorded during a passive task including simple auditory, visual, and audiovisual stimuli. In addition to examining age-related variations in MSI responses, we investigated event-related potentials (ERPs) linked with auditory and visual stimulation alone. This was done to depict the typical developmental trajectory of unisensory processing from infancy to adulthood within our sample and to contextualize the maturation effects of MSI in relation to unisensory development. Comparing the neural response to audiovisual stimuli with the sum of the unisensory responses revealed signs of MSI in the ERPs, more specifically between the P2 and N2 components (P2 effect). Furthermore, adult-like MSI responses emerge relatively late in development, around 8 years of age. The automatic integration of simple audiovisual stimuli is a long developmental process that emerges during childhood and continues to mature during adolescence, with ERP latencies decreasing with age.


Subjects
Acoustic Stimulation, Auditory Perception, Electroencephalography, Evoked Potentials, Photic Stimulation, Visual Perception, Humans, Adult, Female, Male, Infant, Electroencephalography/methods, Auditory Perception/physiology, Visual Perception/physiology, Adolescent, Child, Preschool Child, Young Adult, Evoked Potentials/physiology, Photic Stimulation/methods, Cross-Sectional Studies, Acoustic Stimulation/methods, Brain/physiology
19.
Cereb Cortex ; 33(8): 4202-4215, 2023 04 04.
Article in English | MEDLINE | ID: mdl-36068947

ABSTRACT

The pulvinar is a heterogeneous thalamic nucleus, which is well developed in primates. One of its subdivisions, the medial pulvinar, is connected to many cortical areas, including the visual, auditory, and somatosensory cortices, as well as multisensory and premotor areas. However, except for the visual modality, little is known about its sensory functions. One hypothesis is that, as a region of convergence of information from different sensory modalities, the medial pulvinar plays a role in multisensory integration. To test this hypothesis, two macaque monkeys were trained on a fixation task, and the responses of single units to visual, auditory, and auditory-visual stimuli were examined. Analysis revealed auditory, visual, and multisensory neurons in the medial pulvinar. It also revealed multisensory integration in this structure, mainly suppressive (the audiovisual response is less than the strongest unisensory response) and subadditive (the audiovisual response is less than the sum of the auditory and visual responses). These findings suggest that the medial pulvinar is involved in multisensory integration.
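
The two integration criteria reported here map directly onto comparisons of firing rates. A small sketch implementing exactly those definitions (the example rates are made up):

    def classify_msi(rate_a, rate_v, rate_av):
        # Suppressive: AV response below the strongest unisensory response.
        # Subadditive: AV response below the sum of the unisensory responses.
        labels = []
        if rate_av < max(rate_a, rate_v):
            labels.append("suppressive")
        if rate_av < rate_a + rate_v:
            labels.append("subadditive")
        return labels or ["enhanced/additive"]

    print(classify_msi(12.0, 9.0, 10.5))  # -> ['suppressive', 'subadditive']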


Subjects
Pulvinar, Animals, Macaca, Haplorhini, Neurons/physiology, Sensation, Auditory Perception/physiology, Acoustic Stimulation, Photic Stimulation, Visual Perception/physiology
20.
Cereb Cortex ; 33(9): 5574-5584, 2023 04 25.
Article in English | MEDLINE | ID: mdl-36336347

ABSTRACT

People can seamlessly integrate a vast array of information from what they see and hear in the noisy and uncertain world. However, the neural underpinnings of audiovisual integration continue to be a topic of debate. Using strict inclusion criteria, we performed an activation likelihood estimation meta-analysis on 121 neuroimaging experiments with a total of 2,092 participants. We found that audiovisual integration is linked with the coexistence of multiple integration sites, including early cortical, subcortical, and higher association areas. Although activity was consistently found within the superior temporal cortex, different portions of this cortical region were identified depending on the analytical contrast used, complexity of the stimuli, and modality within which attention was directed. The context-dependent neural activity related to audiovisual integration suggests a flexible rather than fixed neural pathway for audiovisual integration. Together, our findings highlight a flexible multiple pathways model for audiovisual integration, with superior temporal cortex as the central node in these neural assemblies.
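
The ALE statistic behind such a meta-analysis combines per-experiment "modeled activation" maps (Gaussian blobs around each reported focus) with a probabilistic union across experiments: ALE = 1 - prod_i(1 - MA_i). A toy sketch of that combination rule on a coarse grid (the kernel width, grid and foci are invented for illustration; real ALE uses sample-size-dependent kernels and permutation-based thresholds):

    import numpy as np

    def modeled_activation(foci, grid, sigma=5.0):
        # Voxelwise union of Gaussian blobs around one experiment's foci.
        ma = np.zeros(len(grid))
        for focus in foci:
            d2 = np.sum((grid - focus) ** 2, axis=1)
            ma = 1.0 - (1.0 - ma) * (1.0 - np.exp(-d2 / (2 * sigma ** 2)))
        return ma

    ax = np.arange(-20.0, 21.0, 4.0)  # coarse toy grid (mm)
    grid = np.stack(np.meshgrid(ax, ax, ax), axis=-1).reshape(-1, 3)
    experiments = [np.array([[0.0, 0.0, 0.0], [8.0, 0.0, 0.0]]),
                   np.array([[2.0, 2.0, 0.0]])]

    # ALE: probability that at least one experiment activates each voxel.
    ale = 1.0 - np.prod([1.0 - modeled_activation(f, grid)
                         for f in experiments], axis=0)
    print(f"peak ALE: {ale.max():.3f}")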


Subjects
Auditory Perception, Visual Perception, Humans, Visual Perception/physiology, Auditory Perception/physiology, Magnetic Resonance Imaging/methods, Brain/physiology, Neuroimaging, Photic Stimulation, Brain Mapping, Acoustic Stimulation