ABSTRACT
The processing of auditory stimuli which are structured in time is thought to involve the arcuate fasciculus, the white matter tract which connects the temporal cortex and the inferior frontal gyrus. Research has indicated effects of both musical and language experience on the structural characteristics of the arcuate fasciculus. Here, we investigated in a sample of n = 84 young adults whether continuous conceptualizations of musical and multilingual experience related to structural characteristics of the arcuate fasciculus, measured using diffusion tensor imaging. Probabilistic tractography was used to identify the dorsal and ventral parts of the white matter tract. Linear regressions indicated that different aspects of musical sophistication related to the arcuate fasciculus' volume (emotional engagement with music), volumetric asymmetry (musical training and music perceptual abilities), and fractional anisotropy (music perceptual abilities). Our conceptualization of multilingual experience, accounting for participants' proficiency in reading, writing, understanding, and speaking different languages, was not related to the structural characteristics of the arcuate fasciculus. We discuss our results in the context of other research on hemispheric specializations and a dual-stream model of auditory processing.
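Both tract metrics named in this abstract have standard closed forms: fractional anisotropy is computed from the three diffusion-tensor eigenvalues, and volumetric asymmetry is commonly expressed as a lateralization index. A minimal sketch of those textbook definitions (the asymmetry convention below is one common choice, not necessarily the authors'):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues (standard definition)."""
    lam = np.asarray(evals, dtype=float)
    md = lam.mean()                          # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return float(np.sqrt(1.5) * num / den) if den > 0 else 0.0

def lateralization_index(left_vol, right_vol):
    """Common (L - R) / (L + R) volumetric asymmetry index."""
    return (left_vol - right_vol) / (left_vol + right_vol)

# Illustrative values: eigenvalues in mm^2/s for an anisotropic voxel,
# and hypothetical left/right tract volumes in mm^3
print(fractional_anisotropy([1.7e-3, 0.4e-3, 0.3e-3]))  # ~0.76
print(lateralization_index(5200, 4700))                  # ~0.05 (leftward)
```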
Subjects
Auditory Perception, Diffusion Tensor Imaging, Multilingualism, Music, White Matter, Humans, Male, Female, Young Adult, Adult, White Matter/diagnostic imaging, White Matter/physiology, White Matter/anatomy & histology, Auditory Perception/physiology, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Temporal Lobe/anatomy & histology, Neural Pathways/diagnostic imaging, Neural Pathways/physiology, Neural Pathways/anatomy & histology, Adolescent
ABSTRACT
Auditory landmarks can contribute to spatial updating during navigation with vision. Although large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether individuals optimally combine auditory cues with visual cues to decrease perceptual uncertainty, or variability, has not been well documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with visual landmarks, auditory landmarks, or both. In addition, participants experienced a fourth condition with a covert spatial conflict in which auditory landmarks were rotated relative to visual landmarks. Participants generally relied more on visual landmarks than auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task, but with simulated low vision in the form of a blur filter to increase visual uncertainty. Again, participants relied more on visual landmarks than auditory ones, and no multisensory benefit occurred. Participants navigating with blur did not rely more on their hearing than the group that navigated with normal vision. These results support previous research showing that one sensory modality at a time may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate task- and participant-specific factors that lead to different strategies of multisensory cue combination with auditory and visual cues.
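The question of "optimal" combination is conventionally benchmarked against maximum-likelihood integration, in which each cue is weighted by its inverse variance and the combined estimate is less variable than either cue alone. A sketch of that benchmark, not the authors' analysis code (the numbers are illustrative):

```python
def mle_combination(est_a, var_a, est_v, var_v):
    """Reliability-weighted (maximum-likelihood) combination of two cues.
    Weights are inverse variances; the combined variance is never larger
    than that of the more reliable cue."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    combined = w_a * est_a + w_v * est_v
    combined_var = (var_a * var_v) / (var_a + var_v)
    return combined, combined_var

# Visual cue more reliable than auditory: prediction is heavy visual weighting
est, var = mle_combination(est_a=10.0, var_a=9.0, est_v=4.0, var_v=1.0)
print(est, var)  # 4.6, 0.9 -> close to the visual estimate, lower variance
```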
Subjects
Auditory Perception, Cues (Psychology), Spatial Navigation, Humans, Male, Spatial Navigation/physiology, Female, Adult, Young Adult, Auditory Perception/physiology, Visual Perception/physiology, Space Perception/physiology, Photic Stimulation/methods, Low Vision/physiopathology, Virtual Reality, Acoustic Stimulation/methods, Visual Acuity/physiology
ABSTRACT
Many objective measurements have been proposed to evaluate sound reproduction, but it is often difficult to link measured differences with the differences perceived by listeners. In the literature, the best correlations with perception were obtained for measures involving an auditory model. The present study investigated simpler measurements to highlight the signal processing steps required to make the link with perception. It is based on dissimilarity evaluations from two previous studies: one comparing 12 single loudspeakers using 3 musical excerpts, and one comparing 21 headphones using 2 musical excerpts; both studies highlighted 2 perceptual dimensions associated with the relative strengths of bass and midrange. The objective approach compared several signal analyses computing the dissimilarity between the spectra of the recorded sound reproductions. The results show that a third-octave analysis can accurately describe the overall dissimilarity between the loudspeakers or headphones and the two underlying perceptual dimensions.
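A third-octave analysis of this general kind can be approximated by summing a power spectrum into fractional-octave bands and taking the distance between band-level vectors. A rough sketch, assuming a Welch spectrum as a stand-in for a true fractional-octave filter bank:

```python
import numpy as np
from scipy.signal import welch

def third_octave_levels(x, fs, f_low=25.0, f_high=16000.0):
    """Band levels (dB) on a nominal third-octave grid, estimated from a
    Welch power spectrum (a simplification of a real filter bank)."""
    f, pxx = welch(x, fs=fs, nperseg=8192)
    n_bands = int(1 + np.floor(3 * np.log2(f_high / f_low)))
    centers = f_low * 2 ** (np.arange(n_bands) / 3)
    levels = []
    for fc in centers:
        band = (f >= fc * 2 ** (-1 / 6)) & (f < fc * 2 ** (1 / 6))
        levels.append(10 * np.log10(pxx[band].sum() + 1e-20))
    return centers, np.array(levels)

def spectral_dissimilarity(x, y, fs):
    """Euclidean distance between the third-octave spectra of two recordings."""
    _, lx = third_octave_levels(x, fs)
    _, ly = third_octave_levels(y, fs)
    return float(np.sqrt(((lx - ly) ** 2).sum()))

fs = 44100
t = np.arange(fs) / fs
bass = np.sin(2 * np.pi * 100 * t)             # bass-heavy test signal
mid = np.sin(2 * np.pi * 1000 * t)             # midrange test signal
print(spectral_dissimilarity(bass, mid, fs))   # large distance, as expected
```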
Subjects
Acoustic Stimulation, Acoustics, Auditory Perception, Music, Humans, Acoustics/instrumentation, Adult, Female, Male, Electronic Amplifiers, Sound, Young Adult, Sound Spectrography, Computer-Assisted Signal Processing, Equipment Design
ABSTRACT
Physiological oscillations, such as those involved in brain activity, heartbeat, and respiration, display inherent rhythmicity across various timescales. However, adaptive behavior arises from the interaction between these intrinsic rhythms and external environmental cues. In this study, we used multimodal neurophysiological recordings, simultaneously capturing signals from the central and autonomic nervous systems (CNS and ANS), to explore the dynamics of brain and body rhythms in response to rhythmic auditory stimulation across three conditions: baseline (no auditory stimulation), passive auditory processing, and active auditory processing (discrimination task). Our findings demonstrate that active engagement with auditory stimulation synchronizes both CNS and ANS rhythms with the external rhythm, unlike passive and baseline conditions, as evidenced by power spectral density (PSD) and coherence analyses. Importantly, phase angle analysis revealed a consistent alignment across participants between their physiological oscillatory phases at stimulus or response onsets. This alignment was associated with reaction times, suggesting that certain phases of physiological oscillations are spontaneously prioritized across individuals due to their adaptive role in sensorimotor behavior. These results highlight the intricate interplay between CNS and ANS rhythms in optimizing sensorimotor responses to environmental demands, suggesting a potential mechanism of embodied predictive processing.
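PSD and stimulus-brain coherence of the kind described are routinely estimated with Welch's method. A toy sketch on synthetic data (the sampling rate, stimulation frequency, and entrainment strength are illustrative, not the study's values):

```python
import numpy as np
from scipy.signal import welch, coherence

fs = 250.0                       # sampling rate (Hz), illustrative
t = np.arange(0, 120, 1 / fs)    # two minutes of data
stim_rate = 1.5                  # rhythmic stimulation frequency (Hz)

stimulus = np.sin(2 * np.pi * stim_rate * t)
# Toy "physiological" signal: partially entrained to the stimulus plus noise
signal = 0.4 * np.sin(2 * np.pi * stim_rate * t + 0.3) + np.random.randn(t.size)

f_psd, psd = welch(signal, fs=fs, nperseg=int(8 * fs))
f_coh, coh = coherence(stimulus, signal, fs=fs, nperseg=int(8 * fs))
print(f"coherence peaks near {f_coh[np.argmax(coh)]:.2f} Hz "
      f"(stimulation at {stim_rate} Hz)")
```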
Subjects
Acoustic Stimulation, Humans, Male, Female, Adult, Acoustic Stimulation/methods, Young Adult, Auditory Perception/physiology, Autonomic Nervous System/physiology, Electroencephalography/methods, Reaction Time/physiology, Brain/physiology, Periodicity
ABSTRACT
Within species, vocal and auditory systems presumably coevolved to converge on a critical temporal acoustic structure that can be best produced and perceived. While dogs cannot produce articulated sounds, they respond to speech, raising the question of whether this heterospecific receptive ability is shaped by exposure to speech or remains bounded by their own sensorimotor capacity. Using acoustic analyses of dog vocalisations, we show that their main production rhythm is slower than the dominant (syllabic) speech rate, and that human dog-directed speech falls halfway in between. Comparative exploration of neural (electroencephalography) and behavioural responses to speech reveals that comprehension in dogs relies on tracking a slower speech rhythm (delta) than humans do (theta), even though dogs are equally sensitive to speech content and prosody. Thus, dog audio-motor tuning differs from humans', and we hypothesise that humans may adjust their speech rate to this shared temporal channel as a means of improving communication efficacy.
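The "main production rhythm" of a vocalization is commonly read off the peak of its amplitude-envelope modulation spectrum. A minimal sketch of that generic analysis (not the authors' pipeline):

```python
import numpy as np
from scipy.signal import hilbert, welch

def dominant_rhythm(audio, fs, f_min=0.5, f_max=10.0):
    """Peak of the amplitude-envelope modulation spectrum (Hz), i.e. the
    dominant production rhythm of a vocalization or of speech."""
    envelope = np.abs(hilbert(audio))
    f, pxx = welch(envelope - envelope.mean(), fs=fs, nperseg=int(4 * fs))
    band = (f >= f_min) & (f <= f_max)
    return f[band][np.argmax(pxx[band])]

# Toy check: a 5 Hz amplitude-modulated carrier (a syllable-like rate)
fs = 16000
t = np.arange(0, 8, 1 / fs)
audio = (1 + np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 300 * t)
print(dominant_rhythm(audio, fs))  # ~5 Hz
```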
Subjects
Speech, Animal Vocalization, Animals, Dogs, Humans, Animal Vocalization/physiology, Speech/physiology, Male, Female, Electroencephalography, Auditory Perception/physiology, Adult, Human-Animal Interaction, Acoustic Stimulation, Speech Perception/physiology
ABSTRACT
Frequent listening to unfamiliar music excerpts forms functional connectivity in the brain as the music becomes familiar and memorable. However, where these connections spectrally arise in the cerebral cortex during music familiarization has yet to be determined. This study investigates electrophysiological changes in phase-based functional connectivity recorded with electroencephalography (EEG) from twenty participants' brains while they passively listened three times to initially unknown classical music excerpts. Functional connectivity is evaluated by measuring phase synchronization between all pairwise combinations of EEG electrodes across all repetitions via repeated measures ANOVA, and between every two repetitions of listening to unknown music with the weighted phase lag index (WPLI) method in different frequency bands. The results indicate increased phase synchronization between the right frontal and right parietal areas in the theta and alpha bands during gradual short-term familiarization. In addition, increased phase synchronization was observed between the right temporal and right parietal areas in the theta band during gradual music familiarization. Overall, this study explores the effects of short-term music familiarization on neural responses by revealing that repetitions form phasic coupling in the theta and alpha bands in the right hemisphere during passive listening.
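The WPLI downweights phase differences near 0 or π, which are the ones most vulnerable to volume conduction, by normalizing the imaginary part of the cross-spectrum. A compact across-trials estimator, assuming band-pass-filtered input (a generic implementation, not the study's code):

```python
import numpy as np
from scipy.signal import hilbert

def wpli(x_trials, y_trials):
    """Weighted phase lag index between two channels, estimated across
    trials from the imaginary cross-spectrum of analytic signals.
    Inputs: (n_trials, n_samples) arrays, already band-pass filtered."""
    sx = hilbert(x_trials, axis=1)
    sy = hilbert(y_trials, axis=1)
    im = np.imag(sx * np.conj(sy))        # imaginary cross-spectrum
    num = np.abs(im.mean(axis=0))
    den = np.abs(im).mean(axis=0) + 1e-20
    return float((num / den).mean())      # average over time

# Toy demo: y lags x by a consistent ~40 ms at 6 Hz -> WPLI near 1
rng = np.random.default_rng(0)
t = np.arange(2000) / 250.0
x = np.array([np.sin(2 * np.pi * 6 * t + rng.uniform(0, 2 * np.pi))
              for _ in range(50)])
y = np.roll(x, 10, axis=1) + 0.1 * rng.standard_normal(x.shape)
print(wpli(x, y))
```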
Subjects
Alpha Rhythm, Auditory Perception, Electroencephalography, Frontal Lobe, Music, Parietal Lobe, Theta Rhythm, Humans, Male, Female, Alpha Rhythm/physiology, Young Adult, Parietal Lobe/physiology, Theta Rhythm/physiology, Adult, Auditory Perception/physiology, Frontal Lobe/physiology, Electroencephalography/methods, Temporal Lobe/physiology, Psychological Recognition/physiology, Acoustic Stimulation/methods
ABSTRACT
BACKGROUND: Which mammals show vocal learning abilities, e.g., can learn new sounds, or learn to use sounds in new contexts? Vocal usage and comprehension learning are submodules of vocal learning. Specifically, vocal usage learning is the ability to learn to use a vocalization in a new context; vocal comprehension learning is the ability to comprehend a vocalization in a new context. Among mammals, harbor seals (Phoca vitulina) are good candidates for investigating vocal learning. Here, we test whether harbor seals are capable of vocal usage and comprehension learning. RESULTS: We trained two harbor seals to (i) switch contexts from a visual to an auditory cue. In particular, the seals first produced two vocalization types in response to two hand signs; they then transitioned to producing these two vocalization types upon the presentation of two distinct sets of playbacks of their own vocalizations. We then (ii) exposed the seals to a combination of trained and novel vocalization stimuli. In a final experiment, (iii) we broadcast only novel vocalizations of the two vocalization types to test whether the seals could generalize from the trained stimuli to novel items of a given vocal category. Both seals learned all tasks and took ≤ 16 sessions to succeed across all experiments. In particular, the seals showed contextual learning by switching from former visual to novel auditory cues, vocal matching, and generalization. Finally, by responding to the played-back vocalizations with distinct vocalizations, the animals showed vocal comprehension learning. CONCLUSIONS: It has been suggested that harbor seals are vocal learners; however, to date, these observations had not been confirmed in controlled experiments. Here, through three experiments, we showed that harbor seals are capable of both vocal usage and comprehension learning.
Subjects
Comprehension, Learning, Phoca, Animal Vocalization, Animals, Phoca/physiology, Animal Vocalization/physiology, Learning/physiology, Comprehension/physiology, Male, Acoustic Stimulation, Female, Auditory Perception/physiology, Cues (Psychology)
ABSTRACT
Listening to conversing talkers in quiet environments and remembering the content is a common activity. However, research on the cognitive demands involved is limited. This study investigates the relevance of individuals' cognitive functions for listeners' memory of two-talker conversations and their listening effort in quiet listening settings. A dual-task paradigm was employed to explore memory of conversational content and listening effort while analyzing the role of participants' (n = 29) working memory capacity (measured through the operation span task), attention (Frankfurt attention inventory 2), and information-processing speed (trail making test). In the primary task, participants listened to a conversation between a male and female talker and answered content-related questions. The two talkers' audio signals were presented through headphones, either spatially separated (±60°) or co-located (0°). Participants concurrently performed a vibrotactile pattern recognition task as a secondary task to measure listening effort. Results indicated that attention and processing speed were related to memory of conversational content and that all three cognitive functions were related to listening effort. Memory performance and listening effort were similar for spatially separated and co-located talkers when considering the psychometric measures. This research offers valuable insights into cognitive processes during two-talker conversations in quiet settings.
Subjects
Attention, Cognition, Speech Perception, Humans, Male, Female, Cognition/physiology, Adult, Attention/physiology, Young Adult, Speech Perception/physiology, Auditory Perception/physiology, Short-Term Memory/physiology, Memory/physiology
ABSTRACT
Audible very-high frequency sound (VHFS) and ultrasound (US) have been rated as more unpleasant than lower frequency sounds when presented to listeners at similar sensation levels (SLs). In this study, 17 participants rated the sensory unpleasantness of 14-, 16-, and 18-kHz tones and a 1-kHz reference tone. Tones were presented at equal subjective loudness levels for each individual, corresponding to levels of 10, 20, and 30 dB SL measured at 1 kHz. Participants were categorized as either "symptomatic" or "asymptomatic" based on self-reported previous symptoms that they attributed to exposure to VHFS/US. In both groups, subjective loudness increased more rapidly with sound pressure level for VHFS/US than for the 1-kHz reference tone, which is consistent with a reduced dynamic range at the higher frequencies. For loudness-matched tones, participants rated VHFS/US as more unpleasant than the 1-kHz reference. These results suggest that increased sensory unpleasantness and reduced dynamic range at high frequencies should be considered when designing or deploying equipment that emits VHFS/US audible to exposed people.
Subjects
Acoustic Stimulation, Loudness Perception, Ultrasonic Waves, Humans, Male, Female, Adult, Young Adult, Sound, Auditory Perception, Pressure, Auditory Threshold
ABSTRACT
A social individual needs to manage the complex information in his or her environment effectively, relative to his or her own goals, in order to obtain relevant information. This paper presents a neural architecture aiming to reproduce in robots the attention mechanisms (alerting/orienting/selecting) that operate efficiently in humans during audiovisual tasks. We evaluated the system on its ability to identify relevant sources of information on the faces of subjects uttering vowels. We propose a developmental model of audio-visual attention (MAVA) combining Hebbian learning and a competition between saliency maps based on visual movement and audio energy. MAVA effectively combines bottom-up and top-down information to orient the system toward pertinent areas. The system has several advantages, including online and autonomous learning abilities, low computation time, and robustness to environmental noise. MAVA outperforms other artificial models for detecting speech sources under various noise conditions.
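The abstract names Hebbian learning but gives no update rule; a minimal Oja-stabilized Hebbian sketch of how an association between co-active saliency signals could be learned online (illustrative only, not MAVA's actual rule):

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.01):
    """Hebbian rule with Oja's normalization: weights grow for co-active
    input/output units while staying bounded."""
    return w + lr * y * (x - y * w)

# Toy association: saliency values of 4 visual regions drive one unit
rng = np.random.default_rng(1)
w = 0.1 * rng.standard_normal(4)
for _ in range(2000):
    x = rng.random(4)              # e.g., per-region visual saliency
    y = float(w @ x)               # unit activity
    w = hebbian_update(w, x, y)
print(w)  # converges toward the input's dominant correlation direction
```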
Subjects
Attention, Robotics, Humans, Robotics/methods, Attention/physiology, Infant, Learning/physiology, Visual Perception/physiology, Language Development, Auditory Perception/physiology, Language
ABSTRACT
Object-based attention operates both in perception and visual working memory. While the efficient perception of auditory stimuli also requires the formation of auditory objects, little is known about their role in auditory working memory (AWM). To investigate whether attention to one object feature in AWM leads to the involuntary maintenance of another, task-irrelevant feature, we conducted four experiments. Stimuli were abstract sounds that differed on the dimensions frequency and location, only one of which was task-relevant in each experiment. The first two experiments required a match-nonmatch decision about a probe sound whose irrelevant feature value could either be identical to or differ from the memorized stimulus. Matches on the relevant dimension were detected more accurately when the irrelevant feature matched as well, whereas for nonmatches on the relevant dimension, performance was better for irrelevant feature nonmatches. Signal-detection analysis showed that changes of irrelevant frequency reduced the sensitivity for sound location. Two further experiments used continuous report tasks. When location was the target feature, changes of irrelevant sound frequency had an impact on both recall error and adjustment time. Irrelevant location changes affected adjustment time only. In summary, object-based attention led to a concurrent maintenance of task-irrelevant sound features in AWM.
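The signal-detection analysis mentioned here rests on the standard sensitivity index d' = z(hit rate) − z(false-alarm rate). A small self-contained sketch (the example rates are made up):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, eps=1e-3):
    """Sensitivity index d' = z(hits) - z(false alarms); rates are clipped
    away from 0 and 1 to keep the z-transform finite."""
    z = NormalDist().inv_cdf
    clip = lambda p: min(max(p, eps), 1 - eps)
    return z(clip(hit_rate)) - z(clip(fa_rate))

# E.g., comparing conditions where the irrelevant feature matched vs. not
print(round(d_prime(0.85, 0.20), 2))  # ~1.88
print(round(d_prime(0.75, 0.30), 2))  # ~1.20
```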
Subjects
Acoustic Stimulation, Attention, Auditory Perception, Short-Term Memory, Humans, Short-Term Memory/physiology, Female, Male, Auditory Perception/physiology, Adult, Attention/physiology, Young Adult, Reaction Time/physiology
ABSTRACT
Extensive research with musicians has shown that instrumental musical training can have a profound impact on how acoustic features are processed in the brain. However, less is known about the influence of singing training on neural activity during voice perception, particularly in response to salient acoustic features, such as the vocal vibrato in operatic singing. To address this gap, the present study employed functional magnetic resonance imaging (fMRI) to measure brain responses in trained opera singers and musically untrained controls listening to recordings of opera singers performing in two distinct styles: a full operatic voice with vibrato, and a straight voice without vibrato. Results indicated that for opera singers, perception of operatic voice led to differential fMRI activations in bilateral auditory cortical regions and the default mode network. In contrast, musically untrained controls exhibited differences only in bilateral auditory cortex. These results suggest that operatic singing training triggers experience-dependent neural changes in the brain that activate self-referential networks, possibly through embodiment of acoustic features associated with one's own singing style.
Subjects
Magnetic Resonance Imaging, Singing, Humans, Singing/physiology, Male, Female, Adult, Young Adult, Auditory Perception/physiology, Music, Default Mode Network/physiology, Auditory Cortex/physiology, Auditory Cortex/diagnostic imaging, Voice/physiology, Brain Mapping, Brain/physiology, Brain/diagnostic imaging
ABSTRACT
Introduction: Auditory scene analysis refers to the system through which the auditory system distinguishes distinct auditory events and sources to create meaningful auditory information. The exact number of directly perceived auditory stimuli is unknown; studies suggest it may range from 3 to 5. This number differs among individuals and may indirectly index the ability to store and process complex information, a capacity related to memory load and tied to human cognitive processes. Aim: This study aims to further identify and quantify the number of sounds that can be perceived simultaneously in a complex auditory environment. Material and methods: Participants were presented with structured acoustic recordings and were asked to identify the exact number of targeted stimuli heard throughout the test. The experiment was designed to assess auditory load and determine the maximum number of auditory stimuli that a healthy human can perceive at once. Results: Our study showed that, on average, participants could identify up to three sounds at once, with response accuracy declining progressively for four sounds or more. Conclusions: This study investigated the human capacity to detect and identify multiple sound signals simultaneously in a noisy environment. By understanding this ability, we sought to assess cognitive reserve in individuals. Our objective was to determine whether auditory load could serve as a diagnostic tool for cognitive evaluation. We believe that further research will establish the validity of this approach and that it may, in time, become a viable method for assessing cognitive function.
Subjects
Auditory Perception, Cognitive Reserve, Humans, Auditory Perception/physiology, Female, Male, Adult, Cognitive Reserve/physiology, Young Adult, Acoustic Stimulation/methods, Memory/physiology
ABSTRACT
Cortical processing of auditory information can be affected by interspecies differences as well as brain states. Here we compare multifeature spectro-temporal receptive fields (STRFs) and associated input/output functions or nonlinearities (NLs) of neurons in primary auditory cortex (AC) of four mammalian species. Single-unit recordings were performed in awake animals (female squirrel monkeys and female and male mice) and in anesthetized animals (female squirrel monkeys, rats, and cats). Neuronal responses were modeled as consisting of two STRFs and their associated NLs. The NLs for the STRF with the highest information content show a broad distribution between linear and quadratic forms. In awake animals, we find a higher percentage of quadratic-like NLs, as opposed to more linear NLs in anesthetized animals. Moderate sex differences in the shape of NLs were observed between male and female unanesthetized mice. This indicates that the core AC possesses a rich variety of potential computations, particularly in awake animals, suggesting that multiple computational algorithms are at play to enable the auditory system's robust recognition of auditory events.
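The paper's two-filter model is more sophisticated, but the basic linear-nonlinear recipe behind STRFs and their NLs is: estimate a filter, project the stimulus through it, then bin the projections to recover the input/output function whose shape (linear vs. quadratic) is at issue. A simplified spike-triggered-average sketch under those assumptions:

```python
import numpy as np

def spike_triggered_average(spec, spikes, n_lags=20):
    """Crude STRF estimate: spike-weighted average of the stimulus history.
    spec: (n_freq, n_time) spectrogram; spikes: (n_time,) spike counts."""
    sta, total = np.zeros((spec.shape[0], n_lags)), 0.0
    for t in range(n_lags, spec.shape[1]):
        if spikes[t] > 0:
            sta += spikes[t] * spec[:, t - n_lags:t]
            total += spikes[t]
    return sta / max(total, 1.0)

def estimate_nonlinearity(spec, spikes, strf, n_bins=10):
    """Mean spike count as a function of the binned linear filter output --
    the NL whose linear vs. quadratic shape is compared across states."""
    n_lags = strf.shape[1]
    proj = np.array([np.sum(strf * spec[:, t - n_lags:t])
                     for t in range(n_lags, spec.shape[1])])
    rate = spikes[n_lags:]
    edges = np.quantile(proj, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(proj, edges[1:-1]), 0, n_bins - 1)
    return np.array([rate[idx == b].mean() for b in range(n_bins)])

# Toy demo with a known filter and Poisson spiking
rng = np.random.default_rng(0)
spec = rng.standard_normal((16, 5000))
true_filt = rng.standard_normal((16, 20))
drive = np.array([np.sum(true_filt * spec[:, t - 20:t]) for t in range(20, 5000)])
spikes = np.concatenate([np.zeros(20), rng.poisson(np.exp(drive / 10.0))])
strf = spike_triggered_average(spec, spikes)
print(estimate_nonlinearity(spec, spikes, strf))  # rises with filter output
```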
Subjects
Auditory Cortex, Animals, Auditory Cortex/physiology, Female, Male, Cats, Mice, Rats, Acoustic Stimulation/methods, Neurons/physiology, Saimiri, Auditory Perception/physiology, Species Specificity, Neurological Models, Action Potentials/physiology, Inbred C57BL Mice
ABSTRACT
Listeners are sensitive to interaural time differences carried in the envelope of high-frequency sounds (ITDENV), but the salience of this cue depends on certain properties of the envelope and, in particular, on the presence/depth of amplitude modulation (AM) in the envelope. This study tested the hypothesis that individuals with sensorineural hearing loss, who show enhanced sensitivity to AM under certain conditions, would also show superior ITDENV sensitivity under those conditions. The second hypothesis was that variations in ITDENV sensitivity across individuals can be related to variations in sensitivity to AM. To enable a direct comparison, a standard adaptive AM detection task was used along with a modified version of it designed to measure ITDENV sensitivity. The stimulus was a 4-kHz tone modulated at rates of 32, 64, or 128 Hz and presented at a 30 dB sensation level. Both tasks were attempted by 16 listeners with normal hearing and 16 listeners with hearing loss. Consistent with the hypotheses, AM and ITDENV thresholds were correlated and tended to be better in listeners with hearing loss. A control experiment emphasized that absolute level may be a consideration when interpreting the group effects.
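Adaptive psychoacoustic tracks like the AM-detection task here often use an n-down/1-up staircase; a 2-down/1-up rule converges near 70.7% correct. A generic sketch (the step sizes and the toy listener are illustrative, not the study's procedure):

```python
import random

def two_down_one_up(run_trial, start_level, step=4.0, min_step=1.0,
                    n_reversals=8):
    """2-down/1-up adaptive track: two correct -> harder, one wrong ->
    easier; the step halves at each reversal. Threshold = mean of the
    last four reversal levels (converges near 70.7% correct)."""
    level, streak, direction, reversals = start_level, 0, 0, []
    while len(reversals) < n_reversals:
        if run_trial(level):
            streak += 1
            if streak == 2:
                streak = 0
                if direction == +1:
                    reversals.append(level)
                    step = max(step / 2, min_step)
                direction = -1
                level -= step
        else:
            streak = 0
            if direction == -1:
                reversals.append(level)
                step = max(step / 2, min_step)
            direction = +1
            level += step
    return sum(reversals[-4:]) / 4

# Toy listener: a logistic psychometric function in "dB" units
listener = lambda lv: random.random() < 1 / (1 + 10 ** ((10 - lv) / 4))
print(round(two_down_one_up(listener, start_level=30.0), 1))  # ~11-12
```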
Subjects
Acoustic Stimulation, Auditory Threshold, Cues (Psychology), Sensorineural Hearing Loss, Humans, Adult, Acoustic Stimulation/methods, Middle Aged, Female, Male, Sensorineural Hearing Loss/physiopathology, Sensorineural Hearing Loss/psychology, Young Adult, Time Factors, Case-Control Studies, Aged, Auditory Perception/physiology, Pure-Tone Audiometry, Psychoacoustics
ABSTRACT
Audiovisual (AV) interaction has been shown in many studies of auditory cortex. However, the underlying processes and circuits are unclear because few studies have used methods that delineate the timing and laminar distribution of net excitatory and inhibitory processes within areas, much less across cortical levels. This study examined laminar profiles of neuronal activity in auditory core (AC) and parabelt (PB) cortices recorded from macaques during active discrimination of conspecific faces and vocalizations. We found modulation of multi-unit activity (MUA) in response to isolated visual stimulation, characterized by a brief deep MUA spike, putatively in white matter, followed by mid-layer MUA suppression in core auditory cortex; the later suppressive event had clear current source density concomitants, while the earlier MUA spike did not. We observed a similar facilitation-suppression sequence in the PB, with a later onset latency. During combined AV stimulation, there was a moderate reduction of responses to sound during the visual-evoked MUA suppression interval in both AC and PB. These data suggest a common sequence of afferent spikes followed by synaptic inhibition; however, differences in timing and laminar location may reflect distinct visual projections to AC and PB.
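Current source density, the quantity behind the "current source density concomitants" here, is conventionally estimated as the negative second spatial derivative of the laminar LFP across equally spaced contacts. A minimal sketch (the spacing and conductivity values are placeholders):

```python
import numpy as np

def current_source_density(lfp, spacing_mm=0.2, conductivity=0.3):
    """Standard CSD estimate: -sigma * d2(phi)/dz2 via finite differences
    over depth. lfp: (n_channels, n_samples), ordered superficial to deep."""
    d2 = lfp[:-2] - 2 * lfp[1:-1] + lfp[2:]
    return -conductivity * d2 / spacing_mm ** 2

# Toy laminar profile: a mid-depth generator oscillating at 10 Hz
depth = np.linspace(-1, 1, 16)[:, None]
t = np.linspace(0, 0.1, 100)[None, :]
lfp = np.exp(-depth ** 2 / 0.1) * np.sin(2 * np.pi * 10 * t)
print(current_source_density(lfp).shape)  # (14, 100): edge channels dropped
```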
Subjects
Auditory Cortex, Photic Stimulation, Animals, Auditory Cortex/physiology, Male, Photic Stimulation/methods, Acoustic Stimulation/methods, Auditory Perception/physiology, Visual Perception/physiology, Macaca mulatta, Action Potentials/physiology, Neurons/physiology, Female, Animal Vocalization/physiology
ABSTRACT
Emotional responsiveness in neonates, particularly their ability to discern vocal emotions, plays an evolutionarily adaptive role in human communication and adaptive behaviors. The developmental trajectory of emotional sensitivity in neonates is crucial for understanding the foundations of early social-emotional functioning. However, the precise onset of this sensitivity and its relationship with gestational age (GA) remain subjects of investigation. In a study involving 120 healthy neonates categorized into six groups based on their GA (ranging from 35 to 40 weeks), we explored their emotional responses to vocal stimuli. These stimuli encompassed disyllables with happy and neutral prosodies, alongside acoustically matched nonvocal control sounds. The assessments occurred during natural sleep states using the odd-ball paradigm and event-related potentials. The results reveal a distinct developmental change at 37 weeks GA, marking the point at which neonates exhibit heightened perceptual acuity for emotional vocal expressions. This newfound ability is substantiated by the presence of the mismatch response, akin to an initial form of adult mismatch negativity, elicited in response to positive emotional vocal prosody. Notably, the specificity of this perceptual shift is evident in that no such discrimination was observed for acoustically matched control sounds. Neonates born before 37 weeks GA do not display this level of discrimination ability. This developmental change has important implications for our understanding of early social-emotional development, highlighting the role of gestational age in shaping early perceptual abilities. Moreover, while these findings introduce the potential for a valuable screening tool for conditions like autism, characterized by atypical social-emotional functions, it is important to note that the current data are not yet robust enough to fully support this application. This study makes a substantial contribution to the broader field of developmental neuroscience and holds promise for future research on early intervention in neurodevelopmental disorders.
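Computationally, a mismatch response is a deviant-minus-standard difference wave built from baseline-corrected epochs around each stimulus in the odd-ball stream. A generic single-channel sketch (not the study's pipeline):

```python
import numpy as np

def mismatch_response(eeg, events, fs, pre=0.1, post=0.5):
    """Deviant-minus-standard difference wave from an odd-ball recording.
    eeg: (n_samples,) one channel; events: iterable of (sample, is_deviant)."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    std, dev = [], []
    for s, is_deviant in events:
        if s - n_pre < 0 or s + n_post > eeg.size:
            continue
        epoch = eeg[s - n_pre:s + n_post]
        epoch = epoch - epoch[:n_pre].mean()     # pre-stimulus baseline
        (dev if is_deviant else std).append(epoch)
    return np.mean(dev, axis=0) - np.mean(std, axis=0)

# Toy stream: a deviant every fifth stimulus, 600 ms onset asynchrony
fs = 500
rng = np.random.default_rng(3)
eeg = rng.standard_normal(60 * fs)
events = [(int(round(s * fs)), i % 5 == 4)
          for i, s in enumerate(np.arange(1, 58, 0.6))]
print(mismatch_response(eeg, events, fs).shape)  # (300,) samples
```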
Subjects
Emotions, Gestational Age, Humans, Newborn Infant, Emotions/physiology, Female, Male, Evoked Potentials/physiology, Acoustic Stimulation, Voice/physiology, Auditory Perception/physiology
ABSTRACT
Communication in the real world is inherently multimodal. When having a conversation, typically sighted and hearing people use both auditory and visual cues to understand one another. For example, objects may make sounds as they move in space, or we may use the movement of a person's mouth to better understand what they are saying in a noisy environment. Still, many neuroscience experiments rely on unimodal stimuli to understand encoding of sensory features in the brain. The extent to which visual information may influence encoding of auditory information and vice versa in natural environments is thus unclear. Here, we addressed this question by recording scalp electroencephalography (EEG) in 11 subjects as they listened to and watched movie trailers in audiovisual (AV), visual-only (V), and audio-only (A) conditions. We then fit linear encoding models that described the relationship between the brain responses and the acoustic, phonetic, and visual information in the stimuli. We also compared whether auditory and visual feature tuning was the same when stimuli were presented in the original AV format versus when visual or auditory information was removed. In these stimuli, visual and auditory information was relatively uncorrelated, and included spoken narration over a scene as well as animated or live-action characters talking with and without their face visible. For this stimulus set, we found that auditory feature tuning was similar in the AV and A-only conditions, and similarly, tuning for visual information was similar when stimuli were presented with the audio present (AV) and when the audio was removed (V only). In a cross-prediction analysis, we investigated whether models trained on AV data predicted responses to A-only or V-only test data as well as models trained on unimodal data. Overall, prediction performance using AV training and V-only test sets was similar to that using V-only training and test sets, suggesting that the auditory information had a relatively small effect on EEG. In contrast, prediction performance using AV training and an A-only test set was slightly worse than using matched A-only training and test sets. This suggests that the visual information has a stronger influence on EEG, though this makes no qualitative difference in the derived feature tuning. In effect, our results show that researchers may benefit from the richness of multimodal datasets, which can then be used to answer more than one research question.
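Linear encoding models of this type are usually fit as ridge-regression temporal response functions (TRFs) over time-lagged stimulus features; cross-prediction then means fitting on one condition and correlating predictions with another condition's EEG. A compact sketch (the lag window, regularization, and wrap-around edge handling are simplifications):

```python
import numpy as np

def fit_trf(stim, eeg, fs, tmin=-0.1, tmax=0.4, lam=1e2):
    """Ridge-regression temporal response function (linear encoding model).
    stim: (n_samples, n_features); eeg: (n_samples, n_channels).
    Returns weights of shape (n_lags, n_features, n_channels)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs))
    n, n_feat = stim.shape
    X = np.zeros((n, lags.size * n_feat))
    for i, lag in enumerate(lags):            # time-lagged design matrix
        X[:, i * n_feat:(i + 1) * n_feat] = np.roll(stim, lag, axis=0)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
    return w.reshape(lags.size, n_feat, eeg.shape[1])

# Toy demo: EEG is feature 0 delayed by 120 ms plus noise
fs = 100
rng = np.random.default_rng(4)
stim = rng.standard_normal((2000, 2))
eeg = np.roll(stim[:, :1], 12, axis=0) + 0.5 * rng.standard_normal((2000, 1))
w = fit_trf(stim, eeg, fs)
print(w.shape)  # (50 lags, 2 features, 1 channel); peak at the 120 ms lag
```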
Subjects
Acoustic Stimulation, Auditory Perception, Electroencephalography, Photic Stimulation, Visual Perception, Humans, Electroencephalography/methods, Male, Female, Auditory Perception/physiology, Adult, Visual Perception/physiology, Young Adult, Brain/physiology, Neurological Models, Computational Biology
ABSTRACT
Congenital deafness enhances responses of auditory cortices to non-auditory tasks, yet the nature of this reorganization is not well understood. Here, naturalistic stimuli are used to induce neural synchrony across early deaf and hearing individuals. Participants watch a silent animated film in an intact version and in three versions with gradually distorted meaning. Differences between groups are observed in higher-order auditory cortices for all stimuli, with no statistically significant effects in the primary auditory cortex. Comparison between levels of scrambling reveals a heterogeneity of function in secondary auditory areas. Both hemispheres show greater synchrony in the deaf than in the hearing participants for the intact movie and its high-level variants. However, only the right hemisphere shows increased inter-subject synchrony in the deaf participants for the low-level movie variants. An event segmentation analysis validates these results: the dynamics of the right secondary auditory cortex in deaf participants consist of shorter events with more transitions than the left. Our results reveal how deaf individuals use their auditory cortex to process visual meaning.
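Inter-subject synchrony of this kind is typically computed as leave-one-out inter-subject correlation (ISC): each subject's regional time course is correlated with the average of everyone else's, and group means are then compared. A minimal sketch with synthetic data:

```python
import numpy as np

def intersubject_correlation(timecourses):
    """Leave-one-out ISC. timecourses: (n_subjects, n_time) for one region;
    returns one correlation per subject."""
    tc = np.asarray(timecourses, dtype=float)
    n = tc.shape[0]
    return np.array([np.corrcoef(tc[i], tc[np.arange(n) != i].mean(axis=0))[0, 1]
                     for i in range(n)])

# Toy "synchronized group": a shared stimulus-driven component plus noise
rng = np.random.default_rng(2)
shared = rng.standard_normal(300)
group = shared + 0.8 * rng.standard_normal((10, 300))
print(intersubject_correlation(group).mean())  # well above zero
```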
Subjects
Auditory Cortex, Deafness, Visual Perception, Humans, Auditory Cortex/physiopathology, Auditory Cortex/physiology, Deafness/physiopathology, Deafness/congenital, Male, Female, Adult, Young Adult, Visual Perception/physiology, Photic Stimulation, Magnetic Resonance Imaging, Auditory Perception/physiology, Brain Mapping
ABSTRACT
Sensorimotor synchronization (SMS) is the temporal coordination of motor movements with external or imagined stimuli. Finger-tapping studies indicate better SMS performance with auditory or tactile stimuli than with visual stimuli. However, SMS with a visual rhythm can be improved by enriching stimulus properties (e.g., spatiotemporal content) or by individual differences (e.g., one's vividness of auditory imagery). We previously showed that higher self-reported vividness of auditory imagery led to more consistent synchronization-continuation performance when participants continued without a guiding visual rhythm. Here, we examined the contribution of imagery to the SMS performance of proficient imagers, adding an auditory or visual distractor task during the continuation phase. While the visual distractor task had minimal effect, SMS consistency was significantly worse when the auditory distractor task was present. Our electroencephalography analysis revealed beat-related neural entrainment only when the visual or auditory distractor tasks were present. During continuation with the auditory distractor task, the neural entrainment showed an occipital electrode distribution, suggesting the involvement of visual imagery. Unique to SMS continuation with the auditory distractor task, we found neural and sub-vocal (measured with electromyography) entrainment at the three-beat pattern frequency. In this most difficult condition, proficient imagers employed both beat- and pattern-related imagery strategies. However, this combination was insufficient to restore SMS consistency to that observed with a visual distractor task or no distractor task. Our results suggest that proficient imagers effectively utilized beat-related imagery in one modality when imagery in another modality was limited.
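Beat- and pattern-frequency entrainment of the kind reported here is often quantified as the spectral amplitude at a target frequency relative to neighboring bins. A generic sketch (sampling rate, beat rate, and the SNR convention are illustrative):

```python
import numpy as np

def entrainment_snr(signal, fs, target_hz, n_neighbors=5):
    """Spectral amplitude at a target (beat or pattern) frequency divided by
    the mean amplitude of neighboring bins: a common entrainment index."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    k = int(np.argmin(np.abs(freqs - target_hz)))
    neighbors = np.r_[spec[k - n_neighbors:k], spec[k + 1:k + 1 + n_neighbors]]
    return spec[k] / neighbors.mean()

fs, beat_hz = 250, 2.0
t = np.arange(0, 60, 1 / fs)
eeg = 0.3 * np.sin(2 * np.pi * beat_hz * t) + np.random.randn(t.size)
print(entrainment_snr(eeg, fs, beat_hz))      # >> 1 at the beat rate
print(entrainment_snr(eeg, fs, beat_hz / 3))  # ~1: no 3-beat pattern here
```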