ABSTRACT
Why are people with focal epilepsy not continuously having seizures? Previous neuronal signalling work has implicated gamma-aminobutyric acid balance as integral to seizure generation and termination, but is a high-level distributed brain network involved in suppressing seizures? Recent intracranial electrographic evidence has suggested that seizure-onset zones have increased inward connectivity that could be associated with interictal suppression of seizure activity. Accordingly, we hypothesize that seizure-onset zones are actively suppressed by the rest of the brain network during interictal states. Full testing of this hypothesis would require collaboration across multiple domains of neuroscience. We focused on partially testing this hypothesis at the electrographic network level in 81 individuals with drug-resistant focal epilepsy undergoing presurgical evaluation. We used intracranial electrographic resting-state and neurostimulation recordings to evaluate the network connectivity of seizure-onset, early-propagation and non-involved zones. We then used diffusion imaging to acquire estimates of white-matter connectivity and evaluate structure-function coupling effects on our connectivity findings. Finally, we generated a resting-state classification model to assist clinicians in detecting seizure-onset and propagation zones without the need for multiple ictal recordings. Our findings indicate that seizure-onset and early-propagation zones demonstrate markedly increased inward connectivity and decreased outward connectivity in both resting-state analyses (one-way ANOVA, P-value = 3.13 × 10⁻¹³) and neurostimulation analyses of evoked responses (one-way ANOVA, P-value = 2.5 × 10⁻³). When controlling for the distance between regions, the difference between inward and outward connectivity remained stable up to 80 mm between brain connections (two-way repeated measures ANOVA, group effect P-value of 2.6 × 10⁻¹²).
Structure-function coupling analyses revealed that seizure-onset zones exhibit abnormally enhanced coupling (hypercoupling) of surrounding regions compared to presumably healthy tissue (two-way repeated measures ANOVA, interaction effect P-value of 9.76 × 10⁻²¹). Using these observations, our support vector classification models achieved a maximum held-out testing set accuracy of 92.0 ± 2.2% to classify early propagation and seizure-onset zones. These results suggest that seizure-onset zones are actively segregated and suppressed by a widespread brain network. Furthermore, this electrographically observed functional suppression is disproportionate to any observed structural connectivity alterations of the seizure-onset zones. These findings have implications for the identification of seizure-onset zones using only brief electrographic recordings to reduce patient morbidity and augment the presurgical evaluation of drug-resistant epilepsy. Further testing of the interictal suppression hypothesis can provide insight into potential new resective, ablative and neuromodulation approaches to improve surgical success rates in those suffering from drug-resistant focal epilepsy.
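Several of the group contrasts above are one-way ANOVAs. As a minimal sketch of the statistic being reported (using synthetic, hypothetical connectivity values, not the study's data), the F-ratio can be computed directly:

```python
import random
import statistics

def one_way_anova_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

random.seed(1)
# Hypothetical inward-connectivity scores for seizure-onset, early-propagation,
# and non-involved zones (illustrative values only)
onset = [random.gauss(0.80, 0.10) for _ in range(30)]
early = [random.gauss(0.70, 0.10) for _ in range(30)]
other = [random.gauss(0.40, 0.10) for _ in range(30)]
f_stat = one_way_anova_f([onset, early, other])
```

With well-separated group means like these, the F-ratio is large, which is what the very small P-values quoted above reflect.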
Subjects
Drug Resistant Epilepsy , Epilepsies, Partial , Humans , Electroencephalography/methods , Seizures , Brain

ABSTRACT
BACKGROUND: Functional near-infrared spectroscopy (fNIRS) is a viable non-invasive technique for functional neuroimaging in the cochlear implant (CI) population; however, the effects of acoustic stimulus features on the fNIRS signal have not been thoroughly examined. This study examined the effect of stimulus level on fNIRS responses in adults with normal hearing (NH) or bilateral CIs. We hypothesized that fNIRS responses would correlate with both stimulus level and subjective loudness ratings, but that the correlation would be weaker with CIs due to the compression of acoustic input to electric output. METHODS: Thirteen adults with bilateral CIs and 16 with NH completed the study. Signal-correlated noise, a speech-shaped noise modulated by the temporal envelope of speech stimuli, was used to determine the effect of stimulus level in an unintelligible, speech-like stimulus across the range of soft to loud speech. Cortical activity in the left hemisphere was recorded. RESULTS: Results indicated a positive correlation of cortical activation in the left superior temporal gyrus with stimulus level in both NH and CI listeners, with an additional correlation between cortical activity and perceived loudness for the CI group. The results are consistent with the literature and our hypothesis. CONCLUSIONS: These results support the potential of fNIRS to examine auditory stimulus level effects at a group level and the importance of controlling for stimulus level and loudness in speech recognition studies. Further research is needed to better understand cortical activation patterns for speech recognition as a function of both stimulus presentation level and perceived loudness.
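The level-response relationship described here is a correlation between presentation level and cortical activation. A minimal sketch with hypothetical, illustrative values (neither the variable names nor the numbers come from the study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical group-averaged data: presentation level (dB SPL) vs. peak
# oxygenated-hemoglobin change in left superior temporal gyrus (arbitrary units)
level = [45, 50, 55, 60, 65, 70, 75]
hbo = [0.10, 0.15, 0.22, 0.25, 0.33, 0.38, 0.41]
r = pearson_r(level, hbo)
```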
Subjects
Auditory Cortex , Cochlear Implants , Speech Perception , Adult , Humans , Spectroscopy, Near-Infrared/methods , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Acoustic Stimulation

ABSTRACT
Visual cues are especially vital for hearing impaired individuals such as cochlear implant (CI) users to understand speech in noise. Functional Near Infrared Spectroscopy (fNIRS) is a light-based imaging technology that is ideally suited for measuring the brain activity of CI users due to its compatibility with both the ferromagnetic and electrical components of these implants. In a preliminary step toward better elucidating the behavioral and neural correlates of audiovisual (AV) speech integration in CI users, we designed a speech-in-noise task and measured the extent to which 24 normal hearing individuals could integrate the audio of spoken monosyllabic words with the corresponding visual signals of a female speaker. In our behavioral task, we found that audiovisual pairings provided average improvements of 103% and 197% over auditory-alone listening conditions in -6 and -9 dB signal-to-noise ratios consisting of multi-talker background noise. In an fNIRS task using similar stimuli, we measured activity during auditory-only listening, visual-only lipreading, and AV listening conditions. We identified cortical activity in all three conditions over regions of middle and superior temporal cortex typically associated with speech processing and audiovisual integration. In addition, three channels active during the lipreading condition showed correlations (uncorrected for multiple comparisons) with behavioral measures of audiovisual gain as well as with the McGurk effect. Further work focusing primarily on the regions of interest identified in this study could test how AV speech integration may differ for CI users who rely on this mechanism for daily communication.
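The reported multisensory gains are relative improvements of audiovisual over auditory-alone performance. A sketch of that arithmetic with hypothetical accuracies (the 0.61/0.30 pair is illustrative, chosen only to land near the reported ~103% figure):

```python
def av_gain_percent(av_correct, a_correct):
    """Relative improvement (%) of audiovisual over auditory-alone accuracy."""
    return 100 * (av_correct - a_correct) / a_correct

# Hypothetical word-recognition accuracies (proportion correct): an AV score of
# 0.61 against an A-only score of 0.30 corresponds to a ~103% gain
gain_minus6 = av_gain_percent(0.61, 0.30)
```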
Subjects
Cochlear Implants , Speech Perception , Female , Humans , Spectroscopy, Near-Infrared , Speech , Visual Perception

ABSTRACT
It has been challenging to elucidate the differences in brain structure that underlie behavioral features of autism. Prior studies have begun to identify patterns of changes in autism across multiple structural indices, including cortical thickness, local gyrification, and sulcal depth. However, common approaches to local gyrification indexing used in prior studies have been limited by low spatial resolution relative to functional brain topography. In this study, we analyze the aforementioned structural indices, utilizing a new method of local gyrification indexing that quantifies this index adaptively in relation to specific sulci/gyri, improving interpretation with respect to functional organization. Our sample included n = 115 autistic and n = 254 neurotypical participants aged 5-54, and we investigated structural patterns by group, age, and autism-related behaviors. Differing structural patterns by group emerged in many regions, with age moderating group differences particularly in frontal and limbic regions. There were also several regions, particularly in sensory areas, in which one or more of the structural indices of interest either positively or negatively covaried with autism-related behaviors. Given the advantages of this approach, future studies may benefit from its application in hypothesis-driven examinations of specific brain regions and/or longitudinal studies to assess brain development in autism.
Subjects
Autism Spectrum Disorder , Autistic Disorder , Adolescent , Adult , Autism Spectrum Disorder/diagnostic imaging , Autistic Disorder/diagnostic imaging , Brain , Cerebral Cortex , Child , Child, Preschool , Humans , Magnetic Resonance Imaging/methods , Middle Aged , Young Adult

ABSTRACT
BACKGROUND: Sensory over-responsivity (SOR) refers to excessively intense and/or prolonged behavioral responses to environmental stimuli typically perceived as non-aversive. SOR is prevalent in several neurodevelopmental disorders, including chronic tic disorders (CTDs) and obsessive-compulsive disorder (OCD). Few studies have examined the extent and clinical correlates of SOR across disorders, limiting insights into the phenomenon's transdiagnostic clinical and biological relevance. Such cross-disorder comparisons are of particular interest for CTDs and OCD given their frequent co-occurrence. OBJECTIVE: We sought to compare the magnitude of SOR between adults with CTD and adults with OCD and to identify the clinical factors most strongly associated with SOR across these disorders. METHODS: We enrolled 207 age- and sex-matched participants across four diagnostic categories: CTD without OCD (designated "CTD/OCD-"; n = 37), CTD with OCD ("CTD/OCD+"; n = 32), OCD without tic disorder ("OCD"; n = 69), and healthy controls (n = 69). Participants completed a self-report battery of rating scales assessing SOR (Sensory Gating Inventory, SGI), obsessive-compulsive symptoms (Dimensional Obsessive-Compulsive Scale, DOCS), inattention and hyperactivity (Adult ADHD Self-Report Screening Scale for DSM-5, ASRS-5), anxiety (Generalized Anxiety Disorder-7), and depression (Patient Health Questionnaire-9). CTD participants were also administered the Yale Global Tic Severity Scale (YGTSS). To examine between-group differences in SOR, we compared SGI score across all groups and between pairs of groups. To examine the relationship of SOR with other clinical factors, we performed multivariable linear regression. RESULTS: CTD/OCD-, CTD/OCD+, and OCD participants were 86.7%, 87.6%, and 89.5%, respectively, more likely to have higher SGI total scores than healthy controls. SGI total score did not differ between CTD/OCD-, CTD/OCD+, and OCD groups. 
In the regression model of log-transformed SGI total score, OCD diagnosis, DOCS score, and ASRS-5 score each contributed significantly to model goodness-of-fit, whereas CTD diagnosis and YGTSS total tic score did not. CONCLUSION: SOR is prevalent in adults with CTD and in adults with OCD but does not significantly differ in magnitude between these disorders. Across CTD, OCD, and healthy control adult populations, SOR is independently associated with both obsessive-compulsive and ADHD symptoms, suggesting a transdiagnostic relationship between these sensory and psychiatric manifestations. Future cross-disorder, longitudinal, and translational research is needed to clarify the role and prognostic import of SOR in CTDs, OCD, and other neurodevelopmental disorders.
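The multivariable linear regression described above can be sketched as ordinary least squares on a log-transformed outcome. A self-contained toy version with hypothetical DOCS/ASRS-5 predictor values (not the study's data or coefficients):

```python
import math

def ols_fit(X, y):
    """Least-squares coefficients via the normal equations (X includes a leading 1 per row)."""
    p = len(X[0])
    # Augmented normal-equation system [X'X | X'y]
    A = [[sum(row[i] * row[j] for row in X) for j in range(p)]
         + [sum(row[i] * yi for row, yi in zip(X, y))] for i in range(p)]
    # Gauss-Jordan elimination with partial pivoting
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(p):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][p] / A[i][i] for i in range(p)]

# Hypothetical predictors [intercept, DOCS score, ASRS-5 score] -> log(SGI total)
X = [[1, d, a] for d, a in [(10, 3), (20, 6), (15, 4), (30, 9), (25, 7), (5, 2)]]
y = [math.log(s) for s in (40, 70, 55, 100, 85, 35)]
beta = ols_fit(X, y)
```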
Subjects
Obsessive-Compulsive Disorder , Tic Disorders , Adult , Anxiety , Comorbidity , Diagnostic and Statistical Manual of Mental Disorders , Humans , Obsessive-Compulsive Disorder/diagnosis , Obsessive-Compulsive Disorder/epidemiology , Tic Disorders/diagnosis , Tic Disorders/epidemiology

ABSTRACT
Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction among objects is between those that are animate versus those that are inanimate. In addition, many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of male and female human EEG signals, we show enhanced encoding of audiovisual objects when compared with their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantages for animate objects were not evident under multisensory conditions. This was due to a greater neural enhancement of inanimate objects, which are more weakly encoded under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a Go/No-Go animate categorization task. Links between neural activity and behavioral measures were most evident at intervals of 100-200 ms and 350-500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively.
Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition.

SIGNIFICANCE STATEMENT Our world is filled with ever-changing sensory information that we are able to seamlessly transform into a coherent and meaningful perceptual experience. We accomplish this feat by combining different stimulus features into objects. However, despite the fact that these features span multiple senses, little is known about how the brain combines the various forms of sensory information into object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that nonliving (i.e., inanimate) objects, which are more difficult to process with one sense alone, benefited the most from engaging multiple senses.
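Representational similarity analysis, used above, compares conditions via representational dissimilarity matrices (RDMs). A minimal sketch using 1 − Pearson r as the dissimilarity, with hypothetical channel patterns:

```python
import math

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between response patterns."""
    def r(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
        return num / den
    return [[1 - r(p, q) for q in patterns] for p in patterns]

# Hypothetical EEG channel patterns for auditory, visual, and audiovisual
# presentations of one object exemplar (illustrative values)
patterns = [[0.2, 0.5, 0.1], [0.3, 0.4, 0.2], [0.9, 0.1, 0.6]]
d = rdm(patterns)
```

The diagonal is zero and the matrix is symmetric; in an actual RSA, RDMs from different conditions or time windows would then be compared with one another.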
Subjects
Auditory Perception/physiology , Brain/physiology , Recognition, Psychology/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Female , Humans , Male , Photic Stimulation , Young Adult

ABSTRACT
Interactions between individuals and the environment occur within the peri-personal space (PPS). The encoding of this space plastically adapts to bodily constraints and stimuli features. However, these remapping effects have not been demonstrated on an adaptive time-scale, trial-to-trial. Here, we test this idea first via a visuo-tactile reaction time (RT) paradigm in augmented reality where participants are asked to respond as fast as possible to touch, as visual objects approach them. Results demonstrate that RTs to touch are facilitated as a function of visual proximity, and the sigmoidal function describing this facilitation shifts closer to the body if the immediately preceding trial had indexed a smaller visuo-tactile disparity. Next, we derive the electroencephalographic correlates of PPS and demonstrate that this multisensory measure is equally shaped by recent sensory history. Finally, we demonstrate that a validated neural network model of PPS is able to account for the present results via a simple Hebbian plasticity rule. The present findings suggest that PPS encoding remaps on a very rapid time-scale and, more generally, that it is sensitive to sensory history, a key feature for any process contextualizing subsequent incoming sensory information (e.g., a Bayesian prior).
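The facilitation function described above is a sigmoid of RT versus distance, whose central point serves as a proxy for the PPS boundary; trial-to-trial shifts of that central point are the remapping effect. A toy fit by grid search, with hypothetical RTs and the slope and asymptotes fixed for brevity:

```python
import math

def sigmoid(x, x0, k, lo, hi):
    """Logistic function rising from lo to hi with midpoint x0 and slope k."""
    return lo + (hi - lo) / (1 + math.exp(-k * (x - x0)))

# Hypothetical mean tactile RTs (ms) as a visual stimulus looms at each distance (cm)
dist = [10, 20, 30, 40, 50, 60]
rt = [310, 315, 330, 355, 368, 370]

# Grid-search the midpoint x0; a shift of x0 toward the body after a
# small-disparity trial is the remapping effect described above
best_x0 = min(
    range(10, 61),
    key=lambda x0: sum((sigmoid(d, x0, 0.2, 310, 370) - r) ** 2 for d, r in zip(dist, rt)),
)
```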
Subjects
Brain/physiology , Models, Neurological , Neural Networks, Computer , Personal Space , Adolescent , Adult , Electroencephalography , Female , Humans , Male , Reaction Time , Touch Perception/physiology , Visual Perception/physiology , Young Adult

ABSTRACT
During our everyday lives, we are confronted with a vast amount of information from several sensory modalities. This multisensory information needs to be appropriately integrated for us to effectively engage with and learn from our world. Research carried out over the last half century has provided new insights into the way such multisensory processing improves human performance and perception; the neurophysiological foundations of multisensory function; the time course for its development; how multisensory abilities differ in clinical populations; and, most recently, the links between multisensory processing and cognitive abilities. This review summarizes the extant literature on multisensory function in typical and atypical circumstances, discusses the implications of the work carried out to date for theory and research, and points toward next steps for advancing the field.
Subjects
Aging/physiology , Autism Spectrum Disorder/physiopathology , Cognition/physiology , Communication , Dyslexia/physiopathology , Perception/physiology , Schizophrenia/physiopathology , Sensation Disorders/physiopathology , Humans

ABSTRACT
Both the global neuronal workspace (GNW) and integrated information theory (IIT) posit that highly complex and interconnected networks engender perceptual awareness. GNW specifies that activity recruiting frontoparietal networks will elicit a subjective experience, whereas IIT is more concerned with the functional architecture of networks than with activity within them. Here, we argue that according to IIT mathematics, circuits converging on integrative neurons, as opposed to convergent yet non-integrative neurons, should support a greater degree of consciousness. We test this hypothesis by analyzing a dataset of neuronal responses collected simultaneously from primary somatosensory cortex (S1) and ventral premotor cortex (vPM) in nonhuman primates presented with auditory, tactile, and audio-tactile stimuli as they are progressively anesthetized with propofol. We first describe the multisensory (audio-tactile) characteristics of S1 and vPM neurons (mean and dispersion tendencies, as well as noise-correlations), and functionally label these neurons as convergent or integrative according to their spiking responses. Then, we characterize how these different pools of neurons behave as a function of consciousness. At odds with the IIT mathematics, results suggest that convergent neurons more readily exhibit properties of consciousness (neural complexity and noise correlation) and are more impacted during the loss of consciousness than integrative neurons. Last, we provide support for the GNW by showing that neural ignition (i.e., same-trial coactivation of S1 and vPM) was more frequent in conscious than unconscious states. Overall, we contrast GNW and IIT within the same single-unit activity dataset, and support the GNW.

SIGNIFICANCE STATEMENT A number of prominent theories of consciousness exist, and a number of these share strong commonalities, such as the central role they ascribe to integration.
Despite the important and far-reaching consequences that a better understanding of consciousness promises to bring, for instance in diagnosing disorders of consciousness (e.g., coma, vegetative state, locked-in syndrome), these theories are seldom tested via invasive techniques (with high signal-to-noise ratios) and have never been directly confronted within a single dataset. Here, we first derive concrete and testable predictions from the global neuronal workspace and integrated information theory of consciousness. Then, we put these to the test by functionally labeling specific neurons as either convergent or integrative nodes, and examining the response of these neurons during anesthetic-induced loss of consciousness.
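Neural ignition as operationalized above (same-trial coactivation of S1 and vPM) reduces to counting trials on which both areas are simultaneously active. A sketch with hypothetical spike counts and an arbitrary activity threshold:

```python
# Hypothetical single-trial spike counts for one S1 and one vPM neuron
s1_counts = [5, 0, 7, 1, 6, 0, 8, 2]
vpm_counts = [4, 1, 6, 0, 5, 0, 7, 1]
ACTIVE = 3  # spikes-per-trial threshold for calling a neuron "active" (assumption)

# Ignition rate: fraction of trials on which both areas are coactive
ignition_rate = sum(
    a > ACTIVE and b > ACTIVE for a, b in zip(s1_counts, vpm_counts)
) / len(s1_counts)
```

Comparing this rate between conscious and anesthetized epochs is the contrast reported in support of the GNW.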
Subjects
Consciousness/physiology , Models, Neurological , Models, Theoretical , Neural Pathways/physiology , Neurons/physiology , Animals , Macaca mulatta , Male

ABSTRACT
Multisensory processes include the capacity to combine information from the different senses, often improving stimulus representations and behavior. The extent to which multisensory processes are an innate capacity or instead require experience with environmental stimuli remains debated. We addressed this knowledge gap by studying multisensory processes in prematurely born and full-term infants. We recorded 128-channel event-related potentials (ERPs) from a cohort of 55 full-term and 61 preterm neonates (at an equivalent gestational age) in response to auditory, somatosensory, and combined auditory-somatosensory multisensory stimuli. Data were analyzed within an electrical neuroimaging framework, involving unsupervised topographic clustering of the ERP data. Multisensory processing in full-term infants was characterized by a simple linear summation of responses to auditory and somatosensory stimuli alone, which furthermore shared common ERP topographic features. We refer to the ERP topography observed in full-term infants as "typical infantile processing" (TIP). In stark contrast, preterm infants exhibited non-linear responses and topographies less-often characterized by TIP; there were distinct patterns of ERP topographies to multisensory and summed unisensory conditions. We further observed that the better TIP characterized an infant's ERPs, independently of prematurity, the more typical was the infant's score on the Infant/Toddler Sensory Profile (ITSP) at 12 months of age and the less likely the child was to show internalizing tendencies at 24 months of age. Collectively, these results highlight striking differences in the brain's responses to multisensory stimuli in children born prematurely; differences that relate to later sensory and internalizing functions.
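The linear-summation test described above compares the multisensory ERP with the sum of the two unisensory ERPs. A sketch with hypothetical amplitudes (values chosen only to illustrate additive vs. sub-additive patterns, not taken from the study):

```python
# Hypothetical mean ERP amplitudes (µV) at three matched latencies
aud = [0.8, 1.2, 0.5]          # auditory alone
som = [0.6, 0.9, 0.4]          # somatosensory alone
multi_term = [1.4, 2.1, 0.9]   # full-term multisensory: ~linear sum (illustrative)
multi_pre = [1.0, 1.3, 0.6]    # preterm multisensory: sub-additive (illustrative)

def nonlinearity(multi, a, s):
    """Multisensory response minus the summed unisensory responses."""
    return [m - (x + y) for m, x, y in zip(multi, a, s)]
```

Residuals near zero indicate linear summation (the full-term pattern), while systematic deviations indicate non-linear multisensory interactions (the preterm pattern).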
Subjects
Evoked Potentials , Infant, Premature , Sensation , Child , Child, Preschool , Female , Humans , Infant, Newborn , Male

ABSTRACT
The human brain retains a striking degree of plasticity into adulthood. Recent studies have demonstrated that a short period of altered visual experience (via monocular deprivation) can change the dynamics of binocular rivalry in favor of the deprived eye, a compensatory action thought to be mediated by an upregulation of cortical gain control mechanisms. Here, we sought to better understand the impact of monocular deprivation on multisensory abilities, specifically examining audiovisual temporal perception. Using an audiovisual simultaneity judgment task, we discovered that 90 minutes of monocular deprivation produced opposing effects on the temporal binding window depending on the eye used in the task. Thus, in those who performed the task with their deprived eye there was a narrowing of the temporal binding window, whereas in those performing the task with their nondeprived eye there was a widening of the temporal binding window. The effect was short lived, being observed only in the first 10 minutes of postdeprivation testing. These findings indicate that changes in visual experience in the adult can rapidly impact multisensory perceptual processes, a finding that has important clinical implications for those patients with adult-onset visual deprivation and for therapies founded on monocular deprivation.
Subjects
Auditory Perception/physiology , Sensory Deprivation/physiology , Time Perception/physiology , Vision, Monocular/physiology , Visual Perception/physiology , Adolescent , Female , Humans , Male , Young Adult

ABSTRACT
Most evidence on the neural and perceptual correlates of sensory processing derives from studies that have focused on only a single sensory modality and averaged the data from groups of participants. Although valuable, such studies ignore the substantial interindividual and intraindividual differences that are undoubtedly at play. Such variability plays an integral role in both the behavioral/perceptual realms and in the neural correlates of these processes, yet it has received substantially less attention than group-averaged data. Recently, it has been shown that the presentation of stimuli from two or more sensory modalities (i.e., multisensory stimulation) not only results in the well-established performance gains but also gives rise to reductions in behavioral and neural response variability. To better understand the relationship between neural and behavioral response variability under multisensory conditions, this study investigated both behavior and brain activity in a task requiring participants to discriminate moving versus static stimuli presented in either a unisensory or multisensory context. EEG data were analyzed with respect to intraindividual and interindividual differences in RTs. The results showed that trial-by-trial variability of RTs was significantly reduced under audiovisual presentation conditions as compared with visual-only presentations across all participants. Intraindividual variability of RTs was linked to changes in correlated activity between clusters within an occipital to frontal network. In addition, interindividual variability of RTs was linked to differential recruitment of medial frontal cortices. The present findings highlight differences in the brain networks that support behavioral benefits during unisensory versus multisensory motion detection and provide an important view into the functional dynamics within neuronal networks underpinning intraindividual performance differences.
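Trial-by-trial RT variability of the kind analyzed here is often summarized as a coefficient of variation, which normalizes the spread of RTs by their mean. A minimal sketch with hypothetical RTs:

```python
import statistics

def cv(values):
    """Coefficient of variation: trial-to-trial variability relative to the mean."""
    return statistics.pstdev(values) / statistics.mean(values)

# Hypothetical single-participant RTs (ms) for visual-only vs. audiovisual trials
visual_rts = [420, 510, 380, 560, 450]
audiovisual_rts = [390, 410, 380, 420, 400]
```

The multisensory condition shows both faster and less variable responses, mirroring the variability reduction reported above.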
Subjects
Auditory Perception/physiology , Brain/physiology , Discrimination, Psychology/physiology , Evoked Potentials/physiology , Motion Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Electroencephalography , Female , Humans , Individuality , Male , Photic Stimulation , Reaction Time/physiology , Young Adult

ABSTRACT
The actionable space surrounding the body, referred to as peripersonal space (PPS), has been the subject of significant interest of late within the broader framework of embodied cognition. Neurophysiological and neuroimaging studies have shown the representation of PPS to be built from visuotactile and audiotactile neurons within a frontoparietal network whose activity is modulated by the presence of stimuli in proximity to the body. In contrast to single-unit and fMRI studies, an area of inquiry that has received little attention is the EEG characterization associated with PPS processing. Furthermore, although PPS is encoded by multisensory neurons, to date there has been no EEG study systematically examining neural responses to unisensory and multisensory stimuli, as these are presented outside, near, and within the boundary of PPS. Similarly, it remains poorly understood whether multisensory integration is generally more likely at certain spatial locations (e.g., near the body) or whether the cross-modal tactile facilitation that occurs within PPS is simply due to a reduction in the distance between sensory stimuli when close to the body and in line with the spatial principle of multisensory integration. In the current study, to examine the neural dynamics of multisensory processing within and beyond the PPS boundary, we present auditory, visual, and audiovisual stimuli at various distances relative to participants' reaching limit (an approximation of PPS) while recording continuous high-density EEG. We ask whether multisensory (vs. unisensory) processing varies as a function of stimulus-observer distance. Results demonstrate a significant increase of global field power (i.e., overall strength of response across the entire electrode montage) for stimuli presented at the PPS boundary, an increase that is largest under multisensory (i.e., audiovisual) conditions.
Source localization of the major contributors to this global field power difference suggests neural generators in the intraparietal sulcus and insular cortex, hubs for visuotactile and audiotactile PPS processing. Furthermore, when neural dynamics are examined in more detail, changes in the reliability of evoked potentials in centroparietal electrodes are predictive on a subject-by-subject basis of the later changes in estimated current strength at the intraparietal sulcus linked to stimulus proximity to the PPS boundary. Together, these results provide a previously unrealized view into the neural dynamics and temporal code associated with the encoding of nontactile multisensory stimuli around the PPS boundary.
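Global field power, the dependent measure above, is simply the standard deviation of voltage across the electrode montage at each time point. A sketch with hypothetical eight-electrode samples (values are illustrative only):

```python
import statistics

def global_field_power(voltages):
    """GFP at one time point: standard deviation across all electrodes."""
    return statistics.pstdev(voltages)

# Hypothetical 8-electrode samples (µV) at one latency for a far stimulus
# vs. a stimulus at the PPS boundary
far = [0.2, -0.1, 0.1, -0.2, 0.15, -0.15, 0.05, -0.05]
boundary = [1.1, -0.9, 0.8, -1.2, 1.0, -1.0, 0.7, -0.5]
```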
Subjects
Auditory Perception/physiology , Cerebral Cortex/physiology , Evoked Potentials/physiology , Personal Space , Visual Perception/physiology , Adult , Female , Humans , Male , Parietal Lobe/physiology , Young Adult

ABSTRACT
Multisensory integration of visual mouth movements with auditory speech is known to offer substantial perceptual benefits, particularly under challenging (i.e., noisy) acoustic conditions. Previous work characterizing this process has found that ERPs to auditory speech are of shorter latency and smaller magnitude in the presence of visual speech. We sought to determine the dependency of these effects on the temporal relationship between the auditory and visual speech streams using EEG. We found that reductions in ERP latency and suppression of ERP amplitude are maximal when the visual signal precedes the auditory signal by a small interval and that increasing amounts of asynchrony reduce these effects in a continuous manner. Time-frequency analysis revealed that these effects are found primarily in the theta (4-8 Hz) and alpha (8-12 Hz) bands, with a central topography consistent with auditory generators. Theta effects also persisted in the lower portion of the band (3.5-5 Hz), and this late activity was more frontally distributed. Importantly, the magnitude of these late theta oscillations not only differed with the temporal characteristics of the stimuli but also served to predict participants' task performance. Our analysis thus reveals that suppression of single-trial brain responses by visual speech depends strongly on the temporal concordance of the auditory and visual inputs. It further illustrates that processes in the lower theta band, which we suggest as an index of incongruity processing, might serve to reflect the neural correlates of individual differences in multisensory temporal perception.
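Band-limited power of the kind analyzed here can be illustrated with a plain discrete Fourier transform. A toy sketch on a synthetic 6 Hz oscillation (a real analysis would use windowed, single-trial time-frequency decomposition rather than a whole-epoch DFT):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Summed DFT power of the frequency bins falling inside [f_lo, f_hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n ** 2
    return power

fs = 128  # sampling rate (Hz)
# One second of a synthetic 6 Hz "theta" oscillation
sig = [math.sin(2 * math.pi * 6 * t / fs) for t in range(fs)]
theta = band_power(sig, fs, 4, 8)
alpha = band_power(sig, fs, 8, 12)
```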
Subjects
Brain/physiology , Motion Perception/physiology , Pattern Recognition, Physiological/physiology , Speech Perception/physiology , Alpha Rhythm , Evoked Potentials , Female , Humans , Male , Psychophysics , Theta Rhythm , Time Factors , Young Adult

ABSTRACT
The neural underpinnings of perceptual awareness have been extensively studied using unisensory (e.g., visual alone) stimuli. However, perception is generally multisensory, and it is unclear whether the neural architecture uncovered in these studies directly translates to the multisensory domain. Here, we use EEG to examine brain responses associated with the processing of visual, auditory, and audiovisual stimuli presented near threshold levels of detectability, with the aim of deciphering similarities and differences in the neural signals indexing the transition into perceptual awareness across vision, audition, and combined visual-auditory (multisensory) processing. More specifically, we examine (1) the presence of late evoked potentials (>∼300 msec), (2) the across-trial reproducibility, and (3) the evoked complexity associated with perceived versus nonperceived stimuli. Results reveal that, although perceived stimuli are associated with the presence of late evoked potentials across each of the examined sensory modalities, between-trial variability and EEG complexity differed for unisensory versus multisensory conditions. Whereas across-trial variability and complexity differed for perceived versus nonperceived stimuli in the visual and auditory conditions, this was not the case for the multisensory condition. Taken together, these results suggest that there are fundamental differences in the neural correlates of perceptual awareness for unisensory versus multisensory stimuli. Specifically, the work argues that the presence of late evoked potentials, as opposed to neural reproducibility or complexity, most closely tracks perceptual awareness regardless of the nature of the sensory stimulus. In addition, the current findings suggest a greater similarity between the neural correlates of perceptual awareness of unisensory (visual and auditory) stimuli when compared with multisensory stimuli.
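EEG complexity measures in this literature are commonly Lempel-Ziv-style counts of distinct patterns in a binarized signal. A crude, illustrative sketch (not the specific complexity measure used in the study):

```python
def lz_phrases(bits):
    """Crude Lempel-Ziv complexity: number of phrases in a greedy LZ78-style parse."""
    phrases, current = set(), ""
    for ch in bits:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

# A regular (binarized) signal parses into few phrases; an irregular one into many
regular = "0" * 16
irregular = "0110100110010110"  # Thue-Morse-like sequence, illustrative
```

Higher phrase counts indicate a richer, less compressible signal; complexity indices of this family are frequently compared between perceived and nonperceived trials.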
Subjects
Auditory Perception/physiology , Awareness/physiology , Brain/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Auditory Evoked Potentials , Visual Evoked Potentials , Female , Humans , Male , Photic Stimulation , Psychophysics , Reaction Time , Young Adult
ABSTRACT
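One of the measures examined above, across-trial reproducibility, can be illustrated with a toy computation. The sketch below is a minimal, hypothetical illustration rather than the study's actual pipeline: it assumes single-channel EEG epochs and quantifies reproducibility as the mean pairwise Pearson correlation between single-trial waveforms, using entirely synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-channel EEG epochs: n_trials x n_samples.
# Each trial is a shared evoked waveform plus independent noise.
n_trials, n_samples = 20, 200
template = np.sin(np.linspace(0, 2 * np.pi, n_samples))
trials = template + 0.5 * rng.standard_normal((n_trials, n_samples))

def across_trial_reproducibility(trials):
    """Mean pairwise Pearson correlation between single-trial waveforms."""
    corr = np.corrcoef(trials)                      # n_trials x n_trials matrix
    upper = corr[np.triu_indices_from(corr, k=1)]   # unique trial pairs only
    return float(upper.mean())

r = across_trial_reproducibility(trials)
```

With a strong shared evoked component, `r` approaches 1; with pure noise it hovers near 0, which is the intuition behind contrasting perceived versus nonperceived trials.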
Binding across sensory modalities yields substantial perceptual benefits, including enhanced speech intelligibility. The coincidence of sensory inputs across time is a fundamental cue for this integration process. Recent work has suggested that individuals with diagnoses of schizophrenia (SZ) and autism spectrum disorder (ASD) will characterize auditory and visual events as synchronous over larger temporal disparities than their neurotypical counterparts. Namely, these clinical populations possess an enlarged temporal binding window (TBW). Although patients with SZ and ASD share aspects of their symptomatology, phenotypic similarities may result from distinct etiologies. To examine similarities and variances in audiovisual temporal function in these two populations, individuals diagnosed with ASD (n = 46; controls n = 40) and SZ (n = 16; controls n = 16) completed an audiovisual simultaneity judgment task. In addition to standard psychometric analyses, synchrony judgments were assessed using Bayesian causal inference modeling. This approach permits distinguishing between distinct causes of an enlarged TBW: an a priori bias to bind sensory information and poor fidelity in the sensory representation. Findings indicate that both ASD and SZ populations show deficits in multisensory temporal acuity. Importantly, results suggest that while the wider TBWs in ASD most prominently result from atypical priors, the wider TBWs in SZ result from a trend toward changes in priors together with weaknesses in the sensory representations. Results are discussed in light of current ASD and SZ theories; they highlight that different perceptual training paradigms focused on improving multisensory integration may be most effective in these two clinical populations, and they emphasize that similar phenotypes may emanate from distinct mechanistic causes.
Subjects
Autism Spectrum Disorder/physiopathology , Visual Pattern Recognition/physiology , Perceptual Disorders/physiopathology , Schizophrenia/physiopathology , Speech Perception/physiology , Time Perception/physiology , Adolescent , Adult , Autism Spectrum Disorder/complications , Child , Female , Humans , Male , Middle Aged , Perceptual Disorders/etiology , Phenotype , Schizophrenia/complications
ABSTRACT
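Both groups above completed an audiovisual simultaneity judgment task, from which a temporal binding window (TBW) can be estimated. The sketch below is a minimal, hypothetical illustration, not the Bayesian causal inference model used in the study: it reads a TBW off synthetic judgment data as the width of the SOA range over which the proportion of "synchronous" responses meets a 50% criterion, with crossings found by linear interpolation.

```python
# Hypothetical simultaneity-judgment data: proportion of "synchronous"
# responses at each stimulus onset asynchrony (SOA, ms; negative = audio leads).
soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]
p_sync = [0.05, 0.15, 0.55, 0.90, 0.95, 0.85, 0.45, 0.20, 0.05]

def temporal_binding_window(soas, p_sync, criterion=0.5):
    """Width (ms) of the SOA range where p(synchronous) >= criterion,
    with the two crossings located by linear interpolation."""
    left = right = None
    for i in range(len(soas) - 1):
        lo, hi = p_sync[i], p_sync[i + 1]
        if lo < criterion <= hi:                       # rising crossing
            frac = (criterion - lo) / (hi - lo)
            left = soas[i] + frac * (soas[i + 1] - soas[i])
        if lo >= criterion > hi:                       # falling crossing
            frac = (lo - criterion) / (lo - hi)
            right = soas[i] + frac * (soas[i + 1] - soas[i])
    return right - left

tbw = temporal_binding_window(soas, p_sync)
```

An enlarged TBW in this toy framing simply means the synchrony curve stays above criterion over a wider span of SOAs; it cannot, by itself, separate prior bias from sensory noise, which is exactly what the Bayesian modeling is for.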
The temporal relationship between auditory and visual cues is a fundamental feature in the determination of whether these signals will be integrated. The temporal binding window (TBW), the window of perceived simultaneity, is a construct that describes the epoch of time during which asynchronous auditory and visual stimuli are likely to be perceptually bound. Recently, a number of studies have demonstrated the capacity for perceptual training to enhance temporal acuity for audiovisual stimuli (i.e., narrow the TBW). These studies, however, have only examined multisensory perceptual learning that develops in response to feedback provided when making judgments on simple, low-level audiovisual stimuli (i.e., flashes and beeps). Here we sought to determine whether perceptual training is capable of altering temporal acuity for audiovisual speech. Furthermore, we also explored whether perceptual training with simple or complex audiovisual stimuli generalized across levels of stimulus complexity. Using a simultaneity judgment (SJ) task, we measured individuals' temporal acuity (as estimated by the TBW) prior to, immediately following, and one week after four consecutive days of perceptual training. We report that temporal acuity for audiovisual speech stimuli is enhanced following perceptual training using speech stimuli. Additionally, we find that changes in temporal acuity following perceptual training do not generalize across the levels of stimulus complexity in this study. Overall, the results suggest that perceptual training is capable of enhancing temporal acuity for audiovisual speech in adults, and that the dynamics of the changes in temporal acuity following perceptual training differ between simple audiovisual stimuli and more complex audiovisual speech stimuli.
Subjects
Psychological Generalization/physiology , Visual Pattern Recognition/physiology , Psychological Practice , Speech Perception/physiology , Time Perception/physiology , Adult , Female , Humans , Male , Young Adult
ABSTRACT
The integration of information across sensory modalities is dependent on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known regarding how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with the previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction time to asynchronously presented audiovisual targets suggested a temporal window for fast detection: a range of stimulus asynchronies that was also larger in near as compared to far space. However, the range of reaction times over which multisensory response enhancement was observed was limited to a restricted range of relatively small (i.e., 150 ms) asynchronies, and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at an individual subject level. Thus, the findings support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and specify that this relationship is specific to temporally synchronous audiovisual stimulus presentations.
Subjects
Auditory Perception/physiology , Distance Perception/physiology , Judgment/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Analysis of Variance , Data Correlation , Female , Humans , Male , Photic Stimulation , Psychophysics , Reaction Time , Time Factors , Young Adult
ABSTRACT
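The multisensory response enhancement discussed above is commonly summarized as a percent speedup of the multisensory reaction time relative to the faster of the two unisensory reaction times. The snippet below is a hedged sketch with made-up reaction times, not the study's analysis.

```python
# Hypothetical mean reaction times (ms) from a redundant-target task.
rt_auditory = 320.0
rt_visual = 300.0
rt_audiovisual = 255.0

def multisensory_gain(rt_a, rt_v, rt_av):
    """Percent speedup of the multisensory RT relative to the
    faster of the two unisensory RTs."""
    fastest_unisensory = min(rt_a, rt_v)
    return 100.0 * (fastest_unisensory - rt_av) / fastest_unisensory

gain = multisensory_gain(rt_auditory, rt_visual, rt_audiovisual)
```

A positive `gain` indicates facilitation beyond the best unisensory condition; comparing such gains in near versus far space is the kind of contrast the abstract describes.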
Audible alarms are a ubiquitous feature of all high-paced, high-risk domains, such as aviation and nuclear power, where operators control complex systems. In such settings, a missed alarm can have disastrous consequences. It is conventional wisdom that for alarms to be heard, "louder is better," so that alarm levels in operational environments routinely exceed ambient noise levels. In a robust experimental paradigm conducted in an anechoic environment, designed to study human responses to audible alerting stimuli in a cognitively demanding setting akin to high-tempo, high-risk domains, clinician participants responded to patient crises while concurrently completing a distracting auditory speech intelligibility and visual vigilance task, as the level of the alarms was varied as a signal-to-noise ratio above and below hospital background noise. There was little difference in performance on the primary task when the alarm sound was -11 dB below background noise as compared with +4 dB above background noise (a typical real-world situation). Concurrent presentation of the secondary auditory speech intelligibility task significantly degraded performance. Operator performance can be maintained with alarms that are softer than background noise. These findings have widespread implications for the design and implementation of alarms across all high-consequence settings.
Subjects
Acoustic Stimulation/instrumentation , Acoustics , Clinical Alarms , Loudness Perception , Noise , Physicians/psychology , Adult , Auditory Threshold , Female , Humans , Male , Middle Aged , Perceptual Masking , Computer-Assisted Signal Processing , Sound Spectrography , Speech Acoustics , Speech Intelligibility , Task Performance and Analysis , Visual Perception , Voice Quality
ABSTRACT
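The alarm levels above are expressed as a signal-to-noise ratio in decibels relative to background noise. As a quick worked example (the amplitudes are hypothetical, and the 20·log10 form assumes amplitude rather than power quantities), the RMS-based dB computation looks like this:

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in decibels from RMS amplitudes."""
    return 20.0 * math.log10(signal_rms / noise_rms)

# An alarm whose RMS amplitude is half that of the background noise
# sits roughly 6 dB below the noise floor.
level = snr_db(0.5, 1.0)
```

On this scale, the study's -11 dB condition corresponds to an alarm RMS amplitude of roughly 0.28 times that of the background noise.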
Temporal structure is ubiquitous in sensory signals, and the brain has been shown to robustly represent information about temporal structure in the phase of low-frequency neural oscillations. In a related construct, the integration of information across the different senses has been proposed to be at least partly due to the phase resetting of these low-frequency oscillations. As a consequence, oscillations represent a potential contributor to the encoding of complex multisensory signals with informative temporal structures. Here we investigated these interactions using electroencephalography (EEG). We entrained low-frequency (3 Hz) delta oscillations using a repetitive auditory stimulus: broadband amplitude-modulated noise. Following entrainment, we presented auditory and audiovisual stimuli at variable delays. We examined whether the power of oscillations at the entrained frequency was dependent on the delay (and thus, potentially, phase) at which subsequent stimulation was delivered, and whether this relationship was different for subsequent multisensory (i.e., audiovisual) stimuli when compared with auditory stimuli alone. Our findings demonstrate that, when the subsequent stimuli are solely auditory, the power of oscillations at the entrained frequency is rhythmically modulated by when the stimulus was delivered. For audiovisual stimuli, however, no such dependency is present, yielding consistent power modulations. These effects indicate that reciprocal oscillatory mechanisms may be involved in the continuous encoding of complex temporally structured multisensory inputs such as speech.
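The power of oscillations at the entrained 3 Hz frequency can be estimated from the discrete Fourier transform of an epoch. The sketch below uses a synthetic signal and an assumed 250 Hz sampling rate; it illustrates the general single-bin power technique, not the study's actual analysis.

```python
import numpy as np

fs = 250.0                        # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)   # 2-second epoch -> 0.5 Hz frequency resolution
# Synthetic "EEG": a strong 3 Hz delta component plus a weaker 10 Hz alpha component
eeg = 2.0 * np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.sin(2 * np.pi * 10.0 * t)

def power_at(freq, signal, fs):
    """Normalized power at the DFT bin closest to `freq` (Hz)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - freq)))
    return (np.abs(spectrum[idx]) / len(signal)) ** 2

delta_power = power_at(3.0, eeg, fs)
alpha_power = power_at(10.0, eeg, fs)
```

Comparing `delta_power` across delays (and across auditory versus audiovisual conditions) is the kind of contrast the abstract describes; in practice, time-frequency methods such as wavelet decomposition are often used instead of a single whole-epoch DFT bin.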