Results 1 - 20 of 877
1.
Annu Rev Neurosci ; 45: 471-489, 2022 07 08.
Article in English | MEDLINE | ID: mdl-35803589

ABSTRACT

Unimodal sensory loss leads to structural and functional changes in both deprived and nondeprived brain circuits, a process broadly known as cross-modal plasticity. The available evidence indicates that cross-modal changes underlie the enhanced performance of the spared sensory modalities in deprived subjects. Sensory experience is a fundamental driver of cross-modal plasticity, yet evidence from early-visually deprived models supports an additional role for experience-independent factors, which are expected to act early in development and to constrain neuronal plasticity at later stages. Here we review the cross-modal adaptations elicited by congenital or induced visual deprivation occurring before the onset of vision. Most of these studies address cross-modal adaptations at the structural and functional levels; we also appraise recent data on behavioral performance in early-visually deprived models. Further research is needed to explore how circuit reorganization affects the function of the reorganized circuits and what brings about the enhanced behavioral performance.


Subjects
Neuronal Plasticity , Sensory Deprivation , Brain , Humans , Neuronal Plasticity/physiology , Sensory Deprivation/physiology , Vision, Ocular
2.
Proc Natl Acad Sci U S A ; 120(49): e2310156120, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-38015842

ABSTRACT

Motion perception is a fundamental sensory task that plays a critical evolutionary role. In vision, motion processing is classically described using a motion energy model with spatiotemporally nonseparable filters suited for capturing the smooth continuous changes in spatial position over time afforded by moving objects. However, it is still not clear whether the filters underlying auditory motion discrimination are also continuous motion detectors or infer motion from comparing discrete sound locations over time (spatiotemporally separable). We used a psychophysical reverse correlation paradigm, where participants discriminated the direction of a motion signal in the presence of spatiotemporal noise, to determine whether the filters underlying auditory motion discrimination were spatiotemporally separable or nonseparable. We then examined whether these auditory motion filters were altered as a result of early blindness. We found that both sighted and early blind individuals have separable filters. However, early blind individuals show increased sensitivity to auditory motion, with reduced susceptibility to noise and filters that were more accurate in detecting motion onsets/offsets. Model simulations suggest that this reliance on separable filters is optimal given the limited spatial resolution of auditory input.
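The reverse-correlation logic described here can be illustrated with a minimal simulation: a model observer judges direction by applying a spatiotemporal filter to noise stimuli, and the filter is recovered by averaging stimuli conditioned on the observer's choices. The sketch below is illustrative only; the filter shape, noise level, and trial count are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_space, n_time = 5000, 16, 20

# Spatiotemporal noise stimuli: one (space x time) field per trial
stimuli = rng.normal(size=(n_trials, n_space, n_time))

# Assumed separable observer filter: spatial profile x flat temporal profile
spatial = np.exp(-0.5 * ((np.arange(n_space) - 8) / 2.0) ** 2)
true_filter = np.outer(spatial, np.ones(n_time))

# Simulated binary direction judgments: filter response plus decision noise
drive = (stimuli * true_filter).sum(axis=(1, 2))
choices = drive + rng.normal(scale=5.0, size=n_trials) > 0

# Reverse correlation: difference of choice-conditioned stimulus averages
kernel = stimuli[choices].mean(axis=0) - stimuli[~choices].mean(axis=0)

# A separable kernel is (approximately) rank 1 in its space-time SVD,
# which is the kind of diagnostic the separability question turns on
s = np.linalg.svd(kernel, compute_uv=False)
print("variance in first singular component:", s[0] ** 2 / (s ** 2).sum())
```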


Subjects
Motion Perception , Visually Impaired Persons , Humans , Vision, Ocular , Blindness , Auditory Perception , Acoustic Stimulation
3.
J Neurosci ; 44(7)2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38129133

ABSTRACT

Neuroimaging studies suggest cross-sensory visual influences in human auditory cortices (ACs). Whether these influences reflect active visual processing in human ACs, which drives neuronal firing and concurrent broadband high-frequency activity (BHFA; >70 Hz), or whether they merely modulate sound processing is still debatable. Here, we presented auditory, visual, and audiovisual stimuli to 16 participants (7 women, 9 men) with stereo-EEG depth electrodes implanted near ACs for presurgical monitoring. Anatomically normalized group analyses were facilitated by inverse modeling of intracranial source currents. Analyses of intracranial event-related potentials (iERPs) suggested cross-sensory responses to visual stimuli in ACs, which lagged the earliest auditory responses by several tens of milliseconds. Visual stimuli also modulated the phase of intrinsic low-frequency oscillations and triggered 15-30 Hz event-related desynchronization in ACs. However, BHFA, a putative correlate of neuronal firing, was not significantly increased in ACs after visual stimuli, not even when they coincided with auditory stimuli. Intracranial recordings demonstrate cross-sensory modulations, but no indication of active visual processing in human ACs.
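For orientation, BHFA of the kind referred to here is commonly extracted by band-passing the signal above 70 Hz and taking the Hilbert envelope. The SciPy sketch below shows that generic procedure under assumed band edges and filter order; it is not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bhfa_envelope(x, fs, band=(70.0, 150.0), order=4):
    """Band-pass the signal into the high-gamma range, then take the
    analytic amplitude as a broadband high-frequency activity estimate."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

# Toy example: 2 s of simulated intracranial signal sampled at 1 kHz
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.default_rng(0).standard_normal(t.size)
envelope = bhfa_envelope(sig, fs)
```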


Subjects
Auditory Cortex , Male , Humans , Female , Auditory Cortex/physiology , Acoustic Stimulation/methods , Evoked Potentials/physiology , Electroencephalography/methods , Visual Perception/physiology , Auditory Perception/physiology , Photic Stimulation
4.
Brief Bioinform ; 24(6)2023 09 22.
Article in English | MEDLINE | ID: mdl-37779248

ABSTRACT

Antimicrobial peptides (AMPs) are promising candidates for the development of new antibiotics due to their broad-spectrum activity against a range of pathogens. However, identifying AMPs among a large pool of candidates is challenging because of their complex structures and diverse sequences. In this study, we propose SenseXAMP, a cross-modal framework that leverages semantic embeddings and protein descriptors (PDs) of input sequences to improve the identification of AMPs. SenseXAMP includes a multi-input alignment module and a cross-representation fusion module to explore the hidden information between the two input features and better exploit the fused feature. To better address the AMP identification task, we accumulated the latest annotated AMP data to form richer benchmark datasets. Additionally, we expanded the existing AMP identification task settings by adding an AMP regression task to meet more specific requirements such as antimicrobial activity prediction. The experimental results indicate that SenseXAMP outperforms existing state-of-the-art models on multiple AMP-related datasets, including commonly used AMP classification datasets and our proposed benchmark datasets. Furthermore, we conducted a series of experiments to demonstrate the complementary nature of traditional PDs and protein pre-training models in AMP tasks. Our experiments reveal that SenseXAMP can effectively combine the advantages of PDs to improve the performance of protein pre-training models in AMP tasks.
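As a rough illustration of the two-input design the abstract describes, the hypothetical PyTorch module below projects a sequence embedding and a protein-descriptor vector into a shared space and mixes them with a learned gate. The dimensions, gating scheme, and class name are assumptions for the sketch, not the published SenseXAMP architecture.

```python
import torch
import torch.nn as nn

class TwoInputFusion(nn.Module):
    """Hypothetical sketch: align two feature types in a shared space,
    then fuse them with a learned per-dimension gate."""
    def __init__(self, emb_dim=1024, pd_dim=200, hidden=256):
        super().__init__()
        self.align_emb = nn.Linear(emb_dim, hidden)   # semantic embeddings
        self.align_pd = nn.Linear(pd_dim, hidden)     # protein descriptors
        self.gate = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Sigmoid())
        self.head = nn.Linear(hidden, 1)              # AMP vs. non-AMP logit

    def forward(self, emb, pd):
        e = torch.relu(self.align_emb(emb))
        p = torch.relu(self.align_pd(pd))
        g = self.gate(torch.cat([e, p], dim=-1))
        return self.head(g * e + (1.0 - g) * p)       # gated mixing

model = TwoInputFusion()
logits = model(torch.randn(8, 1024), torch.randn(8, 200))  # batch of 8
```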


Subjects
Antimicrobial Cationic Peptides , Antimicrobial Peptides , Anti-Bacterial Agents
5.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38879756

ABSTRACT

Midbrain multisensory neurons undergo a significant postnatal transition in how they process cross-modal (e.g. visual-auditory) signals. In early stages, signals derived from common events are processed competitively; however, at later stages they are processed cooperatively such that their salience is enhanced. This transition reflects adaptation to cross-modal configurations that are consistently experienced and become informative about which signals correspond to common events. Tested here was the assumption that overt behaviors follow a similar maturation. Cats were reared in omnidirectional sound, thereby compromising the experience needed for this developmental process. Animals were then repeatedly exposed to different configurations of visual and auditory stimuli (e.g. spatiotemporally congruent or spatially disparate) that varied on each side of space, and their behavior was assessed using a detection/localization task. Animals showed enhanced performance to stimuli consistent with the experience provided: congruent stimuli elicited enhanced behaviors where spatially congruent cross-modal experience was provided, and spatially disparate stimuli elicited enhanced behaviors where spatially disparate cross-modal experience was provided. Cross-modal configurations not consistent with experience did not enhance responses. The presumptive benefit of such flexibility in the multisensory developmental process is to sensitize neural circuits (and the behaviors they control) to the features of the environment in which they will function. These experiments reveal that these processes have a high degree of flexibility, such that two (conflicting) multisensory principles can be implemented by cross-modal experience on opposite sides of space, even within the same animal.


Subjects
Acoustic Stimulation , Auditory Perception , Brain , Photic Stimulation , Visual Perception , Animals , Cats , Auditory Perception/physiology , Visual Perception/physiology , Photic Stimulation/methods , Brain/physiology , Brain/growth & development , Male , Female , Behavior, Animal/physiology
6.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38212286

ABSTRACT

Interference from task-irrelevant stimuli can occur during the semantic and response processing stages. Previous studies have shown both common and distinct mechanisms underlying semantic conflict processing and response conflict processing in the visual domain. However, it remains unclear whether common and/or distinct mechanisms are involved in semantic and response conflict processing in the cross-modal domain. Therefore, the present electroencephalography study adopted an audiovisual 2-1 mapping Stroop task to investigate whether common and/or distinct mechanisms underlie cross-modal semantic conflict and response conflict. Behaviorally, significant cross-modal semantic conflict and significant cross-modal response conflict were observed. Electroencephalography results revealed that the frontal N2 amplitude and theta power increased only in the semantic conflict condition, while the parietal N450 amplitude increased only in the response conflict condition. These findings indicated that distinct neural mechanisms were involved in cross-modal semantic conflict and response conflict processing, supporting domain-specific cognitive control mechanisms from a cross-modal, multistage conflict-processing perspective.


Subjects
Brain , Semantics , Brain/physiology , Reaction Time/physiology , Electroencephalography , Stroop Test
7.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38314581

ABSTRACT

Neural circuits support behavioral adaptations by integrating sensory and motor information with reward and error-driven learning signals, but it remains poorly understood how these signals are distributed across different levels of the corticohippocampal hierarchy. We trained rats on a multisensory object-recognition task and compared visual and tactile responses of simultaneously recorded neuronal ensembles in somatosensory cortex, secondary visual cortex, perirhinal cortex, and hippocampus. The sensory regions primarily represented unisensory information, whereas hippocampus was modulated by both vision and touch. Surprisingly, the sensory cortices and the hippocampus coded object-specific information, whereas the perirhinal cortex did not. Instead, perirhinal cortical neurons signaled trial outcome upon reward-based feedback. A majority of outcome-related perirhinal cells responded to a negative outcome (reward omission), whereas a minority coded a positive outcome (reward delivery). Our results highlight a distributed neural coding of multisensory variables in the cortico-hippocampal hierarchy. Notably, the perirhinal cortex emerges as a crucial region for conveying motivational outcomes, whereas distinct functions related to object identity are observed in the sensory cortices and hippocampus.


Subjects
Perirhinal Cortex , Rats , Animals , Hippocampus/physiology , Visual Perception/physiology , Parietal Lobe , Reward
8.
Cereb Cortex ; 34(3)2024 03 01.
Article in English | MEDLINE | ID: mdl-38517179

ABSTRACT

The mechanisms of semantic conflict and response conflict in the Stroop task have mainly been investigated in the visual modality, and understanding of these mechanisms in cross-modal settings remains limited. In this electroencephalography (EEG) study, an audiovisual 2-1 mapping Stroop task was utilized to investigate whether distinct and/or common neural mechanisms underlie cross-modal semantic conflict and response conflict. The response time data showed significant cross-modal semantic and response conflict effects. Interestingly, the magnitude of the semantic conflict was smaller in the fast response time bins than in the slow response time bins, whereas no such difference was observed for the response conflict. The EEG data demonstrated that cross-modal semantic conflict specifically increased the N450 amplitude, whereas cross-modal response conflict specifically enhanced theta band power and theta phase synchronization between the medial frontal cortex (MFC) and lateral prefrontal electrodes, as well as between the MFC and motor electrodes. In addition, both cross-modal semantic conflict and response conflict led to a decrease in P3 amplitude. Taken together, these findings provide cross-modal evidence for a domain-specific mechanism in conflict detection and suggest that both domain-specific and domain-general mechanisms exist in conflict resolution.


Subjects
Electroencephalography , Semantics , Brain Mapping , Frontal Lobe/physiology , Reaction Time/physiology
9.
J Neurosci ; 43(27): 4984-4996, 2023 07 05.
Article in English | MEDLINE | ID: mdl-37197979

ABSTRACT

It has been postulated that the brain is organized by "metamodal," sensory-independent cortical modules capable of performing tasks (e.g., word recognition) in both "standard" and novel sensory modalities. Still, this theory has primarily been tested in sensory-deprived individuals, with mixed evidence in neurotypical subjects, thereby limiting its support as a general principle of brain organization. Critically, current theories of metamodal processing do not specify requirements for successful metamodal processing at the level of neural representations. Specification at this level may be particularly important in neurotypical individuals, where novel sensory modalities must interface with existing representations for the standard sense. Here we hypothesized that effective metamodal engagement of a cortical area requires congruence between stimulus representations in the standard and novel sensory modalities in that region. To test this, we first used fMRI to identify bilateral auditory speech representations. We then trained 20 human participants (12 female) to recognize vibrotactile versions of auditory words using one of two auditory-to-vibrotactile algorithms. The vocoded algorithm attempted to match the encoding scheme of auditory speech while the token-based algorithm did not. Crucially, using fMRI, we found that only in the vocoded group did trained-vibrotactile stimuli recruit speech representations in the superior temporal gyrus and lead to increased coupling between them and somatosensory areas. Our results advance our understanding of brain organization by providing new insight into unlocking the metamodal potential of the brain, thereby benefitting the design of novel sensory substitution devices that aim to tap into existing processing streams in the brain.

SIGNIFICANCE STATEMENT: It has been proposed that the brain is organized by "metamodal," sensory-independent modules specialized for performing certain tasks. This idea has inspired therapeutic applications, such as sensory substitution devices, for example, enabling blind individuals "to see" by transforming visual input into soundscapes. Yet, other studies have failed to demonstrate metamodal engagement. Here, we tested the hypothesis that metamodal engagement in neurotypical individuals requires matching the encoding schemes between stimuli from the novel and standard sensory modalities. We trained two groups of subjects to recognize words generated by one of two auditory-to-vibrotactile transformations. Critically, only vibrotactile stimuli that were matched to the neural encoding of auditory speech engaged auditory speech areas after training. This suggests that matching encoding schemes is critical to unlocking the brain's metamodal potential.


Subjects
Auditory Cortex , Speech Perception , Humans , Female , Speech , Auditory Perception , Brain , Temporal Lobe , Magnetic Resonance Imaging/methods , Acoustic Stimulation/methods
10.
J Neurosci ; 43(6): 1018-1026, 2023 02 08.
Article in English | MEDLINE | ID: mdl-36604169

ABSTRACT

Hemianopia (unilateral blindness), a common consequence of stroke and trauma to visual cortex, is a debilitating disorder for which there are few treatments. Research in an animal model has suggested that visual-auditory stimulation therapy, which exploits the multisensory architecture of the brain, may be effective in restoring visual sensitivity in hemianopia. This therapy was tested in two male human patients who had been hemianopic for at least 8 months following a stroke. The patients were repeatedly exposed to congruent visual-auditory stimuli within their blinded hemifield during 2-h sessions over several weeks. The results were dramatic. Both recovered the ability to detect and describe visual stimuli throughout their formerly blind field within a few weeks. They could also localize these stimuli, identify some of their features, and perceive multiple visual stimuli simultaneously in both fields. These results indicate that this multisensory therapy is a rapid and effective method for restoring visual function in hemianopia.

SIGNIFICANCE STATEMENT: Hemianopia (blindness on one side of space) is widely considered to be a permanent disorder. Here, we show that a simple multisensory training paradigm can ameliorate this disorder in human patients.


Subjects
Hemianopsia , Stroke , Animals , Humans , Male , Hemianopsia/therapy , Visual Perception/physiology , Vision, Ocular , Brain , Photic Stimulation/methods , Blindness/therapy
11.
BMC Bioinformatics ; 25(1): 41, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38267858

ABSTRACT

BACKGROUND: With the development of single-cell technology, many cell traits can be measured, and multi-omics profiling technologies can jointly measure two or more traits in a single cell simultaneously. To process the rapidly accumulating data, computational methods for multimodal data integration are needed. RESULTS: Here, we present inClust+, a deep generative framework for multi-omics data. It builds on our previous inClust framework, which was specific to transcriptome data, and augments it with two mask modules designed for multimodal data processing: an input-mask module in front of the encoder and an output-mask module behind the decoder. inClust+ was first used to integrate scRNA-seq and MERFISH data from similar cell populations and to impute MERFISH data based on scRNA-seq data. Then, inClust+ was shown to be capable of integrating multimodal data (e.g., tri-modal data with gene expression, chromatin accessibility, and protein abundance) in the presence of batch effects. Finally, inClust+ was used to integrate an unlabeled monomodal scRNA-seq dataset and two labeled multimodal CITE-seq datasets, transfer labels from the CITE-seq datasets to the scRNA-seq dataset, and generate the missing protein-abundance modality in the monomodal scRNA-seq data. In these examples, the performance of inClust+ was better than or comparable to that of the most recent tools for the corresponding task. CONCLUSIONS: inClust+ is a suitable framework for handling multimodal data. Moreover, the successful implementation of masking in inClust+ suggests that it can be applied to other deep learning methods with a similar encoder-decoder architecture, broadening the application scope of these models.
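The mask pattern described (an input mask before the encoder, an output mask behind the decoder) can be rendered generically: zero out unobserved features on the way in and restrict reconstruction to observed features on the way out. The sketch below is a minimal, hypothetical rendering of that idea, not the published inClust+ code.

```python
import torch
import torch.nn as nn

class MaskedAutoencoder(nn.Module):
    """Minimal encoder-decoder with input and output masks for data in
    which some features (e.g. a whole modality) may be unobserved."""
    def __init__(self, n_features=2000, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_features))

    def forward(self, x, in_mask, out_mask):
        z = self.encoder(x * in_mask)        # input-mask module before encoder
        recon = self.decoder(z) * out_mask   # output-mask module behind decoder
        return recon, z

model = MaskedAutoencoder()
x = torch.randn(4, 2000)
observed = (torch.rand(4, 2000) > 0.5).float()  # indicator of measured features
recon, z = model(x, observed, observed)
loss = ((recon - x * observed) ** 2).sum() / observed.sum()  # observed-only loss
```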


Subjects
Chromatin , Transcriptome , Phenotype
12.
Neuroimage ; : 120720, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38971484

ABSTRACT

This meta-analysis summarizes evidence from 44 neuroimaging experiments and characterizes the general linguistic network in early deaf individuals. Meta-analytic comparisons with hearing individuals found that a specific set of regions (in particular the left inferior frontal gyrus and posterior middle temporal gyrus) participates in supramodal language processing. Beyond previously described modality-specific differences, the present study showed that the left calcarine gyrus and the right caudate were additionally recruited in deaf compared with hearing individuals. This study also showed that the bilateral posterior superior temporal gyrus is shaped by cross-modal plasticity, whereas the left frontotemporal areas are shaped by early language experience. Although an overall left-lateralized pattern for language processing was observed in the early deaf individuals, regional lateralization was altered in the inferior temporal gyrus and anterior temporal lobe. These findings indicate that the core language network functions in a modality-independent manner, and they provide a foundation for determining the contributions of sensory and linguistic experiences in shaping the neural bases of language processing.

13.
J Neurophysiol ; 131(4): 723-737, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38416720

ABSTRACT

The brain engages the processes of multisensory integration and recalibration to deal with discrepant multisensory signals. These processes consider the reliability of each sensory input, with the more reliable modality receiving the stronger weight. Sensory reliability is typically assessed via the variability of participants' judgments, yet these can be shaped by factors both external and internal to the nervous system. For example, motor noise and a participant's dexterity with the specific response method contribute to judgment variability, and different response methods applied to the same stimuli can result in different estimates of sensory reliabilities. Here we ask how such variations in reliability induced by variations in the response method affect multisensory integration and sensory recalibration, as well as motor adaptation, in a visuomotor paradigm. Participants performed center-out hand movements and were asked to judge the position of the hand or rotated visual feedback at the movement end points. We manipulated the variability, and thus the reliability, of repeated judgments by asking participants to respond using either a visual or a proprioceptive matching procedure. We find that the relative weights of visual and proprioceptive signals, and thus the asymmetry of multisensory integration and recalibration, depend on the reliability modulated by the judgment method. Motor adaptation, in contrast, was insensitive to this manipulation. Hence, the outcome of multisensory binding is shaped by the noise introduced by sensorimotor processing, in line with perception and action being intertwined.

NEW & NOTEWORTHY: Our brain tends to combine multisensory signals based on their respective reliability. This reliability depends on sensory noise in the environment, noise in the nervous system, and, as we show here, variability induced by the specific judgment procedure.
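The reliability weighting invoked here is usually formalized as maximum-likelihood cue combination. With visual and proprioceptive position estimates $\hat{s}_V$ and $\hat{s}_P$ whose variances are $\sigma_V^2$ and $\sigma_P^2$, the integrated estimate under the standard model is

$$
\hat{s} = w_V \hat{s}_V + w_P \hat{s}_P,
\qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_P^2},
\qquad
w_P = 1 - w_V .
$$

Any judgment method that inflates the measured variance of one modality therefore shifts the inferred weights, which is the manipulation this study exploits.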


Subjects
Judgment , Visual Perception , Humans , Judgment/physiology , Visual Perception/physiology , Reproducibility of Results , Hand/physiology , Movement/physiology , Proprioception/physiology
14.
Eur J Neurosci ; 59(7): 1770-1788, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38230578

ABSTRACT

Studies on multisensory perception often focus on simplified conditions in which a single stimulus is presented per modality. Yet, in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own relative spatio-temporal alignment to the sound but also on the spatio-temporal alignment of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, calling for established models of multisensory causal inference to be extended to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.


Subjects
Auditory Perception , Sound Localization , Acoustic Stimulation , Photic Stimulation , Visual Perception , Humans
15.
Eur J Neurosci ; 59(10): 2596-2615, 2024 May.
Article in English | MEDLINE | ID: mdl-38441248

ABSTRACT

Auditory deprivation following congenital/pre-lingual deafness (C/PD) can drastically affect brain development and its functional organisation. This systematic review intends to extend current knowledge of the impact of C/PD and deafness duration on brain resting-state networks (RSNs), review changes in RSNs and spoken language outcomes after cochlear implantation (CI), and draw conclusions for future research. The systematic literature search followed the PRISMA guideline. Two independent reviewers searched four electronic databases using combined keywords: 'auditory deprivation', 'congenital/prelingual deafness', 'resting-state functional connectivity' (RSFC), 'resting-state fMRI' and 'cochlear implant'. Seventeen studies (16 cross-sectional and one longitudinal) met the inclusion criteria. Using the Crowe Critical Appraisal Tool, the publications' quality was rated between 65.0% and 92.5% (mean: 84.10%), with ≥80% in 13 of the 17 studies. A few studies were deficient in sampling and/or ethical considerations. According to the findings, early auditory deprivation results in enhanced RSFC between the auditory network and brain networks involved in non-verbal communication, and high levels of spontaneous neural activity in the auditory cortex before CI indicate that auditory cortical areas have been occupied by other sensory modalities (cross-modal plasticity), which is associated with sub-optimal CI outcomes. Overall, current evidence supports the idea that, beyond intramodal and cross-modal plasticity, adaptation across the entire brain following auditory deprivation contributes to spoken language development and compensatory behaviours.


Subjects
Cochlear Implantation , Deafness , Humans , Deafness/physiopathology , Cochlear Implantation/methods , Brain/physiopathology , Brain/diagnostic imaging , Brain/physiology , Nerve Net/physiopathology , Nerve Net/diagnostic imaging , Magnetic Resonance Imaging , Auditory Cortex/physiopathology , Auditory Cortex/diagnostic imaging , Cochlear Implants , Treatment Outcome
16.
Brief Bioinform ; 23(2)2022 03 10.
Article in English | MEDLINE | ID: mdl-35224614

ABSTRACT

Accurate identification of drug-target interactions (DTIs) plays a crucial role in drug discovery. Compared with traditional experimental methods, which are labor-intensive and time-consuming, computational methods have become increasingly popular in recent years. Conventional computational methods typically rely on heterogeneous networks that integrate diverse drug-related and target-related datasets without fully exploring drug and target similarities. In this paper, we propose a new method, named DTIHNC, for Drug-Target Interaction identification, which integrates Heterogeneous Networks and Cross-modal similarities calculated from relations between drugs, proteins, diseases and side effects. First, low-dimensional features of drugs, proteins, diseases and side effects are obtained from the original features by a denoising autoencoder. Then, we construct a heterogeneous network across drug, protein, disease and side-effect nodes. In this network, we apply heterogeneous graph attention operations to update a node's embedding from information in its 1-hop neighbors and, for multi-hop neighbor information, we propose a random-walk-with-restart-aware graph attention that integrates information from a larger neighborhood region. Next, we calculate cross-modal drug and protein similarities from cross-scale relations between drugs, proteins, diseases and side effects. Finally, a multi-layer convolutional neural network deeply integrates the similarity information of drugs and proteins with the embedding features obtained from the heterogeneous graph attention network. Experiments have demonstrated its effectiveness and better performance than state-of-the-art methods. Datasets and a stand-alone package are available on GitHub at https://github.com/ningq669/DTIHNC.
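Of the components listed, random walk with restart is the most self-contained; a minimal NumPy version on a toy adjacency matrix is sketched below. It illustrates the multi-hop neighborhood weighting in general terms, not the DTIHNC implementation, and the restart probability is an arbitrary choice.

```python
import numpy as np

def random_walk_with_restart(adj, seed, restart=0.5, tol=1e-10):
    """Stationary visiting distribution of a walker that, at each step,
    follows an edge with probability 1-restart or jumps back to the seed."""
    col_sums = adj.sum(axis=0, keepdims=True)
    P = adj / np.where(col_sums == 0, 1.0, col_sums)  # column-stochastic
    e = np.zeros(adj.shape[0])
    e[seed] = 1.0
    p = e.copy()
    while True:
        p_next = (1.0 - restart) * P @ p + restart * e
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy 4-node graph; node 0 is the seed (e.g. a drug node)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
weights = random_walk_with_restart(adj, seed=0)
print(weights)  # larger values = stronger multi-hop affinity to the seed
```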


Subjects
Drug-Related Side Effects and Adverse Reactions , Neural Networks, Computer , Drug Discovery , Drug Interactions , Humans , Proteins/metabolism
17.
Biol Lett ; 20(4): 20240025, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38565149

ABSTRACT

If a congenitally blind person learned to distinguish between a cube and a sphere by touch, would they immediately recognize these objects by sight once their vision was restored? This question, posed by Molyneux in 1688, has puzzled philosophers and scientists ever since. To overcome the ethical and practical difficulties of investigating cross-modal recognition, we studied inexperienced poultry chicks, which can be reared in darkness until the moment of a visual test with no detrimental consequences. After hatching chicks in darkness, we exposed them to either smooth or bumpy tactile stimuli for 24 h. Immediately after the tactile exposure, chicks were tested in a visual recognition task during their first experience with light. At first sight, chicks that had been exposed to the smooth tactile stimuli approached the smooth visual stimulus significantly more than those exposed to the bumpy tactile stimuli. These results show that visually inexperienced chicks can solve Molyneux's problem, indicating that cross-modal recognition does not require prior multimodal experience. At least in this precocial species, supra-modal brain areas appear to be functional already at birth. This discovery paves the way for the investigation of predisposed cross-modal cognition that does not depend on visual experience.


Subjects
Recognition, Psychology , Touch , Cognition , Chickens , Animals
18.
Exp Brain Res ; 242(3): 599-618, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38227008

ABSTRACT

The ability to inhibit an already initiated response is crucial for navigating the environment. However, it is unclear which characteristics make stop-signals more likely to be processed efficiently. In three consecutive studies, we demonstrate that stop-signal modality and location are key factors that influence reactive response inhibition. Study 1 shows that tactile stop-signals lead to better performance compared to visual stop-signals in an otherwise visual choice-reaction task. Results of Study 2 reveal that the location of the stop-signal matters: if a visual stop-signal is presented at a different location than the visual go-signal, stopping performance is enhanced. Extending these results, Study 3 suggests that tactile stop-signals and location-distinct visual stop-signals retain their performance-enhancing effect when visual distractors are presented at the location of the go-signal. In sum, these results confirm that stop-signal modality and location influence reactive response inhibition, even in the face of concurrent distractors. Future research may extend and generalize these findings to other cross-modal setups.


Subjects
Attention , Inhibition, Psychological , Humans , Reaction Time/physiology , Attention/physiology , Psychomotor Performance/physiology
19.
Cereb Cortex ; 33(4): 948-958, 2023 02 07.
Article in English | MEDLINE | ID: mdl-35332919

ABSTRACT

Concordant visual-auditory stimuli enhance the responses of individual superior colliculus (SC) neurons. This neuronal capacity for "multisensory integration" is not innate: it is acquired only after substantial cross-modal (e.g. auditory-visual) experience. Masking transient auditory cues by raising animals in omnidirectional sound ("noise-rearing") precludes their ability to obtain this experience and the ability of the SC to construct a normal multisensory (auditory-visual) transform. SC responses to combinations of concordant visual-auditory stimuli are depressed, rather than enhanced. The present experiments examined the behavioral consequence of this rearing condition in a simple detection/localization task. In the first experiment, the auditory component of the concordant cross-modal pair was novel, and only the visual stimulus was a target. In the second experiment, both component stimuli were targets. Noise-reared animals failed to show multisensory performance benefits in either experiment. These results reveal a close parallel between behavior and single neuron physiology in the multisensory deficits that are induced when noise disrupts early visual-auditory experience.


Subjects
Auditory Perception , Noise , Animals , Auditory Perception/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods , Neurons/physiology , Superior Colliculi/physiology , Visual Perception/physiology
20.
Cereb Cortex ; 33(15): 9280-9290, 2023 07 24.
Article in English | MEDLINE | ID: mdl-37280751

ABSTRACT

Shape processing, whether by seeing or touching, is pivotal to object recognition and manipulation. Although low-level signals are initially processed by different modality-specific neural circuits, multimodal responses to object shapes have been reported along both the ventral and dorsal visual pathways. To understand this transitional process, we conducted visual and haptic shape perception fMRI experiments testing basic shape features (i.e. curvature and rectilinear features) across the visual pathways. Using a combination of region-of-interest-based support vector machine decoding and a voxel selection method, we found that the top visual-discriminative voxels in the left occipital cortex (OC) could also classify haptic shape features, and the top haptic-discriminative voxels in the left posterior parietal cortex (PPC) could also classify visual shape features. Furthermore, these voxels could decode shape features in a cross-modal manner, suggesting shared neural computation across visual and haptic modalities. In the univariate analysis, the top haptic-discriminative voxels in the left PPC showed a haptic rectilinear feature preference, whereas the top visual-discriminative voxels in the left OC showed no significant shape feature preference in either modality. Together, these results suggest that mid-level shape features are represented in a modality-independent manner in both the ventral and dorsal streams.
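The decoding logic summarized here (select the most discriminative voxels, fit a linear classifier, then test across modalities) can be sketched with scikit-learn. The arrays below are random placeholders, so accuracy will hover at chance; the point is the analysis structure, not the result.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
# Hypothetical ROI patterns: trials x voxels; labels 0=curved, 1=rectilinear
X_visual = rng.standard_normal((80, 500))
y_visual = rng.integers(0, 2, 80)
X_haptic = rng.standard_normal((80, 500))
y_haptic = rng.integers(0, 2, 80)

# Voxel selection (top-k by F-score) followed by a linear SVM
decoder = make_pipeline(SelectKBest(f_classif, k=100), LinearSVC())

# Cross-modal decoding: train on visual shape features, test on haptic
decoder.fit(X_visual, y_visual)
print("cross-modal accuracy:", decoder.score(X_haptic, y_haptic))
```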


Subjects
Pattern Recognition, Visual , Visual Perception , Pattern Recognition, Visual/physiology , Visual Perception/physiology , Occipital Lobe/diagnostic imaging , Touch/physiology , Parietal Lobe , Magnetic Resonance Imaging/methods , Brain Mapping