Results 1 - 20 of 117
1.
Psychon Bull Rev ; 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39105938

ABSTRACT

We investigated the contribution of multisensory predictions to body ownership and, beyond that, to the integration of body-related signals. Contrary to the prevailing idea that cues necessarily have to be perceived simultaneously to be integrated, we proposed the prediction-confirmation account, according to which a perceived cue can be integrated with a predicted cue as long as the two signals are approximately simultaneous. To test this hypothesis, a standard rubber hand illusion (RHI) paradigm was used. In the first part of each trial, the illusion was induced while participants observed the rubber hand being touched with a paintbrush. In the subsequent part of the trial, (i) both the rubber hand and the participant's real hand were stroked as before (visible/synchronous condition), (ii) the rubber hand was no longer stroked (visible/tactile-only condition), or (iii) both the rubber hand and the participant's real hand were synchronously stroked while the location where the rubber hand was touched was occluded (occluded/synchronous condition). In this latter condition, however, participants still perceived the approaching movement of the paintbrush and could therefore use this visual cue to predict the time point at which the tactile cue should occur (i.e., visuotactile prediction). Our major finding was that, in contrast to the visible/tactile-only condition, the occluded/synchronous condition did not exhibit a decrease of the RHI, just as in the visible/synchronous condition. This finding supports the prediction-confirmation account and suggests that this mechanism operates even in the standard version of the RHI.

2.
Front Psychol ; 15: 1396946, 2024.
Article in English | MEDLINE | ID: mdl-39091706

ABSTRACT

Introduction: The prevailing theories of consciousness consider the integration of different sensory stimuli a key component for this phenomenon to arise in the brain. Although many theories and models have been proposed for multisensory integration between supraliminal stimuli (e.g., the optimal integration model), we do not know whether multisensory integration also occurs for subliminal stimuli and which psychophysical mechanisms it follows. Methods: To investigate this, subjects were exposed to visual (virtual reality) and/or haptic stimuli (electro-cutaneous stimulation) above or below their perceptual threshold. They had to discriminate, in a two-alternative forced choice task, the intensity of unimodal and/or bimodal stimuli. They were then asked to discriminate the sensory modality while their EEG responses were recorded. Results: We found evidence of multisensory integration in the supraliminal condition, following the classical optimal model. Importantly, even in subliminal trials, participants' performance in the bimodal condition was significantly more accurate when discriminating the intensity of the stimulation. Moreover, significant differences emerged between unimodal and bimodal activity templates in parieto-temporal areas known for their integrative role. Discussion: This converging evidence, although preliminary and in need of confirmation with further data, suggests that subliminal multimodal stimuli can be integrated, thus filling a meaningful gap in the debate about the relationship between consciousness and multisensory integration.
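
The "optimal integration model" invoked above is, in its simplest form, a reliability-weighted average of the unimodal estimates. Below is a minimal numerical sketch, assuming Gaussian unimodal noise; the variable names and example variances are hypothetical and are not taken from the study.

```python
import numpy as np

# Hypothetical unimodal estimates of stimulus intensity and their noise (variances).
visual_est, visual_var = 0.62, 0.09    # e.g., the VR visual cue
haptic_est, haptic_var = 0.55, 0.04    # e.g., the electro-cutaneous cue

# Maximum-likelihood (optimal) integration: reliability-weighted average.
w_visual = (1 / visual_var) / (1 / visual_var + 1 / haptic_var)
w_haptic = 1 - w_visual
bimodal_est = w_visual * visual_est + w_haptic * haptic_est

# The predicted bimodal variance is lower than either unimodal variance.
bimodal_var = 1 / (1 / visual_var + 1 / haptic_var)

print(f"bimodal estimate: {bimodal_est:.3f}, bimodal variance: {bimodal_var:.3f}")
```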

3.
Genome Biol ; 25(1): 198, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075536

ABSTRACT

Single-cell multi-omics data reveal complex cellular states, providing significant insights into cellular dynamics and disease. Yet integrating multi-omics data presents challenges: some modalities have not reached the robustness or clarity of established transcriptomics, and, coupled with data scarcity for less established modalities and the intricacies of integration, these challenges limit our ability to maximize the benefits of single-cell omics. We introduce scCross, a tool leveraging variational autoencoders, generative adversarial networks, and the mutual nearest neighbors (MNN) technique for modality alignment. By enabling single-cell cross-modal data generation, multi-omics data simulation, and in silico cellular perturbations, scCross enhances the utility of single-cell multi-omics studies.


Subject(s)
Single-Cell Analysis, Single-Cell Analysis/methods, Humans, Computer Simulation, Genomics/methods, Software, Computational Biology/methods, Multiomics
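
scCross's architecture is not reproduced here; as a hedged illustration of one of its ingredients, the mutual nearest neighbors (MNN) technique named in the abstract above, the sketch below pairs cells across two modality embeddings with scikit-learn. The embedding shapes and the choice of k are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mutual_nearest_neighbors(emb_rna, emb_atac, k=15):
    """Return index pairs (i, j) where RNA cell i and ATAC cell j are each
    other's k-nearest neighbours in a shared embedding space."""
    nn_rna = NearestNeighbors(n_neighbors=k).fit(emb_rna)
    nn_atac = NearestNeighbors(n_neighbors=k).fit(emb_atac)
    # Neighbours of each ATAC cell among RNA cells, and vice versa.
    atac_to_rna = nn_rna.kneighbors(emb_atac, return_distance=False)
    rna_to_atac = nn_atac.kneighbors(emb_rna, return_distance=False)
    pairs = []
    for j, rna_idx in enumerate(atac_to_rna):   # j indexes ATAC cells
        for i in rna_idx:                       # i indexes RNA cells
            if j in rna_to_atac[i]:             # mutual-neighbour condition
                pairs.append((i, j))
    return pairs

# Hypothetical 32-dimensional embeddings for 500 RNA and 400 ATAC cells.
rng = np.random.default_rng(0)
pairs = mutual_nearest_neighbors(rng.normal(size=(500, 32)),
                                 rng.normal(size=(400, 32)))
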
4.
Nano Lett ; 24(23): 7091-7099, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38804877

ABSTRACT

Multimodal perception can capture more precise and comprehensive information than unimodal approaches. However, current sensory systems typically merge multimodal signals at computing terminals after parallel processing and transmission, which risks losing spatial association information and requires time stamps to maintain temporal coherence for time-series data. Here we demonstrate bioinspired in-sensor multimodal fusion, which effectively enhances comprehensive perception and reduces data transfer between sensory terminals and computation units. By adopting floating-gate phototransistors with reconfigurable photoresponse plasticity, we realize agile spatial and spatiotemporal fusion under nonvolatile and volatile photoresponse modes. For optimal spatial estimation, we integrate spatial information from visual-tactile signals. For dynamic events, we capture and fuse spatiotemporal information from visual-audio signals in real time, realizing a dance-music synchronization recognition task without a time-stamping process. This in-sensor multimodal fusion approach has the potential to simplify multimodal integration systems, extending the in-sensor computing paradigm.

5.
JMIR Med Inform ; 12: e48862, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38557661

ABSTRACT

BACKGROUND: Triage is the process of accurately assessing patients' symptoms and providing them with proper clinical treatment in the emergency department (ED). Although many countries have developed triage processes to stratify patients' clinical severity and thereby distribute medical resources, the current triage process still has limitations. Because the triage level is mainly assigned by experienced nurses based on a mix of subjective and objective criteria, mis-triage often occurs in the ED, which not only causes adverse effects for patients but also imposes an undue burden on the health care delivery system. OBJECTIVE: Our study aimed to design a prediction system based on triage information, including demographics, vital signs, and chief complaints. The proposed system can not only handle heterogeneous data, including tabular data and free-text data, but also provide interpretability for better acceptance by ED staff. METHODS: We proposed a system comprising 3 subsystems, each handling a single task: triage level prediction, hospitalization prediction, and length of stay prediction. We used a large amount of retrospective data to pretrain the model and then fine-tuned the model on a prospective data set with gold-standard labels. The proposed deep learning framework was built with TabNet and MacBERT (a Chinese version of bidirectional encoder representations from transformers [BERT]). RESULTS: The performance of our proposed model was evaluated on data collected from the National Taiwan University Hospital (901 patients included). The model achieved promising results on the collected data set, with accuracy values of 63%, 82%, and 71% for triage level prediction, hospitalization prediction, and length of stay prediction, respectively. CONCLUSIONS: Our system improved the prediction of 3 different medical outcomes compared with other machine learning methods. With the pretrained vital sign encoder and the MacBERT encoder re-pretrained with masked language modeling, our multimodal model provides deeper insight into the characteristics of electronic health records. Additionally, by providing interpretability, we believe that the proposed system can assist nursing staff and physicians in making appropriate medical decisions.
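
The paper's exact TabNet-plus-MacBERT configuration is not reproduced here; the sketch below is only a hedged illustration of the general late-fusion pattern it describes (a tabular encoder for vital signs concatenated with a BERT-style text encoder for chief complaints), using PyTorch and Hugging Face Transformers. The simple MLP standing in for TabNet, the feature dimensions, and the classification head are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TriageFusionModel(nn.Module):
    """Late fusion of tabular triage features and free-text chief complaints."""
    def __init__(self, n_tabular_features=8, n_classes=5,
                 text_model="hfl/chinese-macbert-base"):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(text_model)
        self.tab_encoder = nn.Sequential(          # simple stand-in for TabNet
            nn.Linear(n_tabular_features, 64), nn.ReLU(), nn.Linear(64, 64))
        hidden = self.text_encoder.config.hidden_size
        self.classifier = nn.Linear(hidden + 64, n_classes)

    def forward(self, input_ids, attention_mask, tabular):
        text_repr = self.text_encoder(input_ids=input_ids,
                                      attention_mask=attention_mask
                                      ).last_hidden_state[:, 0]   # [CLS] token
        tab_repr = self.tab_encoder(tabular)
        return self.classifier(torch.cat([text_repr, tab_repr], dim=-1))

# Hypothetical usage: one chief complaint plus 8 tabular vital-sign features.
tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-macbert-base")
batch = tokenizer(["腹痛三天"], return_tensors="pt", padding=True)
model = TriageFusionModel()
logits = model(batch["input_ids"], batch["attention_mask"], torch.randn(1, 8))
```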

6.
Front Psychol ; 15: 1345906, 2024.
Article in English | MEDLINE | ID: mdl-38596333

ABSTRACT

Introduction: Temporal coordination between speech and gestures has been thoroughly studied in natural production. In most cases, gesture strokes precede or coincide with the stressed syllable of the words they are semantically associated with. Methods: To understand whether the processing of speech and gestures is attuned to such temporal coordination, we investigated the effect of delaying, preposing, or eliminating individual gestures on memory for words in an experimental study in which 83 participants watched video sequences of naturalistic 3D-animated speakers generated from motion capture data. A target word in the sequence appeared (a) with a gesture presented in its original position synchronized with speech, (b) temporally shifted 500 ms before or (c) after the original position, or (d) with the gesture eliminated. Participants were asked to retell the videos in a free recall task. The strength of recall was operationalized as the inclusion of the target word in the free recall. Results: Both eliminated and delayed gesture strokes resulted in reduced recall rates compared with synchronized strokes, whereas there was no difference between advanced (preposed) and synchronized strokes. An item-level analysis also showed that the greater the interval between the onsets of delayed strokes and the stressed syllables of target words, the greater the negative effect on recall. Discussion: These results indicate that speech-gesture synchrony affects memory for speech and that the temporal patterns common in production lead to the best recall. Importantly, the study also showcases a procedure for using motion capture-based 3D-animated speakers to create an experimental paradigm for the study of speech-gesture comprehension.

7.
bioRxiv ; 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38464242

ABSTRACT

Recent experimental developments enable single-cell multimodal epigenomic profiling, which measures multiple histone modifications and chromatin accessibility within the same cell. Such parallel measurements provide exciting new opportunities to investigate how epigenomic modalities vary together across cell types and states. A pivotal step in using this type of data is integrating the epigenomic modalities to learn a unified representation of each cell, but existing approaches are not designed to model the unique nature of this data type. Our key insight is to model single-cell multimodal epigenome data as a multi-channel sequential signal. Based on this insight, we developed ConvNet-VAEs, a novel framework that uses 1D-convolutional variational autoencoders (VAEs) for single-cell multimodal epigenomic data integration. We evaluated ConvNet-VAEs on nano-CT and scNTT-seq data generated from juvenile mouse brain and human bone marrow. We found that ConvNet-VAEs can perform dimension reduction and batch correction better than previous architectures while using significantly fewer parameters. Furthermore, the performance gap between convolutional and fully connected architectures increases with the number of modalities: deeper convolutional architectures can increase performance, whereas performance degrades for deeper fully connected architectures. Our results indicate that convolutional autoencoders are a promising method for integrating current and future single-cell multimodal epigenomic datasets.
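
A hedged PyTorch sketch of the core idea described above: a 1D-convolutional VAE encoder that treats the epigenomic modalities as channels of a sequential signal along binned genomic positions. The channel counts, kernel sizes, bin length, and latent dimension are illustrative assumptions, not ConvNet-VAEs' actual configuration.

```python
import torch
import torch.nn as nn

class Conv1dVAEEncoder(nn.Module):
    """Encode a cell's multimodal epigenomic profile, shaped
    (modalities, genomic_bins), into a latent Gaussian."""
    def __init__(self, n_modalities=3, n_bins=2048, latent_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_modalities, 32, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.Flatten())
        flat = 64 * (n_bins // 16)          # two stride-4 convolutions
        self.mu = nn.Linear(flat, latent_dim)
        self.logvar = nn.Linear(flat, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

# Hypothetical batch: 16 cells, 3 modalities (e.g., two histone marks + ATAC), 2048 bins.
z, mu, logvar = Conv1dVAEEncoder()(torch.randn(16, 3, 2048))
```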

8.
Behav Processes ; 216: 105008, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38373472

ABSTRACT

Emotional contagion, a fundamental aspect of empathy, is an automatic and unconscious process in which individuals mimic and synchronize with the emotions of others. Extensively studied in rodents, this phenomenon is mediated through a range of sensory pathways, each contributing distinct insights. The olfactory pathway, marked by two types of pheromones modulated by oxytocin, plays a crucial role in transmitting emotional states. The auditory pathway, involving both squeaks and specific ultrasonic vocalizations, correlates with various emotional states and is essential for expression and communication in rodents. The visual pathway, though less relied upon, encompasses observational motions and facial expressions. The tactile pathway, a more recent focus, underscores the significance of physical interactions such as allogrooming and socio-affective touch in modulating emotional states. This comprehensive review not only highlights plausible neural mechanisms but also poses key questions for future research. It underscores the complexity of multimodal integration in emotional contagion, offering valuable insights for human psychology, neuroscience, animal welfare, and the burgeoning field of animal-human-AI interactions, thereby contributing to the development of a more empathetic intelligent future.


Subject(s)
Emotions, Rodents, Animals, Humans, Empathy, Facial Expression, Oxytocin
9.
J Neural Eng ; 21(1)2024 02 09.
Article in English | MEDLINE | ID: mdl-38290158

ABSTRACT

Objective. This study presents a novel methodological approach for incorporating information related to the peripheral sympathetic response into the investigation of neural dynamics. In particular, we explore how hedonic contextual olfactory stimuli influence the processing of neutral faces in terms of sympathetic response, event-related potentials, and effective connectivity. The objective is to investigate how the emotional valence of odors influences the cortical connectivity underlying face processing and the role of face-induced sympathetic arousal in this visual-olfactory multimodal integration. Approach. To this aim, we combine electrodermal activity (EDA) analysis and dynamic causal modeling to examine changes in cortico-cortical interactions. Results. The results reveal that stimuli eliciting sympathetic EDA responses are associated with a more negative N170 amplitude, which may be a marker of heightened arousal in response to faces. Hedonic odors, on the other hand, lead to a more negative N1 component and a reduced vertex positive potential, whether they are unpleasant or pleasant. Concerning connectivity, unpleasant odors strengthen the forward connection from the inferior temporal gyrus (ITG) to the middle temporal gyrus, which is involved in processing changeable facial features. Conversely, the occurrence of sympathetic responses after a stimulus is correlated with an inhibition of this same connection and an enhancement of the backward connection from the ITG to the fusiform face gyrus. Significance. These findings suggest that unpleasant odors may enhance the interpretation of emotional expressions and mental states, while faces capable of eliciting sympathetic arousal prioritize identity processing.


Subject(s)
Facial Recognition, Odorants, Facial Recognition/physiology, Galvanic Skin Response, Emotions/physiology, Evoked Potentials/physiology, Facial Expression, Electroencephalography
10.
J Exp Biol ; 227(1)2024 01 01.
Article in English | MEDLINE | ID: mdl-38180228

ABSTRACT

The integration of sensory information is required to maintain body posture and to generate robust yet flexible locomotion through unpredictable environments. To anticipate required adaptations in limb posture and enable compensation of sudden perturbations, an animal's nervous system assembles external (exteroception) and internal (proprioception) cues. Coherent neuronal representations of the proprioceptive context of the body and the appendages arise from the concerted action of multiple sense organs monitoring body kinetics and kinematics. This multimodal proprioceptive information, together with exteroceptive signals and brain-derived descending motor commands, converges onto premotor networks - i.e. the local neuronal circuitry controlling motor output and movements - within the ventral nerve cord (VNC), the insect equivalent of the vertebrate spinal cord. This Review summarizes existing knowledge and recent advances in understanding how local premotor networks in the VNC use convergent information to generate contextually appropriate activity, focusing on the example of posture control. We compare the role and advantages of distributed sensory processing over dedicated neuronal pathways, and the challenges of multimodal integration in distributed networks. We discuss how the gain of distributed networks may be tuned to enable the behavioral repertoire of these systems, and argue that insect premotor networks might compensate for their limited neuronal population size by, in comparison to vertebrate networks, relying more heavily on the specificity of their connections. At a time in which connectomics and physiological recording techniques enable anatomical and functional circuit dissection at an unprecedented resolution, insect motor systems offer unique opportunities to identify the mechanisms underlying multimodal integration for flexible motor control.


Subject(s)
Postural Balance, Proprioception, Animals, Brain, Cues, Locomotion
11.
Adv Exp Med Biol ; 1437: 37-58, 2024.
Article in English | MEDLINE | ID: mdl-38270852

ABSTRACT

We experience the world by constantly integrating cues from multiple modalities to form unified sensory percepts. Once familiar with the multimodal properties of an object, we can recognize it regardless of the modality involved. In this chapter we examine a visual-tactile orientation categorization experiment in rats and explore the involvement of the cerebral cortex in recognizing objects through multiple sensory modalities. In the orientation categorization task, rats learned to examine and judge the orientation of a raised, black and white grating using touch, vision, or both. Their multisensory performance was better than the predictions of linear models for cue combination, indicating synergy between the two sensory channels. Neural recordings from a candidate associative cortical area, the posterior parietal cortex (PPC), reflected the principal neuronal correlates of the behavioral results: PPC neurons encoded both graded information about the object and categorical information about the animal's decision. Intriguingly, single neurons showed identical responses under each of the three modality conditions, providing a substrate for a cortical circuit involved in modality-invariant processing of objects.


Subject(s)
Cerebral Cortex, Touch, Animals, Rats, Learning, Linear Models, Neurons
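
As a hedged illustration of the kind of cue-combination benchmark referred to in the abstract above, the sketch below computes one common prediction for bimodal sensitivity from unimodal sensitivities (d′ combination of independent channels); the numbers are hypothetical, and the paper's own linear models may differ.

```python
import numpy as np

# Hypothetical unimodal sensitivities (d-prime) for orientation categorization.
d_visual, d_tactile = 1.2, 1.0

# Combining two independent noisy channels predicts
# d_bimodal = sqrt(d_visual**2 + d_tactile**2); measured performance above
# such a benchmark would indicate synergy between the channels.
d_predicted = np.sqrt(d_visual**2 + d_tactile**2)
print(f"predicted bimodal d': {d_predicted:.2f}")
```
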
12.
Natl Sci Rev ; 11(1): nwad294, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38288367

ABSTRACT

To investigate the circuit-level neural mechanisms of behavior, simultaneous imaging of neuronal activity in multiple cortical and subcortical regions is highly desired. Miniature head-mounted microscopes offer the capability of calcium imaging in freely behaving animals. However, implanting multiple microscopes on a mouse brain remains challenging due to space constraints and the cumbersome weight of the equipment. Here, we present TINIscope, a Tightly Integrated Neuronal Imaging microscope optimized for electronic and opto-mechanical design. With its compact and lightweight design of 0.43 g, TINIscope enables unprecedented simultaneous imaging of behavior-relevant activity in up to four brain regions in mice. Proof-of-concept experiments with TINIscope recorded over 1000 neurons in four hippocampal subregions and revealed concurrent activity patterns spanning across these regions. Moreover, we explored potential multi-modal experimental designs by integrating additional modules for optogenetics, electrical stimulation or local field potential recordings. Overall, TINIscope represents a timely and indispensable tool for studying the brain-wide interregional coordination that underlies unrestrained behaviors.

13.
Trends Cell Biol ; 34(2): 85-89, 2024 02.
Article in English | MEDLINE | ID: mdl-38087709

ABSTRACT

Artificial intelligence (AI) is widely used to exploit multimodal biomedical data, yielding increasingly accurate predictions and model-agnostic interpretations that are, however, also agnostic to biological mechanisms. Combining metabolic modelling, 'omics, and imaging data via multimodal AI can generate predictions that can be interpreted mechanistically and transparently, and that therefore carry significantly higher therapeutic potential.


Subject(s)
Artificial Intelligence, Multiomics, Biological Models
14.
Front Bioinform ; 3: 1275402, 2023.
Article in English | MEDLINE | ID: mdl-37928169

ABSTRACT

Introduction: Tissue-based sampling and diagnosis involve extracting information from a limited region of tissue and assessing its diagnostic significance for the object of study. Pathologists deal with issues related to tumor heterogeneity, since analyzing a single sample does not necessarily capture a representative depiction of the cancer, and a tissue biopsy usually presents only a small fraction of the tumor. Many multiplex tissue imaging platforms (MTIs) assume that tissue microarrays (TMAs) containing small core samples of 2-dimensional (2D) tissue sections are a good approximation of bulk tumors, although tumors are not 2D. However, emerging whole slide imaging (WSI) and 3D tumor atlases that use MTIs such as cyclic immunofluorescence (CyCIF) strongly challenge this assumption. In spite of the additional insight gained by measuring the tumor microenvironment in WSI or 3D, it can be prohibitively expensive and time-consuming to process tens or hundreds of tissue sections with CyCIF. Even when resources are not limited, the criteria for region of interest (ROI) selection in tissues for downstream analysis remain largely qualitative and subjective, as stratified sampling requires knowledge of the objects and evaluation of their features. Although TMAs fail to adequately approximate whole-tissue features, a theoretical subsampling of tissue exists that can best represent the tumor in the whole slide image. Methods: To address these challenges, we propose deep learning approaches to multi-modal image translation from two angles: 1) a generative modeling approach to reconstruct a 3D CyCIF representation and 2) co-embedding of CyCIF images and hematoxylin and eosin (H&E) sections to learn multi-modal mappings via cross-domain translation for minimum representative ROI selection. Results and discussion: We demonstrate that generative modeling enables a 3D virtual CyCIF reconstruction of a colorectal cancer specimen given only a small subset of the imaging data at training time. By co-embedding histology and MTI features, we propose a simple convex optimization for objective ROI selection. We demonstrate the potential application of ROI selection and the efficiency of its performance with respect to cellular heterogeneity.
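
The paper's objective function is not reproduced here; as a hedged sketch of what a "simple convex optimization for objective ROI selection" could look like, the code below (using cvxpy) weights candidate ROIs so that their weighted mean in a co-embedded feature space matches the whole-slide mean, with an L1 penalty favoring few ROIs. The feature matrix, penalty weight, and top-k selection are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Hypothetical feature matrix: 200 candidate ROIs x 40 co-embedded features,
# plus the whole-slide mean feature vector the subsample should represent.
rng = np.random.default_rng(1)
roi_features = rng.normal(size=(200, 40))
slide_mean = roi_features.mean(axis=0)

w = cp.Variable(200, nonneg=True)                        # weight per candidate ROI
objective = cp.Minimize(
    cp.sum_squares(roi_features.T @ w - slide_mean)      # representativeness
    + 0.1 * cp.norm1(w))                                 # sparsity: prefer few ROIs
problem = cp.Problem(objective, [cp.sum(w) == 1])
problem.solve()

selected = np.argsort(-w.value)[:5]                      # top-weighted ROIs
print(selected)
```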

15.
Front Physiol ; 14: 1257465, 2023.
Article in English | MEDLINE | ID: mdl-37929207

ABSTRACT

To obtain accurate information about the outside world and make appropriate decisions, animals often combine information from different sensory pathways to form a comprehensive representation of their environment. This process of multimodal integration is poorly understood, but it is a common view that the single elements of a multimodal stimulus influence each other's perception by enhancing or suppressing their neural representation. Such interference at the neuronal level may take many forms; for instance, an enhancement might increase behavioural response times, whereas a suppression might decrease them. To investigate this in an insect behavioural model, the Western honeybee, we trained individual bees to associate a sugar reward with an odour, a light, or a combined olfactory-visual stimulus, using the proboscis extension response (PER). We precisely monitored the PER latency (the time between stimulus onset and the first response of the proboscis) by recording from the muscle M17, which innervates the proboscis. We found that odours evoked a fast response, whereas visual stimuli elicited a delayed PER. Interestingly, the combined stimulus showed a response time between those of the unimodal stimuli, suggesting that olfactory-visual integration accelerates visual responses but decelerates olfactory responses.
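
A hedged sketch of how a PER latency could be extracted from an electromyogram such as the M17 recording described above: take the first post-stimulus sample at which the rectified signal crosses a noise-based threshold. The sampling rate, threshold rule, and synthetic trace are assumptions for illustration, not the authors' analysis pipeline.

```python
import numpy as np

def per_latency(emg, stim_onset_s, fs=10_000, n_sd=5):
    """Latency (s) from stimulus onset to the first sample whose rectified
    amplitude exceeds n_sd standard deviations of the pre-stimulus baseline."""
    onset = int(stim_onset_s * fs)
    threshold = n_sd * np.std(emg[:onset])          # baseline assumed zero-mean
    crossings = np.flatnonzero(np.abs(emg[onset:]) > threshold)
    return crossings[0] / fs if crossings.size else np.nan

# Synthetic example: 2 s of noise with an M17-like burst 300 ms after
# a stimulus delivered at t = 1 s.
fs = 10_000
trace = np.random.default_rng(2).normal(0.0, 1.0, 2 * fs)
trace[13_000:13_500] += 20.0
print(per_latency(trace, stim_onset_s=1.0, fs=fs))   # ~0.3 s
```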

16.
Bioessays ; 45(12): e2300095, 2023 12.
Article in English | MEDLINE | ID: mdl-37800564

ABSTRACT

Autonomous sensory meridian response (ASMR) and affective touch (AT) are two phenomena that have been independently investigated from separate lines of research. In this article, I provide a unified theoretical framework for understanding and studying them as complementary processes. I highlight their shared biological basis and positive effects on emotional and psychophysiological regulation. Drawing from evolutionary and developmental theories, I propose that ASMR results from the development of biological mechanisms associated with early affiliative behaviour and self-regulation, similar to AT. I also propose a multimodal interoceptive mechanism underlying both phenomena, suggesting that different sensory systems could specifically respond to affective stimulation (caresses, whispers and affective faces), where the integration of those inputs occurs in the brain's interoceptive hubs, allowing physiological regulation. The implications of this proposal are discussed with a view to future research that jointly examines ASMR and AT, and their potential impact on improving emotional well-being and mental health.


Subject(s)
Meridians, Touch, Touch/physiology, Emotions
17.
J Pathol ; 261(3): 349-360, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37667855

ABSTRACT

As predictive biomarkers of response to immune checkpoint inhibitors (ICIs) remain a major unmet clinical need in patients with urothelial carcinoma (UC), we sought to identify tissue-based immune biomarkers of clinical benefit to ICIs using multiplex immunofluorescence and to integrate these findings with previously identified peripheral blood biomarkers of response. Fifty-five pretreatment and 12 paired on-treatment UC specimens were identified from patients treated with nivolumab with or without ipilimumab. Whole tissue sections were stained with a 12-plex mIF panel, including CD8, PD-1/CD279, PD-L1/CD274, CD68, CD3, CD4, FoxP3, TCF1/7, Ki67, LAG-3, MHC-II/HLA-DR, and pancytokeratin+SOX10 to identify over three million cells. Immune tissue densities were compared to progression-free survival (PFS) and best overall response (BOR) by RECIST version 1.1. Correlation coefficients were calculated between tissue-based and circulating immune populations. The frequency of intratumoral CD3+ LAG-3+ cells was higher in responders compared to nonresponders (p = 0.0001). LAG-3+ cellular aggregates were associated with response, including CD3+ LAG-3+ in proximity to CD3+ (p = 0.01). Exploratory multivariate modeling showed an association between intratumoral CD3+ LAG-3+ cells and improved PFS independent of prognostic clinical factors (log HR -7.0; 95% confidence interval [CI] -12.7 to -1.4), as well as established biomarkers predictive of ICI response (log HR -5.0; 95% CI -9.8 to -0.2). Intratumoral LAG-3+ immune cell populations warrant further study as a predictive biomarker of clinical benefit to ICIs. Differences in LAG-3+ lymphocyte populations across the intratumoral and peripheral compartments may provide complementary information that could inform the future development of multimodal composite biomarkers of ICI response. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
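
As a hedged sketch of the kind of survival model behind the reported log hazard ratios, the code below fits a Cox proportional-hazards model relating an intratumoral cell density to progression-free survival with the lifelines library; the synthetic data frame and single covariate are placeholders, not the study's data or its multivariate specification.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic placeholder cohort: density of intratumoral CD3+ LAG-3+ cells,
# PFS in months, and an event indicator (1 = progression observed).
rng = np.random.default_rng(3)
n = 55
df = pd.DataFrame({
    "cd3_lag3_density": rng.lognormal(mean=0.0, sigma=1.0, size=n),
    "pfs_months": rng.exponential(scale=8.0, size=n),
    "progressed": rng.integers(0, 2, size=n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()   # coefficients are log hazard ratios with 95% CIs
```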

18.
Cell ; 186(20): 4422-4437.e21, 2023 09 28.
Article in English | MEDLINE | ID: mdl-37774680

ABSTRACT

Recent work has identified dozens of non-coding loci for Alzheimer's disease (AD) risk, but their mechanisms and AD transcriptional regulatory circuitry are poorly understood. Here, we profile epigenomic and transcriptomic landscapes of 850,000 nuclei from prefrontal cortexes of 92 individuals with and without AD to build a map of the brain regulome, including epigenomic profiles, transcriptional regulators, co-accessibility modules, and peak-to-gene links in a cell-type-specific manner. We develop methods for multimodal integration and detecting regulatory modules using peak-to-gene linking. We show AD risk loci are enriched in microglial enhancers and for specific TFs including SPI1, ELF2, and RUNX1. We detect 9,628 cell-type-specific ATAC-QTL loci, which we integrate alongside peak-to-gene links to prioritize AD variant regulatory circuits. We report differential accessibility of regulatory modules in late AD in glia and in early AD in neurons. Strikingly, late-stage AD brains show global epigenome dysregulation indicative of epigenome erosion and cell identity loss.


Subject(s)
Alzheimer Disease, Brain, Gene Expression Regulation, Humans, Alzheimer Disease/genetics, Alzheimer Disease/pathology, Brain/pathology, Epigenome, Epigenomics, Genome-Wide Association Study
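
A hedged sketch of the general peak-to-gene linking idea mentioned in the abstract above: correlate each peak's accessibility with a gene's expression across cells or metacells, keeping peaks near the gene that correlate significantly. The matrices, the distance filter, and the cutoffs are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.stats import pearsonr

def link_peaks_to_gene(peak_access, gene_expr, peak_pos, tss_pos,
                       max_dist=500_000, r_cutoff=0.3):
    """Return (peak_index, r, p-value) for peaks within max_dist of the TSS
    whose accessibility correlates with the gene's expression across metacells."""
    links = []
    for p in range(peak_access.shape[0]):
        if abs(peak_pos[p] - tss_pos) > max_dist:
            continue
        r, pval = pearsonr(peak_access[p], gene_expr)
        if r > r_cutoff and pval < 0.05:
            links.append((p, r, pval))
    return links

# Hypothetical data: 300 peaks x 100 metacells, one gene's expression vector.
rng = np.random.default_rng(4)
access = rng.poisson(1.0, size=(300, 100)).astype(float)
expr = access[10] * 0.8 + rng.normal(size=100)        # peak 10 truly linked
positions = rng.integers(0, 1_000_000, size=300)
print(link_peaks_to_gene(access, expr, positions, tss_pos=500_000))
```
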
19.
Curr Opin Neurobiol ; 81: 102748, 2023 08.
Article in English | MEDLINE | ID: mdl-37453230

ABSTRACT

The brain's evolution and operation are inextricably linked to animal movement, and critical functions, such as motor control, spatial perception, and navigation, rely on precise knowledge of body movement. Such internal estimates of self-motion emerge from the integration of mechanosensory and visual feedback with motor-related signals. Thus, this internal representation likely depends on the activity of circuits distributed across the central nervous system. However, the circuits responsible for self-motion estimation, and the exact mechanisms by which motor-sensory coordination occurs within these circuits remain poorly understood. Recent technological advances have positioned Drosophila melanogaster as an advantageous model for investigating the emergence, maintenance, and utilization of self-motion representations during naturalistic walking behaviors. In this review, I will illustrate how the adult fly is providing insights into the fundamental problems of self-motion computations and walking control, which have relevance for all animals.


Subject(s)
Drosophila, Motion Perception, Animals, Drosophila melanogaster/physiology, Walking, Motion Perception/physiology, Movement
20.
Anat Sci Int ; 98(4): 473-481, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37340095

ABSTRACT

Recent evidence has shown that the precuneus plays a role in the pathogenesis of schizophrenia. The precuneus is a structure of the medial and posterior parietal cortex that represents a central hub for multimodal integration processes. Although neglected for several years, it is highly complex, has extensive connections with different cerebral areas, and serves as an interface between external stimuli and internal representations. In human evolution, the precuneus has increased in size and complexity, allowing the development of higher cognitive functions, such as visual-spatial ability, mental imagery, episodic memory, and other tasks involved in emotional processing and mentalization. This paper reviews the functions of the precuneus and discusses them in relation to the psychopathological aspects of schizophrenia. The different neuronal circuits in which the precuneus is involved, such as the default mode network (DMN), are described, together with alterations in its structure (grey matter) and the disconnection of its pathways (white matter).


Subject(s)
Magnetic Resonance Imaging, Schizophrenia, Humans, Brain Mapping, Schizophrenia/pathology, Parietal Lobe/pathology, Parietal Lobe/physiology, Cerebral Cortex, Neural Pathways/physiology