Results 1 - 20 of 1,652
1.
Mol Cell ; 81(7): 1425-1438.e10, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33662272

ABSTRACT

Eukaryotic elongation factor 2 (eEF2) mediates translocation of peptidyl-tRNA from the ribosomal A site to the P site to promote translational elongation. Its phosphorylation on Thr56 by its single known kinase eEF2K inactivates it and inhibits translational elongation. Extensive studies have revealed that different signal cascades modulate eEF2K activity, but whether additional factors regulate phosphorylation of eEF2 remains unclear. Here, we find that the X chromosome-linked intellectual disability protein polyglutamine-binding protein 1 (PQBP1) specifically binds to non-phosphorylated eEF2 and suppresses eEF2K-mediated phosphorylation at Thr56. Loss of PQBP1 significantly reduces general protein synthesis by suppressing translational elongation. Moreover, we show that PQBP1 regulates hippocampal metabotropic glutamate receptor-dependent long-term depression (mGluR-LTD) and mGluR-LTD-associated behaviors by suppressing eEF2K-mediated phosphorylation. Our results identify PQBP1 as a novel regulator of translational elongation and mGluR-LTD, and this newly revealed component of the eEF2K/eEF2 pathway is a promising therapeutic target for conditions such as neurological diseases, viral infection, and cancer.


Subject(s)
DNA-Binding Proteins/metabolism, Hippocampus/metabolism, Long-Term Synaptic Depression, Translational Peptide Chain Elongation, Peptide Elongation Factor 2/metabolism, Metabotropic Glutamate Receptors/biosynthesis, Animals, DNA-Binding Proteins/genetics, HEK293 Cells, HeLa Cells, Humans, Mice, Knockout Mice, Peptide Elongation Factor 2/genetics, Phosphorylation, Metabotropic Glutamate Receptors/genetics
2.
J Neurosci ; 44(24)2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38670806

ABSTRACT

Visual crowding refers to the phenomenon where a target object that is easily identifiable in isolation becomes difficult to recognize when surrounded by other stimuli (distractors). Many psychophysical studies have investigated this phenomenon and proposed alternative models for the underlying mechanisms. One prominent hypothesis, albeit with mixed psychophysical support, posits that crowding arises from the loss of information due to pooled encoding of features from target and distractor stimuli in the early stages of cortical visual processing. However, neurophysiological studies have not rigorously tested this hypothesis. We studied the responses of single neurons in macaque (one male, one female) area V4, an intermediate stage of the object-processing pathway, to parametrically designed crowded displays and texture statistics-matched metameric counterparts. Our investigations reveal striking parallels between how crowding parameters-number, distance, and position of distractors-influence human psychophysical performance and V4 shape selectivity. Importantly, we also found that enhancing the salience of a target stimulus could alleviate crowding effects in highly cluttered scenes, and this could be temporally protracted reflecting a dynamical process. Thus, a pooled encoding of nearby stimuli cannot explain the observed responses, and we propose an alternative model where V4 neurons preferentially encode salient stimuli in crowded displays. Overall, we conclude that the magnitude of crowding effects is determined not just by the number of distractors and target-distractor separation but also by the relative salience of targets versus distractors based on their feature attributes-the similarity of distractors and the contrast between target and distractor stimuli.


Subject(s)
Macaca mulatta, Neurons, Photic Stimulation, Visual Cortex, Animals, Male, Female, Visual Cortex/physiology, Photic Stimulation/methods, Neurons/physiology, Humans, Visual Pattern Recognition/physiology, Psychophysics
3.
J Neurosci ; 44(36)2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39138001

ABSTRACT

Acetylation of histone proteins by histone acetyltransferases (HATs), and the resultant change in gene expression, is a well-established mechanism necessary for long-term memory (LTM) consolidation, which is not required for short-term memory (STM). However, we previously demonstrated that the HAT p300/CBP-associated factor (PCAF) also influences hippocampus (HPC)-dependent STM in male rats. In addition to their epigenetic activity, HATs acetylate nonhistone proteins involved in nongenomic cellular processes, such as estrogen receptors (ERs). Given that ERs have rapid, nongenomic effects on HPC-dependent STM, we investigated the potential interaction between ERs and PCAF for STM mediated by the dorsal hippocampus (dHPC). Using a series of pharmacological agents administered directly into the dHPC, we reveal a functional interaction between PCAF and ERα in the facilitation of short-term object-in-place memory in male but not female rats. This interaction was specific to ERα, while ERβ agonism did not enhance STM. It was further specific to dHPC STM, as the effect was not present in the dHPC for LTM or in the perirhinal cortex. Further, while STM required local (i.e., dHPC) estrogen synthesis, the facilitatory interaction effect appeared independent of estrogens. Finally, western blot analyses demonstrated that PCAF activation in the dHPC rapidly (5 min) activated downstream estrogen-related cell signaling kinases (c-Jun N-terminal kinase and extracellular signal-related kinase). Collectively, these findings indicate that PCAF, which is typically implicated in LTM through epigenetic processes, also influences STM in the dHPC, possibly via nongenomic ER activity. Critically, this novel PCAF-ER interaction might exist as a male-specific mechanism supporting STM.


Subject(s)
Estrogen Receptor alpha, Hippocampus, Short-Term Memory, p300-CBP Transcription Factors, Animals, Male, Female, Rats, Estrogen Receptor alpha/metabolism, Estrogen Receptor alpha/genetics, p300-CBP Transcription Factors/metabolism, p300-CBP Transcription Factors/genetics, Hippocampus/metabolism, Hippocampus/drug effects, Short-Term Memory/physiology, Short-Term Memory/drug effects, Sprague-Dawley Rats, Sex Characteristics
4.
Cereb Cortex ; 34(2)2024 01 31.
Article in English | MEDLINE | ID: mdl-38314581

ABSTRACT

Neural circuits support behavioral adaptations by integrating sensory and motor information with reward and error-driven learning signals, but it remains poorly understood how these signals are distributed across different levels of the corticohippocampal hierarchy. We trained rats on a multisensory object-recognition task and compared visual and tactile responses of simultaneously recorded neuronal ensembles in somatosensory cortex, secondary visual cortex, perirhinal cortex, and hippocampus. The sensory regions primarily represented unisensory information, whereas hippocampus was modulated by both vision and touch. Surprisingly, the sensory cortices and the hippocampus coded object-specific information, whereas the perirhinal cortex did not. Instead, perirhinal cortical neurons signaled trial outcome upon reward-based feedback. A majority of outcome-related perirhinal cells responded to a negative outcome (reward omission), whereas a minority of other cells coded positive outcome (reward delivery). Our results highlight a distributed neural coding of multisensory variables in the cortico-hippocampal hierarchy. Notably, the perirhinal cortex emerges as a crucial region for conveying motivational outcomes, whereas distinct functions related to object identity are observed in the sensory cortices and hippocampus.


Subject(s)
Perirhinal Cortex, Rats, Animals, Hippocampus/physiology, Visual Perception/physiology, Parietal Lobe, Reward
5.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38879816

ABSTRACT

Observers can selectively deploy attention to regions of space, moments in time, specific visual features, individual objects, and even specific high-level categories-for example, when keeping an eye out for dogs while jogging. Here, we exploited visual periodicity to examine how category-based attention differentially modulates selective neural processing of face and non-face categories. We combined electroencephalography with a novel frequency-tagging paradigm capable of capturing selective neural responses for multiple visual categories contained within the same rapid image stream (faces/birds in Exp 1; houses/birds in Exp 2). We found that the pattern of attentional enhancement and suppression for face-selective processing is unique compared to other object categories: Where attending to non-face objects strongly enhances their selective neural signals during a later stage of processing (300-500 ms), attentional enhancement of face-selective processing is both earlier and comparatively more modest. Moreover, only the selective neural response for faces appears to be actively suppressed by attending towards an alternate visual category. These results underscore the special status that faces hold within the human visual system, and highlight the utility of visual periodicity as a powerful tool for indexing selective neural processing of multiple visual categories contained within the same image sequence.
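The frequency-tagging logic described above can be illustrated with a short, self-contained sketch: the amplitude of a periodic neural response is read out from the amplitude spectrum at the known stimulation frequency. The signal, sampling rate, and tagging frequency below are synthetic placeholders, not the authors' data or analysis code.

```python
# Minimal frequency-tagging sketch: recover the response amplitude at a known
# stimulation frequency from a (here synthetic) time series.
import numpy as np

fs = 250.0                      # sampling rate in Hz (hypothetical)
tag_freq = 6.0                  # tagging frequency of one stimulus category (hypothetical)
t = np.arange(0, 10, 1 / fs)    # 10 s of signal
rng = np.random.default_rng(2)
signal = 2.0 * np.sin(2 * np.pi * tag_freq * t) + rng.standard_normal(t.size)

amplitude = np.abs(np.fft.rfft(signal)) / t.size * 2   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
idx = np.argmin(np.abs(freqs - tag_freq))
print(f"Amplitude at {tag_freq:.1f} Hz: {amplitude[idx]:.2f}")  # ~2.0 for this synthetic signal
```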


Subject(s)
Attention, Electroencephalography, Attention/physiology, Humans, Male, Female, Young Adult, Adult, Periodicity, Facial Recognition/physiology, Photic Stimulation/methods, Visual Pattern Recognition/physiology, Brain/physiology, Visual Perception/physiology
6.
Proc Natl Acad Sci U S A ; 119(34): e2203165119, 2022 08 23.
Article in English | MEDLINE | ID: mdl-35969775

ABSTRACT

Memory consolidation is promoted by sleep. However, there is also evidence for consolidation into long-term memory during wakefulness via processes that preferentially affect nonhippocampal representations. We compared, in rats, the effects of 2-h postencoding periods of sleep and wakefulness on the formation of long-term memory for objects and their associated environmental contexts. We employed a novel-object recognition (NOR) task, using object exploration and exploratory rearing as behavioral indicators of these memories. Remote recall testing (after 1 wk) confirmed significant long-term NOR memory under both conditions, with NOR memory after sleep predicted by the occurrence of EEG spindle-slow oscillation coupling. Rats in the sleep group decreased their exploratory rearing at recall testing, revealing successful recall of the environmental context. By contrast, rats that stayed awake after encoding showed equally high levels of rearing upon remote testing as during encoding, indicating that context memory was lost. Disruption of hippocampal function during the postencoding interval (by muscimol administration) suppressed long-term NOR memory together with context memory formation when animals slept, but enhanced NOR memory when they were awake during this interval. Testing remote recall in a context different from that during encoding impaired NOR memory in the sleep condition, while exploratory rearing was increased. By contrast, NOR memory in the wake rats was preserved and actually superior to that after sleep. Our findings indicate two distinct modes of long-term memory formation: Sleep consolidation is hippocampus dependent and implicates event-context binding, whereas wake consolidation is impaired by hippocampal activation and strengthens context-independent representations.


Subject(s)
Memory Consolidation, Long-Term Memory, Sleep, Wakefulness, Animals, Memory Consolidation/physiology, Long-Term Memory/physiology, Mental Recall/physiology, Rats, Sleep/physiology, Wakefulness/physiology
7.
Nano Lett ; 24(32): 9937-9945, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39092599

ABSTRACT

The processing of multicolor noisy images in visual neuromorphic devices requires selective absorption at specific wavelengths; however, this is difficult to achieve because the spectral absorption range of a device is constrained by the type of material. Surprisingly, the absorption range of perovskite materials can be adjusted by doping. Herein, a CdCl2 co-doped CsPbBr3 nanocrystal-based photosensitive synaptic transistor (PST) is reported. Decreasing the doping concentration gradually enhances the response of the PST to short-wavelength light, so that even weak light of 40 µW·cm⁻² can be detected. Benefiting from the excellent color selectivity of the PST, the device array is applied to extracting the features of target blue items and removing red and green noise, achieving a recognition accuracy of 95% on the noisy MNIST data set. This work provides new ideas for novel transistors that integrate sensing, storage, and computing.

8.
J Neurosci ; 43(24): 4487-4497, 2023 06 14.
Article in English | MEDLINE | ID: mdl-37160361

ABSTRACT

When we fixate an object, visual information is continuously received on the retina. Several studies observed behavioral oscillations in perceptual sensitivity across such stimulus time, and these fluctuations have been linked to brain oscillations. However, whether specific brain areas show oscillations across stimulus time (i.e., different time points of the stimulus being more or less processed, in a rhythmic fashion) has not been investigated. Here, we revealed random areas of face images at random moments across time and recorded the brain activity of male and female human participants using MEG while they performed two recognition tasks. This allowed us to quantify how each snapshot of visual information coming from the stimulus is processed across time and across the brain. Oscillations across stimulus time (rhythmic sampling) were mostly visible in early visual areas, at theta, alpha, and low beta frequencies. We also found that they contributed to brain activity more than previously investigated rhythmic processing (oscillations in the processing of a single snapshot of visual information). Nonrhythmic sampling was also visible at later latencies across the visual cortex, either in the form of a transient processing of early stimulus time points or of a sustained processing of the whole stimulus. Our results suggest that successive cycles of ongoing brain oscillations process stimulus information incoming at successive moments. Together, these results advance our understanding of the oscillatory neural dynamics associated with visual processing and show the importance of considering the temporal dimension of stimuli when studying visual recognition.
SIGNIFICANCE STATEMENT Several behavioral studies have observed oscillations in perceptual sensitivity over the duration of stimulus presentation, and these fluctuations have been linked to brain oscillations. However, oscillations across stimulus time in the brain have not been studied. Here, we developed an MEG paradigm to quantify how visual information received at each moment during fixation is processed through time and across the brain. We showed that different snapshots of a stimulus are distinctly processed in many brain areas and that these fluctuations are oscillatory in early visual areas. Oscillations across stimulus time were more prevalent than previously studied oscillations across processing time. These results increase our understanding of how neural oscillations interact with the visual processing of temporal stimuli.


Subject(s)
Brain, Visual Perception, Humans, Male, Female, Recognition (Psychology), Magnetoencephalography/methods, Photic Stimulation/methods
9.
J Neurosci ; 43(3): 484-500, 2023 01 18.
Article in English | MEDLINE | ID: mdl-36535769

ABSTRACT

Drawings offer a simple and efficient way to communicate meaning. While line drawings capture only coarsely how objects look in reality, we still perceive them as resembling real-world objects. Previous work has shown that this perceived similarity is mirrored by shared neural representations for drawings and natural images, which suggests that similar mechanisms underlie the recognition of both. However, other work has proposed that representations of drawings and natural images become similar only after substantial processing has taken place, suggesting distinct mechanisms. To arbitrate between those alternatives, we measured brain responses resolved in space and time using fMRI and MEG, respectively, while human participants (female and male) viewed images of objects depicted as photographs, line drawings, or sketch-like drawings. Using multivariate decoding, we demonstrate that object category information emerged similarly fast and across overlapping regions in occipital, ventral-temporal, and posterior parietal cortex for all types of depiction, yet with smaller effects at higher levels of visual abstraction. In addition, cross-decoding between depiction types revealed strong generalization of object category information from early processing stages on. Finally, by combining fMRI and MEG data using representational similarity analysis, we found that visual information traversed similar processing stages for all types of depiction, yet with an overall stronger representation for photographs. Together, our results demonstrate broad commonalities in the neural dynamics of object recognition across types of depiction, thus providing clear evidence for shared neural mechanisms underlying recognition of natural object images and abstract drawings.
SIGNIFICANCE STATEMENT When we see a line drawing, we effortlessly recognize it as an object in the world despite its simple and abstract style. Here we asked to what extent this correspondence in perception is reflected in the brain. To answer this question, we measured how neural processing of objects depicted as photographs and line drawings with varying levels of detail (from natural images to abstract line drawings) evolves over space and time. We find broad commonalities in the spatiotemporal dynamics and the neural representations underlying the perception of photographs and even abstract drawings. These results indicate a shared basic mechanism supporting recognition of drawings and natural images.


Subject(s)
Visual Pattern Recognition, Visual Perception, Humans, Male, Female, Visual Pattern Recognition/physiology, Photic Stimulation/methods, Visual Perception/physiology, Magnetic Resonance Imaging/methods, Parietal Lobe/physiology, Brain Mapping/methods
10.
J Neurosci ; 43(13): 2424-2438, 2023 03 29.
Article in English | MEDLINE | ID: mdl-36859306

ABSTRACT

Individuals on the autism spectrum often exhibit atypicality in their sensory perception, but the neural underpinnings of these perceptual differences remain incompletely understood. One proposed mechanism is an imbalance in higher-order feedback re-entrant inputs to early sensory cortices during sensory perception, leading to increased propensity to focus on local object features over global context. We explored this theory by measuring visual evoked potentials during contour integration as considerable work has revealed that these processes are largely driven by feedback inputs from higher-order ventral visual stream regions. We tested the hypothesis that autistic individuals would have attenuated evoked responses to illusory contours compared with neurotypical controls. Electrophysiology was acquired while 29 autistic and 31 neurotypical children (7-17 years old, inclusive of both males and females) passively viewed a random series of Kanizsa figure stimuli, each consisting of four inducers that were aligned either at random rotational angles or such that contour integration would form an illusory square. Autistic children demonstrated attenuated automatic contour integration over lateral occipital regions relative to neurotypical controls. The data are discussed in terms of the role of predictive feedback processes on perception of global stimulus features and the notion that weakened "priors" may play a role in the visual processing anomalies seen in autism.
SIGNIFICANCE STATEMENT Children on the autism spectrum differ from typically developing children in many aspects of their processing of sensory stimuli. One proposed mechanism for these differences is an imbalance in higher-order feedback to primary sensory regions, leading to an increased focus on local object features rather than global context. However, systematic investigation of these feedback mechanisms remains limited. Using EEG and a visual illusion paradigm that is highly dependent on intact feedback processing, we demonstrated significant disruptions to visual feedback processing in children with autism. This provides much needed experimental evidence that advances our understanding of the contribution of feedback processing to visual perception in autism spectrum disorder.


Subject(s)
Autism Spectrum Disorder, Autistic Disorder, Illusions, Male, Female, Humans, Child, Adolescent, Visual Evoked Potentials, Sensory Feedback, Feedback, Visual Perception/physiology, Illusions/physiology
11.
J Neurosci ; 43(10): 1731-1741, 2023 03 08.
Article in English | MEDLINE | ID: mdl-36759190

ABSTRACT

Deep neural networks (DNNs) are promising models of the cortical computations supporting human object recognition. However, despite their ability to explain a significant portion of variance in neural data, the agreement between models and brain representational dynamics is far from perfect. We address this issue by asking which representational features are currently unaccounted for in neural time series data, estimated for multiple areas of the ventral stream via source-reconstructed magnetoencephalography data acquired in human participants (nine females, six males) during object viewing. We focus on the ability of visuo-semantic models, consisting of human-generated labels of object features and categories, to explain variance beyond the explanatory power of DNNs alone. We report a gradual reversal in the relative importance of DNN versus visuo-semantic features as ventral-stream object representations unfold over space and time. Although lower-level visual areas are better explained by DNN features starting early in time (at 66 ms after stimulus onset), higher-level cortical dynamics are best accounted for by visuo-semantic features starting later in time (at 146 ms after stimulus onset). Among the visuo-semantic features, object parts and basic categories drive the advantage over DNNs. These results show that a significant component of the variance unexplained by DNNs in higher-level cortical dynamics is structured and can be explained by readily nameable aspects of the objects. We conclude that current DNNs fail to fully capture dynamic representations in higher-level human visual cortex and suggest a path toward more accurate models of ventral-stream computations.
SIGNIFICANCE STATEMENT When we view objects such as faces and cars in our visual environment, their neural representations dynamically unfold over time at a millisecond scale. These dynamics reflect the cortical computations that support fast and robust object recognition. DNNs have emerged as a promising framework for modeling these computations but cannot yet fully account for the neural dynamics. Using magnetoencephalography data acquired in human observers during object viewing, we show that readily nameable aspects of objects, such as 'eye', 'wheel', and 'face', can account for variance in the neural dynamics over and above DNNs. These findings suggest that DNNs and humans may in part rely on different object features for visual recognition and provide guidelines for model improvement.
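The "variance beyond DNNs" comparison described in this abstract can be pictured, in simplified form, as a hierarchical regression: compare the cross-validated R² of a DNN-feature model with that of a combined DNN plus visuo-semantic model. The sketch below uses random stand-in data and a ridge regression chosen purely for illustration; it is not the authors' pipeline.

```python
# Illustrative unique-variance estimate: how much cross-validated R^2 do
# visuo-semantic features add over DNN features alone? All arrays are random
# stand-ins for the real MEG source estimates and model features.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_stimuli = 150
dnn_feats = rng.standard_normal((n_stimuli, 50))        # e.g., DNN layer activations
semantic_feats = rng.standard_normal((n_stimuli, 20))   # e.g., human-generated labels
neural = dnn_feats[:, 0] + semantic_feats[:, 0] + rng.standard_normal(n_stimuli)

def cv_r2(X, y):
    """Cross-validated R^2 of a ridge model."""
    return cross_val_score(RidgeCV(alphas=np.logspace(-3, 3, 13)), X, y,
                           cv=5, scoring="r2").mean()

r2_dnn = cv_r2(dnn_feats, neural)
r2_full = cv_r2(np.hstack([dnn_feats, semantic_feats]), neural)
print(f"Variance explained beyond DNN features: {r2_full - r2_dnn:.3f}")
```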


Subject(s)
Visual Pattern Recognition, Semantics, Male, Female, Humans, Neural Networks (Computer), Visual Perception, Brain, Brain Mapping/methods, Magnetic Resonance Imaging/methods
12.
J Neurosci ; 43(16): 2960-2972, 2023 04 19.
Article in English | MEDLINE | ID: mdl-36922027

ABSTRACT

The organizational principles of the object space represented in the human ventral visual cortex are debated. Here we contrast two prominent proposals that, in addition to an organization in terms of animacy, propose either a representation related to aspect ratio (stubby-spiky) or to the distinction between faces and bodies. We designed a critical test that dissociates the latter two categories from aspect ratio and investigated responses from human fMRI (participants of either sex) and deep neural networks (BigBiGAN). Representational similarity and decoding analyses showed that the object space in the occipitotemporal cortex and BigBiGAN was partially explained by animacy but not by aspect ratio. Data-driven approaches showed clusters for face and body stimuli and animate-inanimate separation in the representational space of occipitotemporal cortex and BigBiGAN, but no arrangement related to aspect ratio. In sum, the findings favor a model based on an animacy representation combined with strong selectivity for faces and bodies.
SIGNIFICANCE STATEMENT We contrasted animacy, aspect ratio, and face-body as principal dimensions characterizing object space in the occipitotemporal cortex. This is difficult to test, as typically faces and bodies differ in aspect ratio (faces are mostly stubby and bodies are mostly spiky). To dissociate the face-body distinction from the difference in aspect ratio, we created a new stimulus set in which faces and bodies have a similar and very wide distribution of values along the shape dimension of the aspect ratio. Brain imaging (fMRI) with this new stimulus set showed that, in addition to animacy, the object space is mainly organized by the face-body distinction and selectivity for aspect ratio is minor (despite its wide distribution).


Subject(s)
Visual Pattern Recognition, Visual Cortex, Humans, Visual Pattern Recognition/physiology, Brain Mapping/methods, Cerebral Cortex/physiology, Visual Cortex/diagnostic imaging, Visual Cortex/physiology, Brain, Magnetic Resonance Imaging, Photic Stimulation/methods
13.
J Neurosci ; 43(29): 5406-5413, 2023 07 19.
Article in English | MEDLINE | ID: mdl-37369591

ABSTRACT

Material properties, such as softness or stickiness, determine how an object can be used. Based on our real-life experience, we form strong expectations about how objects should behave under force, given their typical material properties. Such expectations have been shown to modulate perceptual processes, but we currently do not know how expectation influences the temporal dynamics of the cortical visual analysis for objects and their materials. Here, we tracked the neural representations of expected and unexpected material behaviors using time-resolved EEG decoding in a violation-of-expectation paradigm, where objects fell to the ground and deformed in expected or unexpected ways. Participants were 25 men and women. Our study yielded three key results: First, both objects and materials were represented rapidly and in a temporally sustained fashion. Second, objects exhibiting unexpected material behaviors were more successfully decoded than objects exhibiting expected behaviors within 190 ms after the impact, which might indicate additional processing demands when expectations are unmet. Third, general signals of expectation fulfillment that generalize across specific objects and materials were found within the first 150 ms after the impact. Together, our results provide new insights into the temporal neural processing cascade that underlies the analysis of real-world material behaviors. They reveal a sequence of predictions, with cortical signals progressing from a general signature of expectation fulfillment toward increased processing of unexpected material behaviors.
SIGNIFICANCE STATEMENT In the real world, we can make accurate predictions about how an object's material shapes its behavior: For instance, we know that cups are typically made of porcelain and shatter when we accidentally drop them. Here, we use EEG to experimentally test how expectations about material behaviors impact neural processing. We showed our participants videos of objects that exhibited expected material behaviors (e.g., a glass shattering when falling to the ground) or unexpected material behaviors (e.g., a glass melting on impact). Our results reveal a hierarchy of predictions in cortex: The visual system rapidly generates signals that index whether expectations about material behaviors are met. These signals are followed by increased processing of objects displaying unexpected material behaviors.


Subject(s)
Electroencephalography, Visual Pattern Recognition, Male, Humans, Female
14.
Neuroimage ; 293: 120626, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38677632

ABSTRACT

Spatio-temporal patterns of evoked brain activity contain information that can be used to decode and categorize the semantic content of visual stimuli. However, this procedure can be biased by low-level image features independently of the semantic content present in the stimuli, prompting the need to understand the robustness of different models regarding these confounding factors. In this study, we trained machine learning models to distinguish between concepts included in the publicly available THINGS-EEG dataset using electroencephalography (EEG) data acquired during a rapid serial visual presentation paradigm. We investigated the contribution of low-level image features to decoding accuracy in a multivariate model, utilizing broadband data from all EEG channels. Additionally, we explored a univariate model obtained through data-driven feature selection applied to the spatial and frequency domains. While the univariate models exhibited better decoding accuracy, their predictions were less robust to the confounding effect of low-level image statistics. Notably, some of the models maintained their accuracy even after random replacement of the training dataset with semantically unrelated samples that presented similar low-level content. In conclusion, our findings suggest that model optimization impacts sensitivity to confounding factors, regardless of the resulting classification performance. Therefore, the choice of EEG features for semantic decoding should ideally be informed by criteria beyond classifier performance, such as the neurobiological mechanisms under study.
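As a rough illustration of the multivariate decoding approach described above (not the authors' code), a cross-validated linear classifier can be trained on broadband, all-channel EEG features; the array shapes, labels, and random data below are placeholders for real preprocessed THINGS-EEG epochs.

```python
# Sketch of cross-validated concept decoding from EEG: X holds flattened
# (channels x timepoints) features per trial, y holds concept labels.
# Random data stand in for preprocessed epochs.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 100                 # hypothetical dimensions
X = rng.standard_normal((n_trials, n_channels * n_times))    # stand-in for real epochs
y = rng.integers(0, 2, size=n_trials)                        # stand-in for two concepts

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"Decoding accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```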


Subject(s)
Electroencephalography, Semantics, Humans, Electroencephalography/methods, Female, Male, Adult, Young Adult, Machine Learning, Brain/physiology
15.
J Neurophysiol ; 132(3): 628-642, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38958283

ABSTRACT

Humans rely on predictive and integrative mechanisms during visual processing to efficiently resolve incomplete or ambiguous sensory signals. Although initial low-level sensory data are conveyed by feedforward connections, feedback connections are believed to shape sensory processing through automatic conveyance of statistical probabilities based on prior exposure to stimulus configurations. Individuals with autism spectrum disorder (ASD) show biases in stimulus processing toward parts rather than wholes, suggesting their sensory processing may be less shaped by statistical predictions acquired through prior exposure to global stimulus properties. Investigations of illusory contour (IC) processing in neurotypical (NT) adults have established a well-tested marker of contour integration characterized by a robust modulation of the visually evoked potential (VEP)-the IC-effect-that occurs over lateral occipital scalp during the timeframe of the visual N1 component. Converging evidence strongly supports the notion that this IC-effect indexes a signal with significant feedback contributions. Using high-density VEPs, we compared the IC-effect in 6- to 17-yr-old children with ASD (n = 32) or NT development (n = 53). Both groups of children generated an IC-effect that was equivalent in amplitude. However, the IC-effect notably onset 21 ms later in ASD, even though initial VEP afference was identical across groups. This suggests that feedforward information predominated during perceptual processing for 15% longer in ASD compared with NT children. This delay in the feedback-dependent IC-effect, in the context of known developmental differences between feedforward and feedback fibers, suggests a potential pathophysiological mechanism of visual processing in ASD, whereby ongoing stimulus processing is less shaped by visual feedback.
NEW & NOTEWORTHY Children with autism often present with an atypical visual perceptual style that emphasizes parts or details over the whole. Using electroencephalography (EEG), this study identifies delays in the visual feedback from higher-order sensory brain areas to primary sensory regions. Because this type of visual feedback is thought to carry information about prior sensory experiences, individuals with autism may have difficulty efficiently using prior experience or putting together parts into a whole to help make sense of incoming new visual information. This provides empirical neural evidence to support theories of disrupted sensory perception mechanisms in autism.


Subject(s)
Autism Spectrum Disorder, Visual Evoked Potentials, Humans, Adolescent, Child, Male, Visual Evoked Potentials/physiology, Female, Autism Spectrum Disorder/physiopathology, Electroencephalography, Form Perception/physiology, Visual Perception/physiology
16.
Hippocampus ; 34(2): 88-99, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38073523

ABSTRACT

The hippocampal formation is vulnerable to the process of normal aging. In humans, the extent of this age-related deterioration varies among individuals. Long-Evans rats replicate these individual differences as they age, and therefore they serve as a valuable model system to study aging in the absence of neurodegenerative diseases. In the Morris water maze, aged memory-unimpaired (AU) rats navigate to remembered goal locations as effectively as young rats and demonstrate minimal alterations in physiological markers of synaptic plasticity, whereas aged memory-impaired (AI) rats show impairments in both spatial navigation skills and cellular and molecular markers of plasticity. The present study investigates whether another cognitive domain is affected similarly to navigation in aged Long-Evans rats. We tested the ability of young, AU, and AI animals to recognize novel object-place-context (OPC) configurations and found that performance on the novel OPC recognition paradigm was significantly correlated with performance on the Morris water maze. In the first OPC test, young and AU rats, but not AI rats, successfully recognized and preferentially explored objects in novel OPC configurations. In a second test with new OPC configurations, all age groups showed similar OPC associative recognition memory. The results demonstrated similarities in the behavioral expression of associative, episodic-like memory between young and AU rats and revealed age-related, individual differences in functional decline in both navigation and episodic-like memory abilities.


Subject(s)
Hippocampus, Spatial Learning, Humans, Rats, Animals, Aged, Long-Evans Rats, Maze Learning/physiology, Hippocampus/physiology, Mental Recall, Aging/physiology
17.
Eur J Neurosci ; 59(7): 1743-1752, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38238909

ABSTRACT

The perirhinal cortex has long been considered crucial for object recognition memory (ORM). However, using the ORM enhancer RGS14414 as a gain-of-function tool, we show here that the frontal association cortex, and not the perirhinal cortex, is essential for ORM of objects with complex features consisting of detailed drawings on the object surface (complex ORM). Expression of RGS14414 in the rat frontal association cortex induced the formation of long-term complex ORM, whereas expression of the same memory enhancer in the perirhinal cortex failed to produce this effect. Instead, RGS14414 expression in the perirhinal cortex caused the formation of ORM of objects with simple features consisting only of object shape (simple ORM). Further, selective elimination of frontal association cortex neurons by treatment with the immunotoxin Ox7-SAP completely abrogated the formation of complex ORM. Thus, our results suggest that the frontal association cortex plays a key role in processing high-order recognition memory information in the brain.


Subject(s)
Recognition (Psychology), Visual Perception, Rats, Animals, Recognition (Psychology)/physiology, Brain, Long-Term Memory
18.
Biochem Biophys Res Commun ; 710: 149872, 2024 05 28.
Article in English | MEDLINE | ID: mdl-38593621

ABSTRACT

Protein modifications make important contributions to memory formation. Protein acetylation is a post-translational modification that regulates memory formation, and the level of acetylation is determined by the relative activities of acetylases and deacetylases. Crebinostat is a histone deacetylase inhibitor. Here we show that, in an object recognition task, crebinostat facilitates memory formation induced by weak training. Furthermore, the compound enhances acetylation of α-tubulin and reduces the level of histone deacetylase 6, an α-tubulin deacetylase. These results suggest that enhanced acetylation of α-tubulin by crebinostat contributes to its facilitatory effect on memory formation.


Subject(s)
Histone Deacetylases, Tubulin (Protein), Tubulin (Protein)/metabolism, Histone Deacetylases/metabolism, Histone Deacetylase 6/metabolism, Biphenyl Compounds, Hydrazines, Histone Deacetylase Inhibitors/pharmacology, Acetylation
19.
Neuropsychol Rev ; 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38907905

ABSTRACT

Object recognition memory allows us to identify previously seen objects. This type of declarative memory is a primary process for learning. Despite its crucial role in everyday life, object recognition has received far less attention in ADHD research compared to verbal recognition memory. Not only are the published studies few, their results have also been inconsistent, possibly due to the diversity of tasks used to assess recognition memory. In the present meta-analysis, we collected studies from the Web of Science, Scopus, PubMed, and Google Scholar databases up to May 2023. We compiled studies that assessed visual object recognition memory with specific visual recognition tests (delayed match-to-sample tasks) in children and adolescents diagnosed with ADHD. A total of 28 studies with 1619 participants diagnosed with ADHD were included. The studies were assessed for risk of bias using the QUADAS-2 tool, and for each study Cohen's d was calculated to estimate the magnitude of the difference in performance between groups. As a main result, we found worse recognition memory performance in ADHD participants compared with their matched controls (overall Cohen's d ~ 0.492). We also observed greater heterogeneity in the magnitude of this deficit among medicated participants compared to non-medicated individuals, as well as a smaller deficit in studies with a higher proportion of female participants. The magnitude of the object recognition memory impairment in ADHD also seems to depend on the assessment method used.
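For reference, the standardized mean difference (Cohen's d) used here is conventionally computed from the two group means and a pooled standard deviation. The sketch below uses made-up group statistics purely to show the calculation; it does not reproduce any values from the meta-analysis.

```python
# Cohen's d with a pooled standard deviation; the inputs are illustrative only.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two independent groups."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical example: control vs. ADHD group on a recognition-memory score
print(round(cohens_d(75.0, 10.0, 30, 70.0, 10.5, 30), 3))  # ~0.488
```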

20.
Psychol Sci ; 35(7): 814-824, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38889285

ABSTRACT

Despite the intuitive feeling that our visual experience is coherent and comprehensive, the world is full of ambiguous and indeterminate information. Here we explore how the visual system might take advantage of ambient sounds to resolve this ambiguity. Young adults (ns = 20-30) were tasked with identifying an object slowly fading in through visual noise while a task-irrelevant sound played. We found that participants demanded more visual information when the auditory object was incongruent with the visual object compared to when it was not. Auditory scenes, which are only probabilistically related to specific objects, produced similar facilitation even for unheard objects (e.g., a bench). Notably, these effects traverse categorical and specific auditory and visual-processing domains as participants performed across-category and within-category visual tasks, underscoring cross-modal integration across multiple levels of perceptual processing. To summarize, our study reveals the importance of audiovisual interactions to support meaningful perceptual experiences in naturalistic settings.


Subject(s)
Auditory Perception, Visual Perception, Humans, Auditory Perception/physiology, Young Adult, Adult, Male, Female, Visual Perception/physiology, Noise, Acoustic Stimulation