Results 1 - 20 of 36,824
1.
PLoS One ; 19(9): e0304285, 2024.
Article in English | MEDLINE | ID: mdl-39241039

ABSTRACT

Art research has long aimed to unravel the complex associations between specific attributes, such as color, complexity, and emotional expressiveness, and art judgments, including beauty, creativity, and liking. However, the fundamental distinction between attributes as inherent characteristics or features of the artwork and judgments as subjective evaluations remains an active topic of investigation. This paper reviews the literature of the last half century to identify key attributes, and employs machine learning, specifically Gradient Boosted Decision Trees (GBDT), to predict 13 art judgments from 17 attributes. Ratings from 78 art-novice participants were collected for 54 Western artworks. Our GBDT models significantly predicted all 13 judgments. Notably, judged creativity and disturbing/irritating judgments showed the highest predictability, with the models explaining 31% and 32% of the variance, respectively. The attributes emotional expressiveness, valence, symbolism, and complexity emerged as consistent and significant contributors to the models' performance. Content-representational attributes played a more prominent role than formal-perceptual attributes. Moreover, in some cases we found non-linear relationships between attributes and judgments, with sudden inclines or declines around medium levels of the rating scales. By uncovering these underlying patterns and dynamics in art judgment behavior, our research provides valuable insights to advance the understanding of aesthetic experiences of visual art, inform cultural practices, and inspire future research in the field of art appreciation.
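As a rough sketch of the modeling approach named above, squared-loss gradient boosting with depth-1 trees ("stumps") can be written in a few lines of plain Python: each round fits a stump to the current residuals and adds it, scaled by a learning rate, to the ensemble. Everything below is illustrative, not from the paper: the data are synthetic, and the single predictor stands in for the 17 rated attributes.

```python
def fit_stump(xs, ys):
    """Find the single threshold split on x minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # degenerate split, skip
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def gbdt_fit(xs, ys, n_rounds=50, lr=0.1):
    """Boost stumps on residuals; return the ensemble predictor."""
    base = sum(ys) / len(ys)          # start from the mean rating
    stumps, preds = [], [base] * len(xs)
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# Toy data: a "creativity" judgment rising with one attribute rating.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [1.0, 1.2, 2.1, 2.3, 4.0, 4.2, 5.9, 6.1]
model = gbdt_fit(xs, ys)
```

Because each stump only corrects the previous rounds' residuals, the ensemble can capture the kind of sudden non-linear inclines around mid-scale that the paper reports, which a single linear fit would smooth over.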


Subject(s)
Art , Judgment , Machine Learning , Humans , Female , Male , Adult , Emotions , Young Adult , Visual Perception/physiology , Creativity
2.
Sci Rep ; 14(1): 20492, 2024 09 03.
Article in English | MEDLINE | ID: mdl-39242623

ABSTRACT

A social individual must manage the wealth of complex information in their environment, relative to their own goals, in order to obtain the information that is relevant. This paper presents a neural architecture aiming to reproduce, in robots, the attention mechanisms (alerting/orienting/selecting) that humans deploy efficiently during audiovisual tasks. We evaluated the system on its ability to identify relevant sources of information in the faces of subjects uttering vowels. We propose a developmental model of audio-visual attention (MAVA) combining Hebbian learning with a competition between saliency maps based on visual movement and audio energy. MAVA effectively combines bottom-up and top-down information to orient the system toward pertinent areas. The system has several advantages, including online and autonomous learning, low computation time, and robustness to environmental noise. MAVA outperforms other artificial models for detecting speech sources under various noise conditions.
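The core ingredients named above, competing saliency maps fused under Hebbian-tuned weights, can be caricatured in a few lines. This is our own highly simplified illustration (map shapes, update rule, normalization are all assumptions), not the published MAVA architecture.

```python
def fuse(motion, audio, w):
    """Weighted combination of two equally sized saliency maps."""
    return [w["motion"] * m + w["audio"] * a for m, a in zip(motion, audio)]

def hebbian_step(motion, audio, w, lr=0.05):
    """Hebbian-style update: each map's weight grows with the correlation
    between that map and the fused output, then weights are renormalized."""
    fused = fuse(motion, audio, w)
    w["motion"] += lr * sum(m * f for m, f in zip(motion, fused))
    w["audio"] += lr * sum(a * f for a, f in zip(audio, fused))
    total = w["motion"] + w["audio"]
    w["motion"] /= total
    w["audio"] /= total
    return fused

# A speaking face: motion and audio saliency peak at the same location (index 2).
motion = [0.1, 0.2, 0.9, 0.1]
audio = [0.0, 0.1, 0.8, 0.2]
w = {"motion": 0.5, "audio": 0.5}
for _ in range(20):
    fused = hebbian_step(motion, audio, w)
best = max(range(len(fused)), key=fused.__getitem__)
print(best)  # attention orients to index 2, where both maps agree
```

The point of the sketch is the mechanism, not the numbers: when the two modalities agree on a location, the fused map reinforces it, which is what lets such a system pick out the speech source under noise.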


Subject(s)
Attention , Robotics , Humans , Robotics/methods , Attention/physiology , Infant , Learning/physiology , Visual Perception/physiology , Language Development , Auditory Perception/physiology , Language
3.
Sci Rep ; 14(1): 20852, 2024 09 06.
Article in English | MEDLINE | ID: mdl-39242827

ABSTRACT

When studying working memory (WM), the 'slot model' and the 'resource model' are the two main theories used to describe how information retention occurs. The slot model holds that WM capacity consists of a fixed number of predefined slots available for information storage, implying a binary outcome at recall: an item is either wholly maintained within a slot or forgotten. The resource model takes a resolution-based approach, proposing a continuous resource that can be distributed among the items held in WM. Recently, hybrid models have been introduced, suggesting that WM may not strictly conform to a single model. Accordingly, to understand the relationship between two of the most widely used paradigms in WM evaluation, we performed a correlational assessment of two psychophysics tasks: an analog recall paradigm with sequential bar presentation and a delayed match-to-sample (DMS) task with checkerboard stimuli. Our study revealed significant correlations between WM performance in the DMS task and recall error, precision, and sources of error in the sequential paradigm. Overall, the findings emphasize the importance of considering both tasks when studying WM processes, as they shed light on the slot-versus-resource debate by revealing overlapping elements in the two theories and in the tasks used to evaluate WM capacity.


Subject(s)
Memory, Short-Term , Mental Recall , Memory, Short-Term/physiology , Humans , Male , Female , Adult , Mental Recall/physiology , Young Adult , Visual Perception/physiology , Models, Psychological
4.
Curr Biol ; 34(17): R831-R833, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39255769

ABSTRACT

'Jump scares' are particularly robust when visuals are paired with coherent sound. A new study demonstrates that connectivity between the superior colliculus and parabigeminal nucleus generates multimodal enhancement of visually triggered defensiveness, revealing a novel multisensory threat augmentation mechanism.


Subject(s)
Superior Colliculi , Animals , Superior Colliculi/physiology , Mesencephalon/physiology , Visual Perception/physiology , Auditory Perception/physiology , Humans
5.
J Neuroeng Rehabil ; 21(1): 155, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39252006

ABSTRACT

BACKGROUND: Planning and executing movements requires the integration of different sensory modalities, such as vision and proprioception. However, neurological diseases like stroke can lead to full or partial loss of proprioception, resulting in impaired movements. Recent work has focused on providing additional sensory feedback to patients to compensate for the sensory loss, with vibrotactile stimulation proving a viable option as it is inexpensive and easy to implement. Here, we test how such vibrotactile information can be integrated with visual signals to estimate the spatial location of a reach target. METHODS: We used a center-out reach paradigm with 31 healthy human participants to investigate how artificial vibrotactile stimulation can be integrated with visual-spatial cues indicating target location. Specifically, we provided multisite vibrotactile stimulation to the moving dominant arm using eccentric rotating mass (ERM) motors. As the integration of inputs across multiple sensory modalities becomes especially relevant when one of them is uncertain, we additionally modulated the reliability of the visual cues. We then compared the weighting of vibrotactile and visual inputs as a function of visual uncertainty to predictions from the maximum likelihood estimation (MLE) framework to determine whether participants achieved quasi-optimal integration. RESULTS: Our results show that participants could estimate target locations based on vibrotactile instructions. After short training, combined visual and vibrotactile cues led to higher hit rates and reduced reach errors when visual cues were uncertain. Additionally, we observed lower reaction times in trials with low visual uncertainty when vibrotactile stimulation was present. Using MLE predictions, we found that integration of vibrotactile and visual cues followed optimal (MLE) predictions when the vibrotactile cues required the detection of one or two active motors. However, when estimating target location required discriminating the intensities of two cues, integration violated MLE predictions. CONCLUSION: We conclude that participants can quickly learn to integrate visual and artificial vibrotactile information. Using additional vibrotactile stimulation may therefore serve as a promising way to improve rehabilitation or the control of prosthetic devices by patients suffering from proprioceptive loss.
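For reference, the MLE benchmark used in studies like this one is the standard minimum-variance cue-combination rule: each cue is weighted by its reliability (inverse variance). A minimal sketch in plain Python, with illustrative numbers rather than the study's data:

```python
def mle_combine(mu_vis, var_vis, mu_vib, var_vib):
    """Fuse a visual and a vibrotactile location estimate.

    Each cue is weighted by its inverse variance; the fused estimate is
    always at least as reliable as the better single cue.
    """
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vib)
    w_vib = 1 - w_vis
    mu = w_vis * mu_vis + w_vib * mu_vib
    var = (var_vis * var_vib) / (var_vis + var_vib)
    return mu, var

# Reliable vision (variance 1) vs. noisier vibrotaction (variance 4):
# the fused estimate sits close to the visual one.
mu, var = mle_combine(mu_vis=10.0, var_vis=1.0, mu_vib=14.0, var_vib=4.0)
print(mu, var)  # fused mean ~10.8, fused variance 0.8
```

The study's test is whether the empirically observed cue weights shift with visual uncertainty as this rule predicts; the reported violation under intensity discrimination means the measured weights departed from the `w_vis` computed this way.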


Subject(s)
Cues , Psychomotor Performance , Vibration , Visual Perception , Humans , Male , Female , Adult , Visual Perception/physiology , Psychomotor Performance/physiology , Young Adult , Feedback, Sensory/physiology , Proprioception/physiology , Touch Perception/physiology , Uncertainty , Physical Stimulation/methods , Space Perception/physiology , Movement/physiology
6.
Sci Rep ; 14(1): 20923, 2024 09 09.
Article in English | MEDLINE | ID: mdl-39251764

ABSTRACT

Does congruence between auditory and visual modalities affect aesthetic experience? While cross-modal correspondences between vision and hearing are well-documented, previous studies show conflicting results regarding whether audiovisual correspondence affects subjective aesthetic experience. Here, in collaboration with the Kentler International Drawing Space (NYC, USA), we depart from previous research by using music specifically composed to pair with visual art in the professionally-curated Music as Image and Metaphor exhibition. Our pre-registered online experiment consisted of 4 conditions: Audio, Visual, Audiovisual-Intended (artist-intended pairing of art/music), and Audiovisual-Random (random shuffling). Participants (N = 201) were presented with 16 pieces and could click to proceed to the next piece whenever they liked. We used time spent as an implicit index of aesthetic interest. Additionally, after each piece, participants were asked about their subjective experience (e.g., feeling moved). We found that participants spent significantly more time with Audio pieces, followed by Audiovisual, followed by Visual; however, they felt most moved in the Audiovisual (bi-modal) conditions. Ratings of audiovisual correspondence were significantly higher for the Audiovisual-Intended than for the Audiovisual-Random condition; interestingly, though, there were no significant differences between the intended and random conditions on any other subjective rating scale, or in time spent. Collectively, these results call into question the relationship between cross-modal correspondence and aesthetic appreciation. Additionally, the results complicate the use of time spent as an implicit measure of aesthetic experience.


Subject(s)
Auditory Perception , Esthetics , Music , Visual Perception , Humans , Music/psychology , Female , Esthetics/psychology , Male , Adult , Visual Perception/physiology , Auditory Perception/physiology , Young Adult , Art , Photic Stimulation , Acoustic Stimulation , Adolescent
7.
Trends Neurosci Educ ; 36: 100238, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39266122

ABSTRACT

BACKGROUND: Problem-solving and learning in mathematics involve sensory perception and processing. Multisensory integration may contribute by enhancing sensory estimates. This study assesses whether combining visual and somatosensory information improves elementary students' perimeter and area estimates. METHODS: Eighty-seven 4th graders compared rectangles with respect to area or perimeter, either using visual observation alone or with additional somatosensory information. Three experiments targeted different aspects of the task. Statistical analyses tested success rates and response times. RESULTS: Contrary to expectations, adding somatosensory information did not boost success rates for area or perimeter comparisons. Response times even increased when somatosensory information was added. Children's difficulty in accurately tracing figures negatively impacted the success rate of area comparisons. DISCUSSION: The results suggest that visual observation alone suffices for 4th graders to accurately estimate and compare the area and perimeter of rectangles. IMPLICATIONS: Careful deliberation is recommended before including somatosensory information in mathematical tasks concerning perimeter and area estimation for rectangles.


Subject(s)
Mathematics , Reaction Time , Schools , Visual Perception , Humans , Child , Female , Male , Reaction Time/physiology , Visual Perception/physiology , Problem Solving , Learning/physiology , Touch Perception/physiology
8.
J Vis ; 24(9): 8, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39254964

ABSTRACT

Classic change blindness is the phenomenon whereby seemingly obvious changes that coincide with visual disruptions (such as blinks or brief blanks) go unnoticed by an attentive observer. Some early work into the causes of classic change blindness suggested that any pre-change stimulus representation is overwritten by a representation of the altered post-change stimulus, preventing change detection. However, recent work revealed that, even when observers do maintain memory representations of both the pre- and post-change stimulus states, they can still miss the change, suggesting that change blindness can also arise from a failure to compare the stored representations. Here, we studied slow change blindness, a related phenomenon that occurs even in the absence of visual disruptions when the change unfolds sufficiently slowly, to determine whether it could be explained by the conclusions drawn from classic change blindness. Across three slow change blindness experiments, we found that observers who consistently failed to notice the change had access to at least two memory representations of the changing display. One representation was precise but short-lived: a detailed yet fragile representation of the more recent stimulus states. The other lasted longer but was fairly general: stable, yet too coarse to differentiate the various stages of the change. These findings suggest that, although multiple representations are formed, the failure-to-compare account might not explain slow change blindness; even if a comparison were made, the representations would be too sparse (longer-term stores) or too fragile (short-lived stores) for the comparison to reveal the change.


Subject(s)
Photic Stimulation , Humans , Photic Stimulation/methods , Attention/physiology , Memory/physiology , Adult , Visual Perception/physiology , Young Adult , Male , Female
9.
J Vis ; 24(9): 9, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39259169

ABSTRACT

The contents of visual perception are inherently dynamic-just as we experience objects in space, so too events in time. The boundaries between these events have downstream consequences. For example, memory for incidentally encountered items is impaired when walking through a doorway, perhaps because event boundaries serve as cues to clear obsolete information from previous events. Although this kind of "memory flushing" can be adaptive, work on visual working memory (VWM) has focused on the opposite function of active maintenance in the face of distraction. How do these two cognitive operations interact? In this study, observers watched animations in which they walked through three-dimensionally rendered rooms with picture frames on the walls. Within the frames, observers either saw images that they had to remember ("encoding") or recalled images they had seen in the immediately preceding frame ("test"). Half of the time, a doorway was crossed during the delay between encoding and test. Across experiments, there was a consistent memory decrement for the first image encoded in the doorway compared to the no-doorway condition while equating time elapsed, distance traveled, and distractibility of the doorway. This decrement despite top-down VWM efforts highlights the power of event boundaries to structure what and when we forget.


Subject(s)
Memory, Short-Term , Humans , Memory, Short-Term/physiology , Young Adult , Visual Perception/physiology , Male , Attention/physiology , Female , Photic Stimulation/methods , Adult , Cues
10.
J Exp Child Psychol ; 248: 106046, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39241321

ABSTRACT

Learning in the everyday environment often requires the flexible integration of relevant multisensory information. Previous research has demonstrated preverbal infants' capacity to extract an abstract rule from audiovisual temporal sequences matched in temporal synchrony. Interestingly, this capacity was recently reported to be modulated by crossmodal correspondence beyond spatiotemporal matching (e.g., consistent facial emotional expressions or articulatory mouth movements matched with sound). To investigate whether such modulatory influence extends to non-social, non-communicative stimuli, we conducted a critical test using audiovisual stimuli free of social information: visually upward (or downward) moving objects paired with a tone of congruent (ascending) or incongruent (descending) pitch. East Asian infants (8-10 months old) from a metropolitan area in Asia demonstrated successful abstract rule learning in the congruent audiovisual condition and weaker learning in the incongruent condition. This implies that preverbal infants use crossmodal dynamic pitch-height correspondence to integrate multisensory information before rule extraction. The result confirms that preverbal infants are ready to use non-social, non-communicative information to serve cognitive functions such as rule extraction in a multisensory context.


Subject(s)
Pitch Perception , Humans , Infant , Male , Female , Pitch Perception/physiology , Visual Perception/physiology , Learning/physiology , Child Development/physiology , Communication , Photic Stimulation , Acoustic Stimulation
11.
Cereb Cortex ; 34(9)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39233375

ABSTRACT

Our understanding of the neurobiology underlying cognitive dysfunction in persons with cerebral palsy is very limited, especially in the neurocognitive domain of visual selective attention. This investigation utilized magnetoencephalography and an Eriksen arrow-based flanker task to quantify the dynamics underlying selective attention in a cohort of youth and adults with cerebral palsy (n = 31; age range = 9 to 47 yr) and neurotypical controls (n = 38; age range = 11 to 49 yr). The magnetoencephalography data were transformed into the time-frequency domain to identify neural oscillatory responses and imaged using a beamforming approach. The behavioral results indicated that all participants exhibited a flanker effect (greater response time for the incongruent compared to congruent condition) and that individuals with cerebral palsy were slower and less accurate during task performance. We computed interference maps to focus on the attentional component and found aberrant alpha (8 to 14 Hz) oscillations in the right primary visual cortices in the group with cerebral palsy. Alpha and theta (4 to 7 Hz) oscillations were also seen in the left and right insula, and these oscillations varied with age across all participants. Overall, persons with cerebral palsy exhibit deficiencies in the cortical dynamics serving visual selective attention, but these aberrations do not appear to be uniquely affected by age.


Subject(s)
Alpha Rhythm , Attention , Cerebral Palsy , Magnetoencephalography , Humans , Adult , Cerebral Palsy/physiopathology , Adolescent , Male , Female , Young Adult , Attention/physiology , Child , Middle Aged , Alpha Rhythm/physiology , Visual Perception/physiology , Photic Stimulation/methods , Reaction Time/physiology
12.
BMC Psychol ; 12(1): 469, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39223690

ABSTRACT

In environments teeming with distractions, the ability to selectively focus on relevant information is crucial for advanced cognitive processing. Existing research using event-related potential (ERP) technology has shown active suppression of irrelevant stimuli during the consolidation phase of visual working memory (VWM). In previous studies, participants were always given sufficient time to consolidate VWM while suppressing distracting information. However, it remains unclear whether the suppression of irrelevant distractors requires continuous effort throughout their presence or whether suppression is only necessary after the consolidation of task-relevant information. To address this question, our study examines whether distractor suppression is necessary when consolidation time is limited, investigating the effect of varying presentation durations on distractor filtering in VWM. We tasked participants with memorizing two color stimuli while ignoring four distractors, presented for either 50 ms or 200 ms. Using ERP measures, we found that the distractor-evoked distractor positivity (PD) amplitude is larger for longer presentation durations than for shorter ones. These findings underscore the significant impact of presentation duration on the efficacy of distractor suppression in VWM: prolonged exposure results in a stronger suppression effect on distractors. This study sheds light on the temporal dynamics of attention and memory, emphasizing the critical role of stimulus timing in cognitive tasks, and provides valuable insights into the mechanisms underlying VWM with significant implications for models of attention and memory.


Subject(s)
Attention , Electroencephalography , Evoked Potentials , Memory, Short-Term , Visual Perception , Humans , Memory, Short-Term/physiology , Attention/physiology , Male , Female , Evoked Potentials/physiology , Young Adult , Adult , Visual Perception/physiology , Time Factors , Photic Stimulation
13.
J Vis ; 24(9): 1, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39226069

ABSTRACT

Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding concerning the use of memory resources and the role of contextual objects during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for those objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, we found that body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities were higher, but only for those objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search and also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.


Subject(s)
Eye Movements , Spatial Memory , Virtual Reality , Humans , Female , Male , Young Adult , Adult , Eye Movements/physiology , Spatial Memory/physiology , Space Perception/physiology , Head Movements/physiology , Photic Stimulation/methods , Visual Perception/physiology , Reaction Time/physiology
14.
Psychol Sci ; 35(9): 1035-1047, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39222160

ABSTRACT

Statistical learning is a powerful mechanism that enables the rapid extraction of regularities from sensory inputs. Although numerous studies have established that statistical learning serves a wide range of cognitive functions, it remains unknown whether statistical learning impacts conscious access. To address this question, we applied multiple paradigms in a series of experiments (N = 153 adults): Two reaction-time-based breaking continuous flash suppression (b-CFS) experiments showed that probable objects break through suppression faster than improbable objects. A preregistered accuracy-based b-CFS experiment showed higher localization accuracy for suppressed probable (versus improbable) objects under identical presentation durations, thereby excluding the possibility of processing differences emerging after conscious access (e.g., criterion shifts). Consistent with these findings, a supplemental visual-masking experiment reaffirmed higher localization sensitivity to probable objects over improbable objects. Together, these findings demonstrate that statistical learning alters the competition for scarce conscious resources, thereby potentially contributing to established effects of statistical learning on higher-level cognitive processes that require consciousness.


Subject(s)
Awareness , Reaction Time , Humans , Awareness/physiology , Adult , Male , Female , Young Adult , Reaction Time/physiology , Learning/physiology , Consciousness/physiology , Visual Perception/physiology , Adolescent
15.
Sci Rep ; 14(1): 21445, 2024 09 13.
Article in English | MEDLINE | ID: mdl-39271909

ABSTRACT

Determining the extent to which the perceptual world can be recovered from language is a longstanding problem in philosophy and cognitive science. We show that state-of-the-art large language models can unlock new insights into this problem by providing a lower bound on the amount of perceptual information that can be extracted from language. Specifically, we elicit pairwise similarity judgments from GPT models across six psychophysical datasets. The judgments are significantly correlated with human data across all domains, recovering well-known representations like the color wheel and pitch spiral. Surprisingly, we find that a model co-trained on vision and language (GPT-4) does not necessarily show improvements specific to the visual modality, and yields predictions highly correlated with human data regardless of whether it receives direct visual input or purely textual descriptors. To study the impact of specific languages, we also apply the models to a multilingual color-naming task. We find that GPT-4 replicates cross-linguistic variation in English and Russian, illuminating the interaction of language and perception.


Subject(s)
Judgment , Language , Humans , Judgment/physiology , Visual Perception/physiology
16.
Proc Natl Acad Sci U S A ; 121(37): e2408067121, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39226351

ABSTRACT

Humans explore visual scenes by alternating short fixations with saccades directing the fovea to points of interest. During fixation, the visual system not only examines the foveal stimulus at high resolution, but it also processes the extrafoveal input to plan the next saccade. Although foveal analysis and peripheral selection occur in parallel, little is known about the temporal dynamics of foveal and peripheral processing upon saccade landing, during fixation. Here we investigate whether the ability to localize changes across the visual field differs depending on when the change occurs during fixation, and on whether the change localization involves foveal, extrafoveal processing, or both. Our findings reveal that the ability to localize changes in peripheral areas of the visual field improves as a function of time after fixation onset, whereas localization accuracy for foveal stimuli remains approximately constant. Importantly, this pattern holds regardless of whether individuals monitor only foveal or peripheral stimuli, or both simultaneously. Altogether, these results show that the visual system is more attuned to the foveal input early on during fixation, whereas change localization for peripheral stimuli progressively improves throughout fixation, possibly as a consequence of an increased readiness to plan the next saccade.


Subject(s)
Fixation, Ocular , Fovea Centralis , Saccades , Visual Fields , Humans , Fixation, Ocular/physiology , Fovea Centralis/physiology , Saccades/physiology , Male , Female , Adult , Visual Fields/physiology , Young Adult , Photic Stimulation/methods , Visual Perception/physiology
17.
Curr Biol ; 34(18): R866-R868, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39317159

ABSTRACT

Mosquitoes are notorious for swarming. A new study shows that multi-sensory integration, in particular the way that male mosquitoes' behavioural responses to visual stimuli are modulated by female flight tones, plays a key part in this swarming behaviour.


Subject(s)
Culicidae , Animals , Female , Male , Culicidae/physiology , Visual Perception/physiology , Flight, Animal/physiology , Auditory Perception/physiology
18.
Curr Biol ; 34(18): 4184-4196.e7, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39255789

ABSTRACT

Human primary visual cortex (V1) responds more strongly, or resonates, when exposed to ∼10, ∼15-20, and ∼40-50 Hz rhythmic flickering light. Full-field flicker also evokes the perception of hallucinatory geometric patterns, which mathematical models explain as standing-wave formations emerging from periodic forcing at resonant frequencies of the simulated neural network. However, empirical evidence for such flicker-induced standing waves in the visual cortex was missing. We recorded cortical responses to flicker in awake mice using high-spatial-resolution widefield imaging in combination with high-temporal-resolution glutamate-sensing fluorescent reporter (iGluSnFR). The temporal frequency tuning curves in the mouse V1 were similar to those observed in humans, showing a banded structure with multiple resonance peaks (8, 15, and 33 Hz). Spatially, all flicker frequencies evoked responses in V1 corresponding to retinotopic stimulus location, but some evoked additional peaks. These flicker-induced cortical patterns displayed standing-wave characteristics and matched linear wave equation solutions in an area restricted to the visual cortex. Taken together, the interaction of periodic traveling waves with cortical area boundaries leads to spatiotemporal activity patterns that may affect perception.
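For orientation, the "linear wave equation solutions" referred to above are the familiar standing-wave modes of a bounded medium; in generic notation (ours, not the authors'):

```latex
% Generic linear wave equation and a standing-wave mode on a bounded
% domain; notation is illustrative, not taken from the paper.
\frac{\partial^{2} u}{\partial t^{2}} = c^{2}\,\nabla^{2} u,
\qquad
u_{n}(x,t) = A_{n}\,\sin(k_{n} x)\,\cos(\omega_{n} t),
\qquad \omega_{n} = c\,k_{n}.
```

Boundary conditions (here, the edges of the visual cortical area) admit only discrete wavenumbers $k_{n}$, so periodic flicker near a resonant frequency $\omega_{n}$ preferentially excites the corresponding spatial mode, consistent with the banded tuning curves and cortex-bounded patterns the imaging revealed.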


Subject(s)
Primary Visual Cortex , Animals , Mice , Primary Visual Cortex/physiology , Male , Photic Stimulation , Mice, Inbred C57BL , Female , Visual Perception/physiology , Visual Cortex/physiology
19.
eNeuro ; 11(9)2024 Sep.
Article in English | MEDLINE | ID: mdl-39260892

ABSTRACT

Conscious reportability of visual input is associated with a bimodal neural response in the primary visual cortex (V1): an early-latency response coupled to stimulus features and a late-latency response coupled to stimulus report or detection. This late wave of activity, central to major theories of consciousness, is thought to be driven by the prefrontal cortex (PFC), responsible for "igniting" it. Here we analyzed two electrophysiological studies in mice performing different stimulus detection tasks and characterized neural activity profiles in three key cortical regions: V1, posterior parietal cortex (PPC), and PFC. We then developed a minimal network model, constrained by known connectivity between these regions, reproducing the spatiotemporal propagation of visual- and report-related activity. Remarkably, while PFC was indeed necessary to generate report-related activity in V1, this occurred only through the mediation of PPC. PPC, and not PFC, had the final veto in enabling the report-related late wave of V1 activity.


Subject(s)
Prefrontal Cortex , Animals , Prefrontal Cortex/physiology , Male , Mice, Inbred C57BL , Parietal Lobe/physiology , Photic Stimulation/methods , Mice , Models, Neurological , Primary Visual Cortex/physiology , Visual Perception/physiology , Visual Cortex/physiology , Female , Neurons/physiology , Feedback, Physiological/physiology
20.
J Vis ; 24(9): 12, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39287596

ABSTRACT

Numerals, that is, semantic expressions of numbers, enable us to have an exact representation of the amount of things. Visual processing of numerals plays an indispensable role in the recognition and interpretation of numbers. Here, we investigate how visual information from numerals is processed to achieve semantic understanding. We first found that partial occlusion of some digital numerals introduces bistable interpretations. Next, by using the visual adaptation method, we investigated the origin of this bistability in human participants. We showed that adaptation to digital and normal Arabic numerals, as well as homologous shapes, but not Chinese numerals, biases the interpretation of a partially occluded digital numeral. We suggest that this bistable interpretation is driven by intermediate shape processing stages of vision, that is, by features more complex than local visual orientations, but more basic than the abstract concepts of numerals.


Subject(s)
Photic Stimulation , Humans , Photic Stimulation/methods , Male , Female , Young Adult , Form Perception/physiology , Adult , Pattern Recognition, Visual/physiology , Visual Perception/physiology , Semantics , Mathematics