Results 1 - 20 of 68,434
1.
Sci Rep ; 14(1): 17797, 2024 08 01.
Article in English | MEDLINE | ID: mdl-39090337

ABSTRACT

Individuals exhibit massive variability in general cognitive skills that affect language processing. This variability is partly developmental. Here, we recruited a large sample of participants (N = 487), ranging from 9 to 90 years of age, and examined the involvement of nonverbal processing speed (assessed using visual and auditory reaction time tasks) and working memory (assessed using forward and backward Digit Span tasks) in a visual world task. Participants saw two objects on the screen and heard a sentence that referred to one of them. In half of the sentences, the target object could be predicted based on verb-selectional restrictions. We observed evidence for anticipatory processing on predictable compared to non-predictable trials. Visual and auditory processing speed had main effects on sentence comprehension and facilitated predictive processing, as evidenced by an interaction. We observed only weak evidence for the involvement of working memory in predictive sentence comprehension. Age had a nonlinear main effect (younger adults responded faster than children and older adults), but it did not differentially modulate predictive and non-predictive processing, nor did it modulate the involvement of processing speed and working memory. Our results contribute to delineating the cognitive skills that are involved in language-vision interactions.


Subject(s)
Cognition, Comprehension, Individuality, Short-Term Memory, Reaction Time, Humans, Adult, Child, Female, Male, Adolescent, Comprehension/physiology, Cognition/physiology, Middle Aged, Aged, Short-Term Memory/physiology, Young Adult, Aged 80 and over, Reaction Time/physiology, Language, Visual Perception/physiology, Linguistics
2.
Cereb Cortex ; 34(8), 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39110411

ABSTRACT

Speech perception requires the binding of spatiotemporally disjoint auditory-visual cues. The corresponding brain network-level information processing can be characterized by two complementary mechanisms: functional segregation, which refers to the localization of processing in either isolated or distributed modules across the brain, and integration, which pertains to cooperation among relevant functional modules. Here, we demonstrate using functional magnetic resonance imaging recordings that subjective perceptual experiences of multisensory speech stimuli, real and illusory, are represented in differential states of segregation-integration. We controlled the inter-subject variability of illusory/cross-modal perception parametrically by introducing temporal lags in the incongruent auditory-visual articulations of speech sounds within the McGurk paradigm. The states of segregation-integration balance were captured using two alternative computational approaches. First, the module responsible for cross-modal binding of sensory signals, defined as the perceptual binding network (PBN), was identified using standardized parametric statistical approaches, and the temporal correlations of its nodes with all other brain areas were computed. With increasing illusory perception, the majority of PBN nodes showed decreased cooperation with the rest of the brain, reflecting states of high segregation but reduced global integration. Second, the altered patterns of segregation-integration were cross-validated using graph theoretic measures.
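The segregation-integration balance described above is commonly quantified with graph measures such as modularity (segregation) and the participation coefficient (integration). Below is a minimal illustrative sketch of those two measures on simulated ROI time series; the binarizing threshold, the networkx community-detection routine, and all variable names are assumptions for illustration, not the authors' pipeline.

# Minimal illustration (not the authors' pipeline): segregation as modularity,
# integration as the mean participation coefficient, on simulated ROI signals.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
n_rois, n_tp, n_modules = 80, 200, 4
true_module = np.repeat(np.arange(n_modules), n_rois // n_modules)
shared = rng.standard_normal((n_modules, n_tp))
ts = shared[true_module] + rng.standard_normal((n_rois, n_tp))   # module-structured signals

fc = np.corrcoef(ts)
np.fill_diagonal(fc, 0.0)
adj = (fc > 0.2).astype(float)          # arbitrary binarizing threshold (assumption)
G = nx.from_numpy_array(adj)

modules = community.greedy_modularity_communities(G)   # detected communities
Q = community.modularity(G, modules)                   # segregation: higher Q = more modular

labels = np.empty(n_rois, dtype=int)
for m, nodes in enumerate(modules):
    labels[list(nodes)] = m
degree = adj.sum(axis=1)
pc = np.zeros(n_rois)                   # integration: participation coefficient per node
for i in range(n_rois):
    if degree[i] > 0:
        within = np.array([adj[i, labels == m].sum() for m in range(len(modules))])
        pc[i] = 1.0 - np.sum((within / degree[i]) ** 2)

print(f"modularity Q = {Q:.3f}, mean participation coefficient = {pc.mean():.3f}")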


Subject(s)
Brain, Magnetic Resonance Imaging, Speech Perception, Visual Perception, Humans, Brain/physiology, Brain/diagnostic imaging, Male, Female, Adult, Young Adult, Speech Perception/physiology, Visual Perception/physiology, Brain Mapping, Acoustic Stimulation, Nerve Net/physiology, Nerve Net/diagnostic imaging, Photic Stimulation/methods, Illusions/physiology, Neural Pathways/physiology, Auditory Perception/physiology
3.
PLoS One ; 19(8): e0308295, 2024.
Article in English | MEDLINE | ID: mdl-39102395

ABSTRACT

Film cognition explores the influence of cinematic elements, such as editing and film color, on viewers' perception. The Kuleshov effect, a famous example of how editing influences viewers' emotional perception, was initially proposed to support montage theory through the Kuleshov experiment. This effect, which has since been recognized as a manifestation of point-of-view (POV) editing practices, posits that the emotional interpretation of neutral facial expressions is influenced by the accompanying emotional scene in a face-scene-face sequence. However, concerns persist regarding the validity of previous studies, which often employed inauthentic film materials such as static images, leaving the question of its existence in authentic films unanswered. This study addresses these concerns by utilizing authentic films in two experiments. In Experiment 1, multiple film clips were captured under the guidance of a professional film director and seamlessly integrated into authentic film sequences. Fifty-nine participants viewed these face-scene-face film sequences and were tasked with rating the valence and emotional intensity of neutral faces. The findings revealed that the accompanying fearful or happy scenes significantly influenced the interpretation of emotion on neutral faces, eliciting perceptions of negative or positive emotions from the neutral face. These results affirm the existence of the Kuleshov effect within authentic films. In Experiment 2, 31 participants rated the valence and arousal of neutral faces while undergoing functional magnetic resonance imaging (fMRI). The behavioral results confirm the Kuleshov effect in the MRI scanner, while the neuroimaging data identify neural correlates that support its existence at the neural level. These correlates include the cuneus, precuneus, hippocampus, parahippocampal gyrus, posterior cingulate gyrus, orbitofrontal cortex, fusiform gyrus, and insula. These findings also underscore the contextual framing inherent in the Kuleshov effect. Overall, the study integrates film theory and cognitive neuroscience experiments, providing robust evidence supporting the existence of the Kuleshov effect through both subjective ratings and objective neuroimaging measurements. This research also contributes to a deeper understanding of the impact of film editing on viewers' emotional perception from the perspective of contemporary POV editing practices and neurocinematics, advancing the knowledge of film cognition.


Subject(s)
Emotions, Facial Expression, Magnetic Resonance Imaging, Motion Pictures, Humans, Emotions/physiology, Female, Male, Adult, Young Adult, Brain/physiology, Brain/diagnostic imaging, Brain Mapping, Photic Stimulation, Visual Perception/physiology
4.
PLoS One ; 19(8): e0306271, 2024.
Article in English | MEDLINE | ID: mdl-39110701

ABSTRACT

Music is omnipresent in daily life and may interact with critical cognitive processes, including memory. Despite music's presence during diverse daily activities such as studying, commuting, or working, the existing literature has yielded mixed results as to whether music improves or impairs memory for information experienced in parallel. To elucidate how music memory and its predictive structure modulate the encoding of novel information, we developed a cross-modal sequence learning task during which participants acquired sequences of abstract shapes accompanied by paired music. Our goal was to investigate whether familiar and structurally regular music could provide a "temporal schema" (rooted in the organized and hierarchical structure of music) to enhance the acquisition of parallel temporally-ordered visual information. Results revealed a complex interplay between music familiarity and music structural regularity in learning paired visual sequences. Notably, compared to a control condition, listening to well-learned, regularly-structured music (music with high predictability) significantly facilitated visual sequence encoding, yielding faster learning and retrieval. Conversely, learned but irregular music (where music memory violated musical syntax) significantly impaired sequence encoding. While those findings supported our mechanistic framework, intriguingly, unlearned irregular music (characterized by the lowest predictability) also produced a memory enhancement. In conclusion, this study demonstrates that concurrent music can modulate visual sequence learning, and the effect varies depending on the interaction between music familiarity and regularity, offering insights into potential applications for enhancing human memory.


Subject(s)
Music, Humans, Music/psychology, Female, Male, Young Adult, Adult, Learning/physiology, Auditory Perception/physiology, Visual Perception/physiology, Memory/physiology, Acoustic Stimulation, Photic Stimulation
5.
Elife ; 13, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39136204

ABSTRACT

A neural signature of serial dependence has been found, which mirrors the attractive bias of visual information seen in behavioral experiments.


Subject(s)
Visual Perception, Humans, Animals, Visual Perception/physiology
6.
Proc Biol Sci ; 291(2028): 20240865, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39137890

ABSTRACT

Many animals rely on visual camouflage to avoid detection and increase their chances of survival. Edge disruption is commonly seen in the natural world, with animals evolving high-contrast markings that are incongruent with their real body outline in order to avoid recognition. While many studies have investigated how camouflage properties influence viewer performance and eye movement in predation search tasks, researchers in the field have yet to consider how camouflage may directly modulate visual attention and object processing. To examine how disruptive coloration modulates attention, we use a visual object recognition model to quantify object saliency. We determine if object saliency is predictive of human behavioural performance and subjective certainty, as well as neural signatures of attention and decision-making. We show that increasing edge disruption not only reduces detection and identification performance but is also associated with a dampening of neurophysiological signatures of attentional filtering. Increased self-reported certainty regarding decisions corresponds with neurophysiological signatures of evidence accumulation and decision-making. In summary, we have demonstrated a potential mechanism by which edge disruption increases the evolutionary fitness of animals by reducing the brain's ability to distinguish signal from noise, and hence to detect and identify the camouflaged animal.


Subject(s)
Attention, Decision Making, Animals, Humans, Visual Perception, Biological Mimicry, Male
7.
Sci Rep ; 14(1): 18789, 2024 08 13.
Article in English | MEDLINE | ID: mdl-39138248

ABSTRACT

Motor contagions are implicit effects that observing another person's actions induces on one's own actions. Numerous studies conducted over the last two decades have demonstrated that both observed and predicted actions can induce various kinds of motor contagions in a human observer. However, motor contagions have always been investigated with regard to different features of an observed action, and it remains unclear whether the background environment in which an observed action takes place modulates motor contagions as well. Here, we investigated participant movements in an empirical hand steering task during which the participants were required to move a cursor through a visual channel after being presented with videos of an actor performing the same task. We manipulated the congruency between the actions shown in the video and the background channels and examined whether and how they affected the participants' own movements. We observed a clear interaction between the observed action and its background. The movement time of the participants' actions tended to increase or decrease depending on whether they observed a faster or slower movement, respectively, and these changes were amplified if the background was not congruent with the action contained within it. These results suggest that background information can modulate motor contagions in humans.


Subject(s)
Movement, Psychomotor Performance, Humans, Male, Female, Adult, Movement/physiology, Young Adult, Psychomotor Performance/physiology, Visual Perception/physiology, Hand/physiology, Photic Stimulation
8.
Article in English | MEDLINE | ID: mdl-39102324

ABSTRACT

Faces and bodies provide critical cues for social interaction and communication. Their structural encoding depends on configural processing, as suggested by the detrimental effect of stimulus inversion for both faces (i.e., the face inversion effect, FIE) and bodies (the body inversion effect, BIE). An occipito-temporal negative event-related potential (ERP) component peaking around 170 ms after stimulus onset (N170) is consistently elicited by human faces and bodies and is affected by the inversion of these stimuli. Although it is known that emotional expressions can boost structural encoding (resulting in larger N170 components for emotional than for neutral faces), little is known about body emotional expressions. Thus, the current study investigated the effects of different emotional expressions on structural encoding in combination with the FIE and BIE. Three ERP components (P1, N170, P2) were recorded using a 128-channel electroencephalogram (EEG) while participants were presented with (upright and inverted) faces and bodies conveying four possible emotions (happiness, sadness, anger, fear) or no emotion (neutral). Results demonstrated that inversion and emotional expressions independently affected accuracy and the amplitude of all ERP components (P1, N170, P2). In particular, faces showed specific effects of emotional expressions during the structural encoding stage (N170), while P2 amplitude (representing top-down conceptualisation) was modified by emotional body perception. Moreover, the task performed by participants (i.e., implicit vs. explicit processing of emotional information) differently influenced accuracy and ERP components. These results support integrated theories of visual perception, thus speaking in favour of the functional independence of the two neurocognitive pathways (one for structural encoding and one for emotional expression analysis) involved in social stimuli processing. Results are discussed highlighting the neurocognitive and computational advantages of the independence between the two pathways.
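For readers who want to see how an N170 amplitude of the kind analyzed above is typically extracted, here is a minimal MNE-Python sketch. The file name, ROI channels, condition labels, and time window are placeholders, not the study's actual parameters.

# Minimal MNE-Python sketch: N170 peak amplitude over occipito-temporal sensors
# for upright vs. inverted faces. File name, ROI channels, condition labels, and
# the time window are placeholders (assumptions), not the study's parameters.
import mne

epochs = mne.read_epochs("faces_bodies-epo.fif")      # hypothetical preprocessed epochs
roi = ["P7", "P8", "PO7", "PO8"]                      # typical occipito-temporal sites

def n170_peak(evoked, picks, tmin=0.13, tmax=0.21):
    """Most negative value in the N170 window, averaged over the ROI (in microvolts)."""
    data = evoked.copy().pick(picks).crop(tmin, tmax).data.mean(axis=0)
    return data.min() * 1e6

for cond in ("face/upright", "face/inverted"):        # hypothetical event labels
    amp = n170_peak(epochs[cond].average(), roi)
    print(f"{cond}: N170 peak ~ {amp:.2f} microvolts")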


Subject(s)
Electroencephalography, Emotions, Evoked Potentials, Facial Expression, Humans, Male, Emotions/physiology, Female, Young Adult, Adult, Evoked Potentials/physiology, Facial Recognition/physiology, Photic Stimulation, Visual Perception/physiology, Kinesics
9.
Nat Commun ; 15(1): 6885, 2024 Aug 11.
Article in English | MEDLINE | ID: mdl-39128923

ABSTRACT

When multiple visual stimuli are presented simultaneously in the receptive field, the neural response is suppressed compared to presenting the same stimuli sequentially. The prevailing hypothesis suggests that this suppression is due to competition among multiple stimuli for limited resources within receptive fields, governed by task demands. However, it is unknown how stimulus-driven computations may give rise to simultaneous suppression. Using fMRI, we find simultaneous suppression in single voxels, which varies with both stimulus size and timing, and progressively increases up the visual hierarchy. Using population receptive field (pRF) models, we find that compressive spatiotemporal summation rather than compressive spatial summation predicts simultaneous suppression, and that increased simultaneous suppression is linked to larger pRF sizes and stronger compressive nonlinearities. These results necessitate a rethinking of simultaneous suppression as the outcome of stimulus-driven compressive spatiotemporal computations within pRFs, and open new opportunities to study visual processing capacity across space and time.
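The compressive summation idea can be stated compactly: the pRF response is the stimulus overlap with a Gaussian pRF raised to an exponent n < 1. The sketch below, with made-up grid and parameters, illustrates why such a compressive nonlinearity predicts a smaller response to simultaneous than to sequential stimuli; it is an illustration, not the authors' spatiotemporal pRF model.

# Minimal sketch of compressive spatial summation in a pRF (illustrative, not the
# authors' spatiotemporal model): with an exponent n < 1, the response to two
# simultaneous stimuli is smaller than the sum of responses to each alone.
import numpy as np

grid = np.linspace(-10, 10, 201)                      # degrees of visual angle
xx, yy = np.meshgrid(grid, grid)

def css_response(stim, x0=0.0, y0=0.0, sigma=3.0, n=0.5):
    """Response = (stimulus overlap with a Gaussian pRF) ** n; n < 1 is compressive."""
    prf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return (stim * prf).sum() ** n

def square(cx, cy, half=1.5):
    return ((np.abs(xx - cx) <= half) & (np.abs(yy - cy) <= half)).astype(float)

a, b = square(-2, 0), square(2, 0)
simultaneous = css_response(a + b)                    # both stimuli at once
sequential = css_response(a) + css_response(b)        # same stimuli, one at a time
print(f"simultaneous / sequential = {simultaneous / sequential:.2f} (< 1 means suppression)")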


Subject(s)
Magnetic Resonance Imaging, Photic Stimulation, Visual Cortex, Humans, Visual Cortex/physiology, Visual Cortex/diagnostic imaging, Male, Female, Adult, Visual Perception/physiology, Young Adult, Visual Fields/physiology, Brain Mapping, Neurological Models
10.
J Vis ; 24(8): 3, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39102229

ABSTRACT

Visual perception involves binding of distinct features into a unified percept. Although traditional theories link feature binding to time-consuming recurrent processes, Holcombe and Cavanagh (2001) demonstrated ultrafast, early binding of features that belong to the same object. The task required binding of orientation and luminance within an exceptionally short presentation time. However, because visual stimuli were presented over multiple presentation cycles, their findings can alternatively be explained by temporal integration over the extended stimulus sequence. Here, we conducted three experiments manipulating the number of presentation cycles. If early binding occurs, one extremely short cycle should be sufficient for feature integration. Conversely, late binding theories predict that successful binding requires substantial time and improves with additional presentation cycles. Our findings indicate that task-relevant binding of features from the same object occurs slowly, supporting late binding theories.


Subject(s)
Photic Stimulation, Humans, Photic Stimulation/methods, Time Factors, Male, Young Adult, Adult, Female, Visual Perception/physiology, Orientation/physiology
11.
J Psychopharmacol ; 38(8): 724-734, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39087306

ABSTRACT

BACKGROUND: Cannabis is the most widely used psychoactive drug in the United States. While multiple studies have associated acute cannabis consumption with alterations in cognitive function (e.g., visual and spatial attention), far less is known regarding the effects of chronic consumption on the neural dynamics supporting these cognitive functions. METHODS: We used magnetoencephalography (MEG) and an established visuospatial processing task to elicit multi-spectral neuronal responses in 44 regular cannabis users and 53 demographically matched non-user controls. To examine the effects of chronic cannabis use on the oscillatory dynamics underlying visuospatial processing, neural responses were imaged using a time-frequency resolved beamformer and compared across groups. RESULTS: Neuronal oscillations serving visuospatial processing were identified in the theta (4-8 Hz), alpha (8-14 Hz), and gamma range (56-76 Hz), and these were imaged and examined for group differences. Our key results indicated that users exhibited weaker theta oscillations in occipital and cerebellar regions and weaker gamma responses in the left temporal cortices compared to non-users. Lastly, alpha oscillations did not differ, but alpha connectivity among higher-order attention areas was weaker in cannabis users relative to non-users and correlated with performance. CONCLUSIONS: Overall, these results suggest that chronic cannabis users have alterations in the oscillatory dynamics and neural connectivity serving visuospatial attention. Such alterations were observed across multiple cortical areas critical for higher-order processing and may reflect compensatory activity and/or the initial emergence of aberrant dynamics. Future work is needed to fully understand the implications of altered multispectral oscillations and neural connectivity in cannabis users.
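A sensor-level sketch of the kind of time-frequency decomposition used in such MEG studies is shown below, using MNE-Python; the epochs file name is a placeholder, the frequency bands echo those reported above, and the source-space beamformer step is omitted, so this is an illustration rather than the authors' pipeline.

# Minimal sensor-level MNE-Python sketch of time-frequency power in the bands
# reported above. The epochs file is a placeholder and the paper's source-space
# beamformer step is omitted.
import numpy as np
import mne
from mne.time_frequency import tfr_morlet

epochs = mne.read_epochs("visuospatial_task-epo.fif")   # hypothetical MEG epochs
bands = {"theta": (4, 8), "alpha": (8, 14), "gamma": (56, 76)}

for name, (fmin, fmax) in bands.items():
    freqs = np.arange(fmin, fmax + 1, 2.0)
    power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                       return_itc=False, average=True, decim=4)
    power.apply_baseline((-0.4, 0.0), mode="percent")   # percent change from baseline
    print(name, power.data.shape)                       # (n_channels, n_freqs, n_times)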


Subject(s)
Attention, Magnetoencephalography, Humans, Male, Female, Adult, Young Adult, Attention/drug effects, Attention/physiology, Visual Perception/physiology, Visual Perception/drug effects, Brain/physiopathology, Brain/drug effects, Cognition/drug effects, Cognition/physiology
12.
J Vis ; 24(8): 2, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39087936

ABSTRACT

The Corsi (block-tapping) paradigm is a classic and well-established visuospatial working memory task in humans, involving internal computations (memorizing of item sequences, organizing and updating the memorandum, and recall processes) as well as both overt and covert shifts of attention that facilitate rehearsal, serving to maintain the Corsi sequences during the retention phase. Here, we introduce a novel digital version of a Corsi task in which i) the difficulty of the memorandum (using sequence lengths ranging from 3 to 8) was controlled, ii) the use of overt and/or covert attention shifts as well as the visuospatial working memory load during the retention phase was manipulated, and iii) shifts of attention were quantified in all experimental phases. With this, we present behavioral data that demonstrate, characterize, and classify the individual effects of overt and covert strategies used as a means of encoding and rehearsal. In a full within-subject design, we tested 28 participants who had to solve three different Corsi conditions. In condition A neither strategy was restricted, in condition B the overt strategy was suppressed, and in condition C both the overt and the covert strategies were suppressed. Analyzing Corsi span, (eye) exploration index, and pupil size (change), the data clearly show a continuum between overt and covert strategies across all participants (indicating inter-individual variability). Further, all participants showed stable strategy choice (indicating intra-individual stability), meaning that the preferred strategy was maintained in all three conditions, phases, and sequence lengths of the experiment.
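As a rough illustration of the task logic (not the authors' implementation), the following sketch generates Corsi-style sequences of length 3 to 8 and scores a span; the board size, trials per length, and scoring rule are assumptions.

# Rough sketch of a Corsi-style trial generator and span score (sequence lengths
# 3-8 as above). Board size, trials per length, and the scoring rule are assumptions.
import random

random.seed(2)
N_BLOCKS = 9                                   # assumed number of tappable blocks

def make_trial(length):
    """One Corsi trial: an ordered sequence of distinct block indices."""
    return random.sample(range(N_BLOCKS), length)

def corsi_span(respond, lengths=range(3, 9), trials_per_length=2):
    """Longest sequence length at which at least one trial is reproduced correctly."""
    span = 0
    for length in lengths:
        if any(respond(make_trial(length)) for _ in range(trials_per_length)):
            span = length
        else:
            break
    return span

# Toy "participant" that reproduces sequences of up to five items
print(corsi_span(lambda seq: len(seq) <= 5))   # -> 5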


Subject(s)
Attention, Short-Term Memory, Humans, Male, Short-Term Memory/physiology, Female, Attention/physiology, Adult, Young Adult, Photic Stimulation/methods, Space Perception/physiology, Psychomotor Performance/physiology, Visual Perception/physiology
13.
J Exp Psychol Gen ; 153(8): 2127-2141, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39101910

ABSTRACT

Tools enable humans to extend their sensing abilities beyond the natural limits of their hands, allowing them to sense objects as if they were using their hands directly. The similarities between direct hand interactions with objects (hand-based sensing) and the ability to extend sensory information processing beyond the hand (tool-mediated sensing) entail the existence of comparable processes for integrating tool- and hand-sensed information with vision, raising the question of whether tools support vision in bimanual object manipulations. Here, we investigated participants' performance while grasping objects either held with a tool or with their hand and compared these conditions with visually guided grasping (Experiment 1). By measuring reaction time, peak velocity, and peak of grip aperture, we found that actions were initiated earlier and performed with a smaller peak grip aperture when the object was seen and held with the tool or the contralateral hand compared to when it was only seen. Thus, tool-mediated sensing effectively supports vision in multisensory grasping and, even more intriguingly, resembles hand-based sensing. We excluded that results were due to the force exerted on the tool's handle (Experiment 2). Additionally, as for hand-based sensing, we found evidence that the tool supports vision by mainly providing object positional information (Experiment 3). Thus, integrating the tool-sensed position of the object with vision is sufficient to promote a multisensory advantage in grasping. Our findings indicate that multisensory integration mechanisms significantly improve grasping actions and fine-tune contralateral hand movements even when object information is only indirectly sensed through a tool. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Hand Strength, Hand, Psychomotor Performance, Visual Perception, Humans, Male, Female, Adult, Psychomotor Performance/physiology, Hand Strength/physiology, Young Adult, Visual Perception/physiology, Hand/physiology, Reaction Time/physiology
14.
Anim Cogn ; 27(1): 56, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39136822

ABSTRACT

Recent research suggests that socio-ecological factors such as dietary specialization and social complexity may be drivers of advanced cognitive skills among primates. Therefore, we assessed the ability of 12 black-handed spider monkeys (Ateles geoffroyi), a highly frugivorous platyrrhine primate with strong fission-fusion dynamics, to succeed in a serial visual reversal learning task. Using a two-alternative choice paradigm we first trained the animals to reliably choose a rewarded visual stimulus over a non-rewarded one. Upon reaching a pre-set learning criterion we then switched the reward values of the two stimuli and assessed if and how quickly the animals learned to reverse their choices, again to a pre-set learning criterion. This stimulus reversal procedure was then continued for a total of 80 sessions of 10 trials each. We found that the spider monkeys quickly learned to reliably discriminate between two simultaneously presented visual stimuli, that they succeeded in a visual reversal learning task, and that they displayed an increase in learning speed across consecutive reversals, suggesting that they are capable of serial reversal learning-set formation with visual cues. The fastest-learning individual completed five reversals within the 80 sessions. The spider monkeys outperformed most other primate and nonprimate mammal species tested so far on this type of cognitive task, including chimpanzees, with regard to their learning speed in both the initial learning task and in the first reversal task, suggesting a high degree of behavioral flexibility and inhibitory control. Our findings support the notion that socio-ecological factors such as dietary specialization and social complexity foster advanced cognitive skills in primates.
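To make the serial-reversal procedure concrete, the following toy simulation mirrors the 80-sessions-of-10-trials structure described above; the learning criterion and the simulated learner are illustrative assumptions, not a model of the monkeys' behavior.

# Toy simulation of the serial-reversal procedure: 80 sessions of 10 trials, with
# the reward contingency swapped whenever an assumed criterion is met. The learner
# is a simple stand-in, not a model of the monkeys.
import random

random.seed(1)
N_SESSIONS, TRIALS_PER_SESSION = 80, 10
CRITERION = 9                                  # assumed: >= 9/10 correct triggers a reversal

rewarded = "A"                                 # currently rewarded stimulus
p_correct = 0.5                                # toy learner's probability of a correct choice
reversals = 0

for session in range(N_SESSIONS):
    correct = sum(random.random() < p_correct for _ in range(TRIALS_PER_SESSION))
    p_correct = min(0.95, p_correct + 0.05)    # learner improves within a contingency
    if correct >= CRITERION:
        rewarded = "B" if rewarded == "A" else "A"   # swap reward values of the stimuli
        p_correct = 0.3                              # performance drops right after a reversal
        reversals += 1

print(f"reversals completed in {N_SESSIONS} sessions: {reversals}")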


Subject(s)
Reversal Learning, Animals, Male, Female, Ateles geoffroyi, Visual Perception, Reward, Serial Learning, Atelinae/physiology
15.
Sci Rep ; 14(1): 18298, 2024 08 07.
Article in English | MEDLINE | ID: mdl-39112629

ABSTRACT

Hand visibility affects motor control, perception, and attention, as visual information is integrated into an internal model of somatomotor control. Spontaneous brain activity, i.e., at rest, in the absence of an active task, is correlated among somatomotor regions that are jointly activated during motor tasks. Recent studies suggest that spontaneous activity patterns not only replay task activation patterns but also maintain a model of the body's and environment's statistical regularities (priors), which may be used to predict upcoming behavior. Here, we test whether spontaneous activity in the human somatomotor cortex as measured using fMRI is modulated by visual stimuli that display hands vs. non-hand stimuli and by the use/action they represent. A multivariate pattern analysis was performed to examine the similarity between spontaneous activity patterns and task-evoked patterns to the presentation of natural hands, robot hands, gloves, or control stimuli (food). In the left somatomotor cortex, we observed a stronger (multivoxel) spatial correlation between resting state activity and natural hand picture patterns compared to other stimuli. No task-rest similarity was found in the visual cortex. Spontaneous activity patterns in somatomotor brain regions code for the visual representation of human hands and their use.
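The multivoxel rest-task similarity logic described above amounts to spatially correlating each resting-state volume with a task-evoked pattern and averaging. A minimal numpy sketch follows; array shapes and condition names are placeholders, not the study's data.

# Minimal numpy sketch of the multivoxel rest-task similarity logic: correlate each
# resting-state volume's spatial pattern with a task-evoked pattern and average.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_rest_vols = 500, 300
rest = rng.standard_normal((n_rest_vols, n_voxels))        # ROI resting-state time series
task_patterns = {                                          # task-evoked multivoxel patterns
    "natural_hand": rng.standard_normal(n_voxels),
    "robot_hand": rng.standard_normal(n_voxels),
    "glove": rng.standard_normal(n_voxels),
    "food": rng.standard_normal(n_voxels),
}

def rest_task_similarity(rest_ts, pattern):
    """Mean spatial (Pearson) correlation between each rest volume and the task pattern."""
    z_rest = (rest_ts - rest_ts.mean(1, keepdims=True)) / rest_ts.std(1, keepdims=True)
    z_pat = (pattern - pattern.mean()) / pattern.std()
    return float((z_rest @ z_pat / rest_ts.shape[1]).mean())

for cond, pat in task_patterns.items():
    print(f"{cond}: rest-task similarity r = {rest_task_similarity(rest, pat):+.3f}")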


Subject(s)
Brain Mapping, Hand, Magnetic Resonance Imaging, Visual Perception, Humans, Hand/physiology, Male, Female, Adult, Visual Perception/physiology, Young Adult, Brain/physiology, Brain/diagnostic imaging, Motor Cortex/physiology, Motor Cortex/diagnostic imaging, Rest/physiology, Photic Stimulation, Visual Cortex/physiology, Visual Cortex/diagnostic imaging
16.
J Vis ; 24(8): 4, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39110584

ABSTRACT

Across the visual periphery, perceptual and metacognitive abilities differ depending on the locus of visual attention, the location of peripheral stimulus presentation, the task design, and many other factors. In this investigation, we aimed to illuminate the relationship between attention and eccentricity in the visual periphery by estimating perceptual sensitivity, metacognitive sensitivity, and response biases across the visual field. In a 2AFC detection task, participants were asked to determine whether a signal was present or absent at one of eight peripheral locations (±10°, 20°, 30°, and 40°), using either a valid or invalid attentional cue. As expected, results revealed that perceptual sensitivity declined with eccentricity and was modulated by attention, with higher sensitivity on validly cued trials. Furthermore, a significant main effect of eccentricity on response bias emerged, with variable (but relatively unbiased) c'a values from 10° to 30°, and conservative c'a values at 40°. Regarding metacognitive sensitivity, significant main effects of attention and eccentricity were found, with metacognitive sensitivity decreasing with eccentricity, and decreasing in the invalid cue condition. Interestingly, metacognitive efficiency, as measured by the ratio of meta-d'a/d'a, was not modulated by attention or eccentricity. Overall, these findings demonstrate (1) that in some circumstances, observers have surprisingly robust metacognitive insights into how performance changes across the visual field and (2) that the periphery may be subject to variable detection biases that are contingent on the exact location in peripheral space.
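The sensitivity and bias measures named above (d'a, c'a) have standard equal-variance counterparts. The sketch below computes d' and criterion c from hit and false-alarm counts with a log-linear correction; the counts are made up, and the meta-d' model fit is not shown.

# Minimal sketch of the equal-variance signal-detection quantities behind the
# measures above: d' (sensitivity) and c (bias) from hit/false-alarm counts.
from scipy.stats import norm

def sdt(hits, misses, fas, crs):
    """Return (d_prime, criterion_c) with a log-linear correction for extreme rates."""
    h = (hits + 0.5) / (hits + misses + 1.0)     # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1.0)          # corrected false-alarm rate
    z_h, z_f = norm.ppf(h), norm.ppf(f)
    return z_h - z_f, -0.5 * (z_h + z_f)

# Example: a validly cued location at 40 degrees eccentricity (illustrative counts)
d_prime, c = sdt(hits=38, misses=12, fas=8, crs=42)
print(f"d' = {d_prime:.2f}, c = {c:.2f} (c > 0 indicates a conservative bias)")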


Subject(s)
Attention, Cues, Metacognition, Photic Stimulation, Visual Fields, Humans, Visual Fields/physiology, Attention/physiology, Male, Young Adult, Female, Adult, Photic Stimulation/methods, Metacognition/physiology, Visual Perception/physiology
17.
J Vis ; 24(8): 5, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39110583

ABSTRACT

Contextual cueing is a phenomenon of visual statistical learning observed in visual search tasks. Previous research has found that the degree of deviation of items from their centroid, known as variability, determines the extent of generalization for a repeated scene. Introducing variability significantly increases the dissimilarity between multiple occurrences of the same repeated layout. However, current theories do not explain the mechanisms that help to overcome this dissimilarity during contextual cue learning. We propose that the cognitive system initially abstracts specific scenes into scene layouts through an automatic clustering process unrelated to specific repeated scenes, and subsequently uses these abstracted scene layouts for contextual cue learning. Experiment 1 indicates that introducing greater variability in search scenes hinders contextual cue learning. Experiment 2 further establishes that conducting extensive visual searches involving spatial variability in entirely novel scenes facilitates subsequent contextual cue learning involving corresponding scene variability, confirming that the learning of clustering knowledge precedes contextual cue learning and is independent of specific repeated scenes. Overall, this study demonstrates the existence of multiple levels of learning in visual statistical learning, where item-level learning can serve as material for layout-level learning, and generalization reflects the constraining role of item-level knowledge on layout-level knowledge.


Subject(s)
Cues, Humans, Photic Stimulation/methods, Learning/physiology, Young Adult, Male, Female, Visual Pattern Recognition/physiology, Visual Perception/physiology, Adult, Cluster Analysis, Attention/physiology
18.
JASA Express Lett ; 4(8), 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39140831

ABSTRACT

Previous research suggests that noise sensitivity is related to inefficient auditory processing that might increase the mental load of noise and affect noise evaluation. This assumption was tested in an experiment using a dual-task paradigm with a visual primary task and an auditory secondary task. Results showed that participants' noise sensitivity was positively correlated with mental effort. Furthermore, mental effort mediated the effect of noise sensitivity on loudness and unpleasantness ratings. The results thus support the idea that noise sensitivity is related to increased mental effort and difficulties in filtering auditory information and that situational factors should be considered.
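The mediation claim above (mental effort mediating the effect of noise sensitivity on ratings) can be illustrated with a simple product-of-coefficients sketch on simulated data; the variable names and the OLS approach are illustrative, not the study's analysis.

# Simple product-of-coefficients mediation sketch on simulated data (illustrative,
# not the study's analysis): does mental effort mediate the effect of noise
# sensitivity on unpleasantness ratings?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
noise_sensitivity = rng.standard_normal(n)
mental_effort = 0.5 * noise_sensitivity + rng.standard_normal(n)               # a-path
unpleasantness = 0.6 * mental_effort + 0.1 * noise_sensitivity + rng.standard_normal(n)

# a-path: predictor -> mediator
a = sm.OLS(mental_effort, sm.add_constant(noise_sensitivity)).fit().params[1]
# b-path: mediator -> outcome, controlling for the predictor
X = sm.add_constant(np.column_stack([mental_effort, noise_sensitivity]))
b = sm.OLS(unpleasantness, X).fit().params[1]

print(f"indirect (mediated) effect a*b = {a * b:.3f}")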


Subject(s)
Auditory Perception, Noise, Humans, Noise/adverse effects, Male, Female, Adult, Auditory Perception/physiology, Young Adult, Loudness Perception/physiology, Visual Perception/physiology, Acoustic Stimulation
19.
Commun Biol ; 7(1): 965, 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39122960

ABSTRACT

Predictive coding theory suggests that the brain anticipates sensory information using prior knowledge. While this theory has been extensively researched within individual sensory modalities, evidence for predictive processing across sensory modalities is limited. Here, we examine how crossmodal knowledge is represented and learned in the brain by identifying the hierarchical networks underlying crossmodal predictions when information in one sensory modality leads to a prediction in another modality. We recorded electroencephalography (EEG) during a crossmodal audiovisual local-global oddball paradigm, in which the predictability of transitions between tones and images was manipulated at both the stimulus and sequence levels. To dissect the complex predictive signals in our EEG data, we employed a model-fitting approach to untangle neural interactions across modalities and hierarchies. The model-fitting results demonstrate that audiovisual integration occurs at both the level of individual stimulus interactions and that of multi-stimulus sequences. Furthermore, we identify the spatio-spectro-temporal signatures of prediction-error signals across hierarchies and modalities, and reveal that auditory and visual prediction errors are rapidly redirected to the central-parietal electrodes during learning through alpha-band interactions. Our study suggests a crossmodal predictive coding mechanism in which unimodal predictions are processed by distributed brain networks to form crossmodal knowledge.


Subject(s)
Auditory Perception, Brain, Electroencephalography, Visual Perception, Humans, Brain/physiology, Auditory Perception/physiology, Visual Perception/physiology, Male, Female, Adult, Young Adult, Acoustic Stimulation, Photic Stimulation
20.
PLoS One ; 19(8): e0306736, 2024.
Article in English | MEDLINE | ID: mdl-39088399

ABSTRACT

The label-feedback hypothesis states that language can modulate visual processing. In particular, hearing or reading aloud target names (labels) speeds up performance in visual search tasks by facilitating target detection, and this advantage is often measured against a condition where the target name is shown visually (i.e., via the same modality as the search task). The current study conceptually complements and expands previous investigations. The effect of a multimodal label presentation (i.e., an audio+visual, AV, priming label) in a visual search task is compared to that of a multimodal (i.e., white noise+visual, NV, label) and two unimodal (i.e., audio, A, or visual, V, label) control conditions. The name of a category (i.e., a label at the superordinate level) is used as a cue instead of the more commonly used target name (a basic-level label), with targets belonging to one of three categories: garments, improper weapons, and proper weapons. These categories vary in their structure, improper weapons being an ad hoc (i.e., context-dependent) category, unlike proper weapons and garments. The preregistered analysis shows an overall facilitation of visual search performance in the AV condition compared to the NV condition, confirming that the label-feedback effect may not be explained away by the effects of multimodal stimulation alone and that it extends to superordinate labels. Moreover, exploratory analyses show that this facilitation is driven by the garments and proper weapons categories, rather than by improper weapons. Thus, the superordinate label-feedback effect is modulated by the structural properties of a category. These findings are consistent with the idea that the AV condition prompts an "up-regulation" of the label, a requirement for enhancing the label's beneficial effects, but not when the label refers to an ad hoc category. They also highlight the peculiar status of the category of improper weapons and set it apart from that of proper weapons.


Subject(s)
Visual Perception, Humans, Female, Visual Perception/physiology, Male, Adult, Young Adult, Reaction Time/physiology, Photic Stimulation, Language, Reading