Results 1 - 20 of 747
1.
Annu Rev Neurosci ; 47(1): 255-276, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38663429

ABSTRACT

The zebrafish visual system has become a paradigmatic preparation for behavioral and systems neuroscience. Around 40 types of retinal ganglion cells (RGCs) serve as matched filters for stimulus features, including light, optic flow, prey, and objects on a collision course. RGCs distribute their signals via axon collaterals to 12 retinorecipient areas in forebrain and midbrain. The major visuomotor hub, the optic tectum, harbors nine RGC input layers that combine information on multiple features. The retinotopic map in the tectum is locally adapted to visual scene statistics and visual subfield-specific behavioral demands. Tectal projections to premotor centers are topographically organized according to behavioral commands. The known connectivity in more than 20 processing streams allows us to dissect the cellular basis of elementary perceptual and cognitive functions. Visually evoked responses, such as prey capture or loom avoidance, are controlled by dedicated multistation pathways that, at least in the larva, resemble labeled lines. This architecture serves the neuronal code's purpose of driving adaptive behavior.


Subject(s)
Retinal Ganglion Cells , Superior Colliculi , Visual Pathways , Zebrafish , Animals , Visual Pathways/physiology , Zebrafish/physiology , Retinal Ganglion Cells/physiology , Superior Colliculi/physiology , Visual Perception/physiology
2.
Nature ; 631(8020): 378-385, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38961292

ABSTRACT

The execution of goal-oriented behaviours requires a spatially coherent alignment between sensory and motor maps. The current model for sensorimotor transformation in the superior colliculus relies on the topographic mapping of static spatial receptive fields onto movement endpoints [1-6]. Here, to experimentally assess the validity of this canonical static model of alignment, we dissected the visuo-motor network in the superior colliculus and performed in vivo intracellular and extracellular recordings across layers, in restrained and unrestrained conditions, to assess both the motor and the visual tuning of individual motor and premotor neurons. We found that collicular motor units have poorly defined visual static spatial receptive fields and respond instead to kinetic visual features, revealing the existence of a direct alignment in vectorial space between sensory and movement vectors, rather than between spatial receptive fields and movement endpoints as canonically hypothesized. We show that a neural network built according to these kinetic alignment principles is ideally placed to sustain ethological behaviours such as the rapid interception of moving and static targets. These findings reveal a novel dimension of the sensorimotor alignment process. By extending the alignment from the static to the kinetic domain, this work provides a novel conceptual framework for understanding the nature of sensorimotor convergence and its relevance in guiding goal-directed behaviours.


Subject(s)
Models, Neurological , Movement , Superior Colliculi , Visual Perception , Animals , Female , Male , Goals , Kinetics , Motor Neurons/physiology , Movement/physiology , Nerve Net/cytology , Nerve Net/physiology , Photic Stimulation , Psychomotor Performance/physiology , Reproducibility of Results , Superior Colliculi/cytology , Superior Colliculi/physiology , Visual Perception/physiology
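A toy illustration of why a kinetic alignment could support interception, written as a minimal sketch: a readout that aligns the movement vector with the sensed motion vector leads a moving target, whereas a readout that maps a static receptive field onto a movement endpoint aims at where the target was. All parameters (reaction delay, stimulus ranges) are hypothetical assumptions for illustration, not the paper's model.

```python
import numpy as np

# Toy comparison of "static" vs "kinetic" sensorimotor readouts for
# intercepting a moving target. Hypothetical parameters; illustration only.

def intercept_error(readout, target_pos, target_vel, delay=0.1):
    """Distance between movement endpoint and target location at arrival."""
    if readout == "static":
        # Aim at the target's position when the movement was planned.
        endpoint = target_pos
    else:  # "kinetic": align the movement vector with the sensed motion vector
        endpoint = target_pos + target_vel * delay  # lead the target
    true_pos_at_arrival = target_pos + target_vel * delay
    return np.linalg.norm(endpoint - true_pos_at_arrival)

rng = np.random.default_rng(0)
for readout in ("static", "kinetic"):
    errs = [intercept_error(readout,
                            rng.uniform(-1, 1, 2),      # deg, target position
                            rng.uniform(-20, 20, 2))    # deg/s, target velocity
            for _ in range(1000)]
    print(f"{readout:7s} mean interception error: {np.mean(errs):.3f} deg")
```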
3.
Nature ; 629(8014): 1100-1108, 2024 May.
Article in English | MEDLINE | ID: mdl-38778103

ABSTRACT

The rich variety of behaviours observed in animals arises through the interplay between sensory processing and motor control. To understand these sensorimotor transformations, it is useful to build models that predict not only neural responses to sensory input [1-5] but also how each neuron causally contributes to behaviour [6,7]. Here we demonstrate a novel modelling approach to identify a one-to-one mapping between internal units in a deep neural network and real neurons by predicting the behavioural changes that arise from systematic perturbations of more than a dozen neuronal cell types. A key ingredient that we introduce is 'knockout training', which involves perturbing the network during training to match the perturbations of the real neurons during behavioural experiments. We apply this approach to model the sensorimotor transformations of Drosophila melanogaster males during a complex, visually guided social behaviour [8-11]. The visual projection neurons at the interface between the optic lobe and central brain form a set of discrete channels [12], and prior work indicates that each channel encodes a specific visual feature to drive a particular behaviour [13,14]. Our model reaches a different conclusion: combinations of visual projection neurons, including those involved in non-social behaviours, drive male interactions with the female, forming a rich population code for behaviour. Overall, our framework consolidates behavioural effects elicited from various neural perturbations into a single, unified model, providing a map from stimulus to neuronal cell type to behaviour, and enabling future incorporation of wiring diagrams of the brain [15] into the model.


Subject(s)
Brain , Drosophila melanogaster , Models, Neurological , Neurons , Optic Lobe, Nonmammalian , Social Behavior , Visual Perception , Animals , Female , Male , Drosophila melanogaster/physiology , Drosophila melanogaster/cytology , Neurons/classification , Neurons/cytology , Neurons/physiology , Optic Lobe, Nonmammalian/cytology , Optic Lobe, Nonmammalian/physiology , Visual Perception/physiology , Nerve Net/cytology , Nerve Net/physiology , Brain/cytology , Brain/physiology
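The 'knockout training' idea lends itself to a compact sketch: during training, the network units standing in for identified cell types are silenced to mirror the perturbations applied to real neurons in the behavioural experiments. The toy network below is a minimal sketch; sizes, names, and the tanh nonlinearity are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

# Minimal sketch of "knockout training": units standing in for identified
# cell types are silenced during training to mirror real perturbation
# experiments. Toy network; hypothetical sizes and parameters.

rng = np.random.default_rng(1)
n_in, n_types, n_out = 8, 12, 2   # stimulus dims, cell types, behaviour dims
W1 = rng.normal(size=(n_types, n_in)) * 0.1
W2 = rng.normal(size=(n_out, n_types)) * 0.1

def forward(x, knocked_out=()):
    h = np.tanh(W1 @ x)           # one hidden unit per modelled cell type
    h[list(knocked_out)] = 0.0    # silence the "genetically targeted" units
    return W2 @ h, h

# During training, each batch would be paired with the perturbation applied
# in the matching behavioural experiment, e.g. silencing cell type 3:
x = rng.normal(size=n_in)
y_intact, _ = forward(x)
y_ko, _ = forward(x, knocked_out=[3])
print("intact:", y_intact, " type-3 knockout:", y_ko)
```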
4.
PLoS Biol ; 22(5): e3002358, 2024 May.
Article in English | MEDLINE | ID: mdl-38768251

ABSTRACT

Neurons responding during action execution and action observation were discovered in the ventral premotor cortex 3 decades ago. However, the visual features that drive the responses of action observation/execution neurons (AOENs) remain unknown. We investigated the neural responses of AOENs in ventral premotor area F5c of 4 macaques during the observation of action videos and crucial control stimuli. The large majority of AOENs showed highly phasic responses during the action videos, with a preference for the moment that the hand made contact with the object. They also responded to an abstract shape moving towards but not interacting with an object, even when the shape moved on a scrambled background, implying that most AOENs in F5c do not require the perception of causality or a meaningful action. Additionally, the majority of AOENs responded to static frames of the videos. Our findings show that very elementary stimuli, even without a grasping context, are sufficient to drive responses in F5c AOENs.


Subject(s)
Motor Cortex , Neurons , Photic Stimulation , Animals , Motor Cortex/physiology , Photic Stimulation/methods , Neurons/physiology , Male , Macaca mulatta/physiology , Visual Perception/physiology , Macaca/physiology
5.
PLoS Biol ; 22(2): e3002494, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38319934

ABSTRACT

Effective interactions with the environment rely on the integration of multisensory signals: Our brains must efficiently combine signals that share a common source, and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the 2 age groups. This dissociation (comparable information encoded in brain activation patterns across the 2 age groups, but age-related increases in regional blood-oxygen-level-dependent responses) contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.


Subject(s)
Brain Mapping , Visual Perception , Humans , Aged , Bayes Theorem , Visual Perception/physiology , Brain/physiology , Attention/physiology , Acoustic Stimulation/methods , Auditory Perception/physiology , Photic Stimulation/methods , Magnetic Resonance Imaging
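The integration step that this study probes is commonly modelled with textbook reliability-weighted (maximum-likelihood) cue fusion; the sketch below shows that standard formula, not the paper's full multivariate Bayesian decoding analysis. All numbers are illustrative.

```python
import numpy as np

# Textbook reliability-weighted (maximum-likelihood) fusion of an auditory
# and a visual location estimate: a simplified stand-in for the paper's
# Bayesian analysis of audiovisual spatial integration.

def fuse(x_a, sigma_a, x_v, sigma_v):
    w_a = 1 / sigma_a**2                            # auditory reliability
    w_v = 1 / sigma_v**2                            # visual reliability
    x_hat = (w_a * x_a + w_v * x_v) / (w_a + w_v)   # fused location estimate
    sigma_hat = np.sqrt(1 / (w_a + w_v))            # fused uncertainty
    return x_hat, sigma_hat

# Vision is usually the more reliable spatial cue, so the fused percept is
# drawn toward the visual location (the ventriloquist effect):
print(fuse(x_a=10.0, sigma_a=8.0, x_v=0.0, sigma_v=2.0))
```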
6.
PLoS Biol ; 22(4): e3002564, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38557761

ABSTRACT

Behavioral and neuroscience studies in humans and primates have shown that memorability is an intrinsic property of an image that predicts its strength of encoding into and retrieval from memory. While previous work has independently probed when or where this memorability effect may occur in the human brain, a description of its spatiotemporal dynamics is missing. Here, we used representational similarity analysis (RSA) to combine functional magnetic resonance imaging (fMRI) with source-estimated magnetoencephalography (MEG) to simultaneously measure when and where the human cortex is sensitive to differences in image memorability. Results reveal that visual perception of High Memorable images, compared to Low Memorable images, recruits a set of regions of interest (ROIs) distributed throughout the ventral visual cortex, eliciting a late memorability response (from around 300 ms) in early visual cortex (EVC), inferior temporal cortex, lateral occipital cortex, fusiform gyrus, and banks of the superior temporal sulcus. Image memorability magnitudes are represented after high-level feature processing in visual regions and reflected in classical memory regions in the medial temporal lobe (MTL). Our results present, to our knowledge, the first unified spatiotemporal account of the visual memorability effect across the human cortex, further supporting the levels-of-processing theory of perception and memory.


Subject(s)
Brain , Visual Perception , Animals , Humans , Visual Perception/physiology , Brain/physiology , Cerebral Cortex/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Magnetoencephalography/methods , Magnetic Resonance Imaging/methods , Brain Mapping/methods
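Representational similarity analysis, the method named in the abstract, reduces to a short recipe: build a stimulus-by-stimulus representational dissimilarity matrix (RDM) in each measurement space and correlate their lower triangles. A minimal sketch with random stand-in data; the array sizes and the correlation-distance metric are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Minimal RSA sketch: correlate the stimulus-by-stimulus dissimilarity
# structure of two measurement spaces (e.g., MEG sensors at one time point
# vs. voxels in one fMRI ROI). Random data; only the recipe is real.

rng = np.random.default_rng(2)
n_images = 20
meg = rng.normal(size=(n_images, 64))     # images x sensors
fmri = rng.normal(size=(n_images, 500))   # images x voxels

rdm_meg = pdist(meg, metric="correlation")   # condensed RDM (lower triangle)
rdm_fmri = pdist(fmri, metric="correlation")

rho, p = spearmanr(rdm_meg, rdm_fmri)
print(f"MEG-fMRI RDM correlation: rho = {rho:.3f}, p = {p:.3f}")
```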
7.
PLoS Biol ; 22(6): e3002668, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38857283

ABSTRACT

Despite the diverse genetic origins of autism spectrum disorders (ASDs), affected individuals share strikingly similar and correlated behavioural traits that include perceptual and sensory processing challenges. Notably, the severity of these sensory symptoms is often predictive of the expression of other autistic traits. However, the origin of these perceptual deficits remains largely elusive. Here, we show a recurrent impairment in visual threat perception that is shared across 3 independent mouse models of ASD with different molecular aetiologies. Interestingly, this deficit is associated with reduced avoidance of threatening environments, a nonperceptual trait. Focusing on a common cause of ASDs, the Setd5 gene mutation, we define the molecular mechanism. We show that the perceptual impairment is caused by a potassium channel (Kv1)-mediated hypoexcitability in a subcortical node essential for the initiation of escape responses, the dorsal periaqueductal grey (dPAG). Targeted pharmacological Kv1 blockade rescued both perceptual and place avoidance deficits, causally linking seemingly unrelated trait deficits to the dPAG. Furthermore, we show that different molecular mechanisms converge on similar behavioural phenotypes by demonstrating that the autism models Cul3 and Ptchd1, despite having similar behavioural phenotypes, differ in their functional and molecular alterations. Our findings reveal a link between rapid perception controlled by subcortical pathways and appropriate learned interactions with the environment and define a nondevelopmental source of such deficits in ASD.


Subject(s)
Autism Spectrum Disorder , Avoidance Learning , Disease Models, Animal , Haploinsufficiency , Visual Perception , Animals , Male , Mice , Autism Spectrum Disorder/genetics , Autism Spectrum Disorder/physiopathology , Autistic Disorder/genetics , Autistic Disorder/physiopathology , Avoidance Learning/physiology , Behavior, Animal/physiology , Haploinsufficiency/genetics , Histone-Lysine N-Methyltransferase/genetics , Histone-Lysine N-Methyltransferase/metabolism , Mice, Inbred C57BL , Visual Perception/physiology
8.
PLoS Biol ; 22(7): e3002721, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39008524

ABSTRACT

The abundance of distractors in the world poses a major challenge to our brain's limited processing capacity, but little is known about how selective attention modulates stimulus representations in the brain to reduce interference and support durable target memory. Here, we collected functional magnetic resonance imaging (fMRI) data in a selective attention task in which target and distractor pictures of different visual categories were simultaneously presented. Participants were asked to selectively process the target according to the effective cue, either before the encoding period (i.e., perceptual attention) or the maintenance period (i.e., reflective attention). On the next day, participants were asked to perform a memory recognition task in the scanner in which the targets, distractors, and novel items were presented in a pseudorandom order. Behavioral results showed that perceptual attention was better at enhancing target memory and reducing distractor memory than reflective attention, although the overall memory capacity (memory for both target and distractor) was comparable. Using multivoxel pattern analysis of the neural data, we found more robust target representation and weaker distractor representation in working memory for perceptual attention than for reflective attention. Interestingly, perceptual attention partially shifted the regions involved in maintaining the target representation from the visual cortex to the parietal cortex. Furthermore, the targets and distractors simultaneously presented in the perceptual attention condition showed reduced pattern similarity in the parietal cortex during retrieval compared to items not presented together. This neural pattern repulsion positively correlated with individuals' recognition of both targets and distractors. These results emphasize the critical role of selective attention in transforming memory representations to reduce interference and improve long-term memory performance.


Subject(s)
Attention , Magnetic Resonance Imaging , Memory, Long-Term , Memory, Short-Term , Parietal Lobe , Humans , Attention/physiology , Parietal Lobe/physiology , Male , Memory, Short-Term/physiology , Female , Memory, Long-Term/physiology , Adult , Young Adult , Goals , Brain Mapping , Photic Stimulation/methods , Visual Perception/physiology
9.
Proc Natl Acad Sci U S A ; 121(15): e2310291121, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38564641

ABSTRACT

Humans blink their eyes frequently during normal viewing, more often than it seems necessary for keeping the cornea well lubricated. Since the closure of the eyelid disrupts the image on the retina, eye blinks are commonly assumed to be detrimental to visual processing. However, blinks also provide luminance transients rich in spatial information to neural pathways highly sensitive to temporal changes. Here, we report that the luminance modulations from blinks enhance visual sensitivity. By coupling high-resolution eye tracking in human observers with modeling of blink transients and spectral analysis of visual input signals, we show that blinking increases the power of retinal stimulation and that this effect significantly enhances visibility despite the time lost in exposure to the external scene. We further show that, as predicted from the spectral content of input signals, this enhancement is selective for stimuli at low spatial frequencies and occurs irrespective of whether the luminance transients are actively generated or passively experienced. These findings indicate that, like eye movements, blinking acts as a computational component of a visual processing strategy that uses motor behavior to reformat spatial information into the temporal domain.


Subject(s)
Blinking , Eye Movements , Humans , Photic Stimulation , Visual Perception/physiology , Vision, Ocular
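The spectral argument can be reproduced in a few lines: briefly gating a steady luminance signal, as a blink does, injects temporal power concentrated at low frequencies. The waveform and timings below are illustrative assumptions, not the authors' blink model.

```python
import numpy as np

# Sketch of the spectral logic: a blink briefly gates retinal luminance,
# and that transient adds temporal power at low frequencies.
# Illustrative waveform and numbers only.

fs, dur = 1000, 2.0                        # Hz, seconds
t = np.arange(0, dur, 1 / fs)
luminance = np.ones_like(t)                # steady scene
blink = luminance.copy()
blink[(t > 0.9) & (t < 1.1)] = 0.0         # ~200 ms eyelid closure

def power_spectrum(x):
    x = x - x.mean()
    p = np.abs(np.fft.rfft(x))**2 / len(x)
    return np.fft.rfftfreq(len(x), 1 / fs), p

f, p_steady = power_spectrum(luminance)
_, p_blink = power_spectrum(blink)
low = (f > 0) & (f < 5)                    # low temporal frequencies
print("low-frequency power, steady scene:", p_steady[low].sum())
print("low-frequency power, with blink  :", p_blink[low].sum())
```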
10.
Proc Natl Acad Sci U S A ; 121(27): e2316608121, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38941277

ABSTRACT

Coordination of goal-directed behavior depends on the brain's ability to recover the locations of relevant objects in the world. In humans, the visual system encodes the spatial organization of sensory inputs, but neurons in early visual areas map objects according to their retinal positions, rather than where they are in the world. How the brain computes world-referenced spatial information across eye movements has been widely researched and debated. Here, we tested whether shifts of covert attention are sufficiently precise in space and time to track an object's real-world location across eye movements. We found that observers' attentional selectivity is remarkably precise and is barely perturbed by the execution of saccades. Inspired by recent neurophysiological discoveries, we developed an observer model that rapidly estimates the real-world locations of objects and allocates attention within this reference frame. The model recapitulates the human data and provides a parsimonious explanation for previously reported phenomena in which observers allocate attention to task-irrelevant locations across eye movements. Our findings reveal that visual attention operates in real-world coordinates, which can be computed rapidly at the earliest stages of cortical processing.


Subject(s)
Attention , Saccades , Humans , Attention/physiology , Saccades/physiology , Adult , Male , Female , Visual Perception/physiology , Visual Fields/physiology , Models, Neurological , Photic Stimulation/methods
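The core arithmetic of such an observer model is simple: a world-referenced locus equals retinal position plus gaze position, so attention can stay on an object across saccades by shifting its retinal locus opposite to each eye movement. A minimal sketch with hypothetical coordinates; the authors' full model additionally handles timing and noise.

```python
import numpy as np

# Minimal sketch of the remapping arithmetic: world = retina + gaze, so the
# attended retinal locus shifts opposite to each saccade while the
# world-referenced locus stays fixed. Toy coordinates only.

world_target = np.array([6.0, 2.0])   # deg, fixed location in the scene
gaze = np.array([0.0, 0.0])

def retinal_locus(world, gaze):
    return world - gaze                # where the object lands on the retina

print("before saccade, attend retinal locus", retinal_locus(world_target, gaze))
gaze = gaze + np.array([5.0, 0.0])     # 5 deg rightward saccade
print("after saccade, attend retinal locus ", retinal_locus(world_target, gaze))
# The world-referenced locus (world_target) never changes.
```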
11.
Proc Natl Acad Sci U S A ; 121(4): e2317773121, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38227668

ABSTRACT

The retina and primary visual cortex (V1) both exhibit diverse neural populations sensitive to diverse visual features. Yet it remains unclear how neural populations in each area partition stimulus space to span these features. One possibility is that neural populations are organized into discrete groups of neurons, with each group signaling a particular constellation of features. Alternatively, neurons could be continuously distributed across feature-encoding space. To distinguish these possibilities, we presented a battery of visual stimuli to the mouse retina and V1 while measuring neural responses with multi-electrode arrays. Using machine learning approaches, we developed a manifold embedding technique that captures how neural populations partition feature space and how visual responses correlate with physiological and anatomical properties of individual neurons. We show that retinal populations discretely encode features, while V1 populations provide a more continuous representation. Applying the same analysis approach to convolutional neural networks that model visual processing, we demonstrate that they partition features much more similarly to the retina, indicating they are more like big retinas than little brains.


Subject(s)
Visual Cortex , Animals , Mice , Visual Cortex/physiology , Visual Perception/physiology , Neural Networks, Computer , Neurons/physiology , Retina/physiology , Photic Stimulation
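One simple way to pose the discrete-versus-continuous question (a stand-in for the paper's manifold embedding technique, not a reimplementation of it) is to cluster each population's feature-tuning vectors and measure how well separated the clusters are. The synthetic data and cluster count below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Discrete groups vs. a continuum: cluster feature-tuning vectors and score
# cluster separation. Synthetic data; illustrative stand-in analysis.

rng = np.random.default_rng(3)
# "Retina-like": tuning vectors fall into tight groups (cell types).
discrete = np.vstack([rng.normal(c, 0.1, size=(50, 5))
                      for c in rng.normal(0, 1, size=(8, 5))])
# "V1-like": tuning vectors spread continuously through feature space.
continuous = rng.normal(0, 1, size=(400, 5))

for name, X in [("discrete", discrete), ("continuous", continuous)]:
    labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
    print(f"{name:10s} silhouette = {silhouette_score(X, labels):.2f}")
```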
12.
Proc Natl Acad Sci U S A ; 121(6): e2306937121, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38285936

ABSTRACT

Visually guided reaching, a regular feature of human life, comprises an intricate neural control task. It includes identifying the target's position in 3D space, passing the representation to the motor system that controls the respective appendages, and adjusting ongoing movements using visual and proprioceptive feedback. Given the complexity of the neural control task, invertebrates, with their numerically constrained central nervous systems, are often considered incapable of this level of visuomotor guidance. Here, we provide mechanistic insights into visual appendage guidance in insects by studying the probing movements of the hummingbird hawkmoth's proboscis as the moth searches for a flower's nectary. We show that visually guided proboscis movements fine-tune the coarse control provided by body movements in flight. By impairing the animals' view of their proboscis, we demonstrate that continuous visual feedback is required and actively sought out to guide this appendage. In doing so, we establish an insect model for the study of neural strategies underlying eye-appendage control in a simple nervous system.


Subject(s)
Movement , Psychomotor Performance , Animals , Humans , Psychomotor Performance/physiology , Movement/physiology , Insecta , Feedback, Sensory/physiology , Visual Perception/physiology
13.
Proc Natl Acad Sci U S A ; 121(35): e2318841121, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39172780

ABSTRACT

Visual cortical neurons show variability in their responses to repeated presentations of a stimulus and a portion of this variability is shared across neurons. Attention may enhance visual perception by reducing shared spiking variability. However, shared variability and its attentional modulation are not consistent within or across cortical areas, and depend on additional factors such as neuronal type. A critical factor that has not been tested is actual anatomical connectivity. We measured spike count correlations among pairs of simultaneously recorded neurons in the primary visual cortex (V1) for which anatomical connectivity was inferred from spiking cross-correlations. Neurons were recorded in monkeys performing a contrast-change discrimination task requiring covert shifts in visual spatial attention. Accordingly, spike count correlations were compared across trials in which attention was directed toward or away from the visual stimulus overlapping recorded neuronal receptive fields. Consistent with prior findings, attention did not significantly alter spike count correlations among random pairings of unconnected V1 neurons. However, V1 neurons connected via excitatory synapses showed a significant reduction in spike count correlations with attention. Interestingly, V1 neurons connected via inhibitory synapses demonstrated high spike count correlations overall that were not modulated by attention. Correlated variability in excitatory circuits also depended upon neuronal tuning for contrast, the task-relevant stimulus feature. These results indicate that shared variability depends on the type of connectivity in neuronal circuits. Also, attention significantly reduces shared variability in excitatory circuits, even when attention effects on randomly sampled neurons within the same area are weak.


Subject(s)
Attention , Macaca mulatta , Neurons , Animals , Attention/physiology , Neurons/physiology , Visual Perception/physiology , Visual Cortex/physiology , Male , Photic Stimulation , Primary Visual Cortex/physiology , Action Potentials/physiology , Synapses/physiology
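The core measurement here, spike count ("noise") correlation, is the Pearson correlation of trial-by-trial spike counts between simultaneously recorded neurons, computed separately per attention condition. The sketch below simulates counts with a shared fluctuation whose gain drops with attention; all numbers are illustrative, not the recorded data.

```python
import numpy as np

# Spike count ("noise") correlation recipe: Pearson correlation of
# trial-by-trial counts between two neurons, per attention condition.
# Simulated counts with shared variability; illustrative only.

rng = np.random.default_rng(4)
n_trials = 200
shared = rng.normal(size=n_trials)            # common trial-by-trial drive

def spike_counts(shared_gain):
    drive = 10 + shared_gain * shared[:, None] + rng.normal(size=(n_trials, 2))
    return np.maximum(drive, 0)               # non-negative counts

for cond, gain in [("attend away", 2.0), ("attend toward", 0.5)]:
    counts = spike_counts(gain)
    r = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]
    print(f"{cond:13s} r_sc = {r:.2f}")
```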
14.
Proc Natl Acad Sci U S A ; 121(24): e2317707121, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38830105

ABSTRACT

Human pose, defined as the spatial relationships between body parts, carries instrumental information supporting the understanding of motion and action of a person. A substantial body of previous work has identified cortical areas responsive to images of bodies and different body parts. However, the neural basis underlying the visual perception of body part relationships has received less attention. To broaden our understanding of body perception, we analyzed high-resolution fMRI responses to a wide range of poses from over 4,000 complex natural scenes. Using ground-truth annotations and an application of three-dimensional (3D) pose reconstruction algorithms, we compared similarity patterns of cortical activity with similarity patterns built from human pose models with different levels of depth availability and viewpoint dependency. Targeting the challenge of explaining variance in complex natural image responses with interpretable models, we achieved statistically significant correlations between pose models and cortical activity patterns (though performance levels are substantially lower than the noise ceiling). We found that the 3D view-independent pose model, compared with two-dimensional models, better captures the activation from distinct cortical areas, including the right posterior superior temporal sulcus (pSTS). These areas, together with other pose-selective regions in the lateral occipitotemporal cortex (LOTC), form a broader, distributed cortical network with greater view-tolerance in more anterior patches. We interpret these findings in light of the computational complexity of natural body images, the wide range of visual tasks supported by pose structures, and possible shared principles for view-invariant processing between articulated objects and ordinary, rigid objects.


Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Male , Female , Adult , Brain/physiology , Brain/diagnostic imaging , Brain Mapping/methods , Visual Perception/physiology , Posture/physiology , Young Adult , Imaging, Three-Dimensional/methods , Photic Stimulation/methods , Algorithms
15.
Proc Natl Acad Sci U S A ; 121(30): e2320378121, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39008675

ABSTRACT

The neuroscientific examination of music processing in audio-visual contexts offers a valuable framework to assess how auditory information influences the emotional encoding of visual information. Using fMRI during naturalistic film viewing, we investigated the neural mechanisms underlying the effect of music on valence inferences during mental state attribution. Thirty-eight participants watched the same short film accompanied by systematically controlled consonant or dissonant music. Subjects were instructed to think about the main character's intentions. The results revealed that increasing levels of dissonance led to more negatively valenced inferences, demonstrating the profound emotional impact of musical dissonance. Crucially, at the neuroscientific level and despite music being the sole manipulation, dissonance evoked the response of the primary visual cortex (V1). Functional/effective connectivity analysis showed a stronger coupling between the auditory ventral stream (AVS) and V1 in response to tonal dissonance and demonstrated the modulation of early visual processing via top-down feedback inputs from the AVS to V1. These V1 signal changes indicate the influence of high-level contextual representations associated with tonal dissonance on early visual cortices, serving to facilitate the emotional interpretation of visual information. Our results highlight the significance of employing systematically controlled music, which can isolate emotional valence from the arousal dimension, to elucidate the brain's sound-to-meaning interface and its distributive crossmodal effects on early visual encoding during naturalistic film viewing.


Subject(s)
Auditory Perception , Emotions , Magnetic Resonance Imaging , Music , Visual Perception , Humans , Music/psychology , Female , Male , Adult , Visual Perception/physiology , Auditory Perception/physiology , Emotions/physiology , Young Adult , Brain Mapping , Acoustic Stimulation , Visual Cortex/physiology , Visual Cortex/diagnostic imaging , Primary Visual Cortex/physiology , Photic Stimulation/methods
16.
J Neurosci ; 44(11)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38331581

ABSTRACT

Microsaccades are small, involuntary eye movements that occur during fixation. Their role is debated, with recent hypotheses proposing a contribution to automatic scene sampling. Microsaccadic inhibition (MSI) refers to the abrupt suppression of microsaccades, typically evoked within 0.1 s after new stimulus onset. The functional significance and neural underpinnings of MSI are subjects of ongoing research. It has been suggested that MSI is a component of the brain's attentional re-orienting network which facilitates the allocation of attention to new environmental occurrences by reducing disruptions or shifts in gaze that could interfere with processing. The extent to which MSI is reflexive or influenced by top-down mechanisms remains debated. We developed a task that examines the impact of auditory top-down attention on MSI, allowing us to disentangle ocular dynamics from visual sensory processing. Participants (N = 24 and 27; both sexes) listened to two simultaneous streams of tones and were instructed to attend to one stream while detecting specific task "targets." We quantified MSI in response to occasional task-irrelevant events presented in both the attended and unattended streams (frequency steps in Experiment 1, omissions in Experiment 2). The results show that initial stages of MSI are not affected by auditory attention. However, later stages (∼0.25 s postevent onset), affecting the extent and duration of the inhibition, are enhanced for sounds in the attended stream compared to the unattended stream. These findings provide converging evidence for the reflexive nature of early MSI stages and robustly demonstrate the involvement of auditory attention in modulating the later stages.


Subject(s)
Eye Movements , Visual Perception , Male , Female , Humans , Visual Perception/physiology , Sensation , Sound , Auditory Perception/physiology
17.
J Neurosci ; 44(13)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38302441

ABSTRACT

Ocular position drifts during gaze fixation are significantly less well understood than microsaccades. We recently identified a short-latency ocular position drift response, of ∼1 min arc amplitude, that is triggered within <100 ms by visual onsets. This systematic eye movement response is feature-tuned and seems to be coordinated with a simultaneous resetting of the saccadic system by visual stimuli. However, much remains to be learned about the drift response, especially for designing better-informed neurophysiological experiments unraveling its mechanistic substrates. Here we systematically tested multiple new feature tuning properties of drift responses. Using highly precise eye tracking in three male rhesus macaque monkeys, we found that drift responses still occur for tiny foveal visual stimuli. Moreover, the responses exhibit size tuning, scaling their amplitude (both up and down) as a function of stimulus size, and they also possess a monotonically increasing contrast sensitivity curve. Importantly, short-latency drift responses still occur for small peripheral visual targets, which additionally introduce spatially directed modulations in drift trajectories toward the appearing peripheral stimuli. Drift responses also remain predominantly upward even for stimuli exclusively located in the lower visual field and even when starting gaze position is upward. When we checked the timing of drift responses, we found it was better synchronized to stimulus-induced saccadic inhibition than to stimulus onset. These results, along with a suppression of drift response amplitudes by peristimulus saccades, suggest that drift responses reflect the rapid impacts of short-latency and feature-tuned visual neural activity on final oculomotor control circuitry in the brain.


Subject(s)
Fixation, Ocular , Vision, Ocular , Animals , Male , Macaca mulatta , Eye Movements , Saccades , Visual Perception/physiology
18.
J Neurosci ; 44(12)2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38316562

ABSTRACT

With every saccadic eye movement, humans bring new information into their fovea to be processed with high visual acuity. Notably, perception is enhanced already before a relevant item is foveated: During saccade preparation, presaccadic attention shifts to the upcoming fixation location, which can be measured via behavioral correlates such as enhanced visual performance or modulations of sensory feature tuning. The coupling between saccadic eye movements and attention is assumed to be robust and mandatory and considered a mechanism facilitating the integration of pre- and postsaccadic information. However, until recently it had not been investigated as a function of saccade direction. Here, we measured contrast response functions during fixation and saccade preparation in male and female observers and found that the pronounced response gain benefit typically elicited by presaccadic attention is selectively lacking before upward saccades at the group level; some observers even showed a cost. Individual observer's sensitivity before upward saccades was negatively related to their amount of surface area in primary visual cortex representing the saccade target, suggesting a potential compensatory mechanism that optimizes the use of the limited neural resources processing the upper vertical meridian. Our results raise the question of how perceptual continuity is achieved and how upward saccades can be accurately targeted despite the lack of (theoretically required) presaccadic attention.


Subject(s)
Eye Movements , Saccades , Male , Female , Humans , Attention/physiology , Fovea Centralis , Visual Perception/physiology , Photic Stimulation
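Contrast response functions of the kind measured here are conventionally fit with the Naka-Rushton equation, R(c) = baseline + r_max * c^n / (c^n + c50^n), where a response-gain effect scales r_max. The sketch below illustrates that convention with hypothetical parameter values; it is not the paper's fitting code.

```python
import numpy as np

# Naka-Rushton contrast response function; a response-gain benefit scales
# r_max (the effect reported as absent before upward saccades).
# Hypothetical parameter values for illustration.

def naka_rushton(c, r_max, c50, n=2.0, baseline=0.0):
    return baseline + r_max * c**n / (c**n + c50**n)

contrast = np.linspace(0.01, 1.0, 5)
fixation = naka_rushton(contrast, r_max=1.0, c50=0.3)
presaccadic = naka_rushton(contrast, r_max=1.3, c50=0.3)  # response gain
for c, a, b in zip(contrast, fixation, presaccadic):
    print(f"c = {c:.2f}  fixation = {a:.2f}  presaccadic = {b:.2f}")
```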
19.
J Neurosci ; 44(8)2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38191569

ABSTRACT

Identifying neural correlates of conscious perception is a fundamental endeavor of cognitive neuroscience. Most studies so far have focused on visual awareness along with trial-by-trial reports of task-relevant stimuli, which can confound neural measures of perceptual awareness with postperceptual processing. Here, we used a three-phase sine-wave speech paradigm that dissociated between conscious speech perception and task relevance while recording EEG in humans of both sexes. Compared with tokens perceived as noise, physically identical sine-wave speech tokens that were perceived as speech elicited a left-lateralized, near-vertex negativity, which we interpret as a phonological version of a perceptual awareness negativity. This response appeared between 200 and 300 ms after token onset and was not present for frequency-flipped control tokens that were never perceived as speech. In contrast, the P3b elicited by task-irrelevant tokens did not significantly differ when the tokens were perceived as speech versus noise and was only enhanced for tokens that were both perceived as speech and relevant to the task. Our results extend the findings from previous studies on visual awareness and speech perception and suggest that correlates of conscious perception, across types of conscious content, are most likely to be found in midlatency negative-going brain responses in content-specific sensory areas.


Subject(s)
Awareness , Speech Perception , Male , Female , Humans , Awareness/physiology , Visual Perception/physiology , Electroencephalography/methods , Speech , Consciousness/physiology
20.
J Neurosci ; 44(10)2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38199864

ABSTRACT

During communication in real-life settings, our brain often needs to integrate auditory and visual information and at the same time actively focus on the relevant sources of information, while ignoring interference from irrelevant events. The interaction between integration and attention processes remains poorly understood. Here, we use rapid invisible frequency tagging and magnetoencephalography to investigate how attention affects auditory and visual information processing and integration, during multimodal communication. We presented human participants (male and female) with videos of an actress uttering action verbs (auditory; tagged at 58 Hz) accompanied by two movie clips of hand gestures on both sides of fixation (attended stimulus tagged at 65 Hz; unattended stimulus tagged at 63 Hz). Integration difficulty was manipulated by a lower-order auditory factor (clear/degraded speech) and a higher-order visual semantic factor (matching/mismatching gesture). We observed an enhanced neural response to the attended visual information during degraded speech compared to clear speech. For the unattended information, the neural response to mismatching gestures was enhanced compared to matching gestures. Furthermore, signal power at the intermodulation frequencies of the frequency tags, indexing nonlinear signal interactions, was enhanced in the left frontotemporal and frontal regions. Focusing on the left inferior frontal gyrus, this enhancement was specific for the attended information, for those trials that benefitted from integration with a matching gesture. Together, our results suggest that attention modulates audiovisual processing and interaction, depending on the congruence and quality of the sensory input.


Subject(s)
Brain , Speech Perception , Humans , Male , Female , Brain/physiology , Visual Perception/physiology , Magnetoencephalography , Speech/physiology , Attention/physiology , Speech Perception/physiology , Acoustic Stimulation , Photic Stimulation
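The logic of intermodulation frequency tagging is easy to demonstrate: linear superposition of two tagged signals yields spectral power only at the tag frequencies, while a nonlinear interaction adds power at combinations such as f1 + f2 and f2 - f1. The sketch below uses the paper's auditory tag (58 Hz) and attended visual tag (65 Hz) with a toy product nonlinearity; the signal model is an illustrative assumption, not the authors' analysis pipeline.

```python
import numpy as np

# Intermodulation logic: a product term between two tagged signals creates
# power at f1 +/- f2, the signature of nonlinear audiovisual interaction.
# Synthetic signals; tag frequencies follow the paper (58 and 65 Hz).

fs, dur = 1000, 10.0
t = np.arange(0, dur, 1 / fs)
f_aud, f_vis = 58.0, 65.0
aud = np.sin(2 * np.pi * f_aud * t)
vis = np.sin(2 * np.pi * f_vis * t)
signal = aud + vis + 0.3 * aud * vis     # toy nonlinear interaction term

freqs = np.fft.rfftfreq(len(t), 1 / fs)
power = np.abs(np.fft.rfft(signal))**2 / len(t)
for f in (f_aud, f_vis, f_vis - f_aud, f_vis + f_aud):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:6.1f} Hz  power = {power[idx]:.1f}")
```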