ABSTRACT
The coordinate frames for color and motion are often defined by three dimensions (e.g., responses from the three types of human cone photoreceptors for color and the three dimensions of space for motion). Does this common dimensionality lead to similar perceptual representations? Here we show that the organizational principles for the representation of hue and motion direction are instead profoundly different. We compared observers' judgments of hue and motion direction using functionally equivalent stimulus metrics, behavioral tasks, and computational analyses, and used the pattern of individual differences to decode the underlying representational structure for these features. Hue judgments were assessed using a standard "hue-scaling" task (i.e., judging the proportion of red/green and blue/yellow in each hue). Motion judgments were measured using a "motion-scaling" task (i.e., judging the proportion of left/right and up/down motion in moving dots). Analyses of the interobserver variability in hue scaling revealed multiple independent factors limited to different local regions of color space. This is inconsistent with the influences across a broad range of hues predicted by conventional color-opponent models. In contrast, variations in motion scaling were characterized by more global factors plausibly related to variation in the relative weightings of the cardinal spatial axes. These results suggest that although the coordinate frames for specifying color and motion share a common dimensional structure, the perceptual coding principles for hue and motion direction are distinct. These differences might reflect a distinction between the computational strategies required for the visual analysis of spatial vs. nonspatial attributes of the world.
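To make the individual-differences logic concrete, here is a minimal Python sketch (not the authors' pipeline) that simulates hue-scaling settings for a group of observers and decomposes the observer-by-hue variability with PCA. Factors whose loadings are confined to a narrow range of hues would point to local organization, whereas broadly tuned loadings would be more consistent with conventional opponent models. All data, parameters, and variable names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_observers, n_hues = 60, 36
hues = np.linspace(0, 2 * np.pi, n_hues, endpoint=False)

# Simulated "proportion red/green" settings: a shared opponent-like template
# plus observer-specific perturbations that are local in hue angle.
template = 0.5 + 0.5 * np.cos(hues)
centers = rng.uniform(0, 2 * np.pi, size=n_observers)
local_bump = np.exp(-0.5 * ((hues[None, :] - centers[:, None]) / 0.4) ** 2)
settings = (template[None, :] + 0.15 * local_bump
            + 0.02 * rng.standard_normal((n_observers, n_hues)))

# Decompose inter-observer variability and ask how broadly each factor
# loads across hue angle (local vs. global structure).
pca = PCA(n_components=5)
pca.fit(settings)
for i, component in enumerate(pca.components_):
    n_strong = np.sum(np.abs(component) > 0.5 * np.abs(component).max())
    print(f"factor {i}: {pca.explained_variance_ratio_[i]:.2f} of variance, "
          f"strong loadings on {n_strong}/{n_hues} hues")
```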
Subjects
Color Perception, Individuality, Humans, Color Perception/physiology, Retinal Cone Photoreceptor Cells/physiology, Benchmarking, Body Weight, Color, Photic Stimulation/methods
ABSTRACT
The human brain exhibits both oscillatory and aperiodic, or 1/f, activity. Although a large body of research has focused on the relationship between brain rhythms and sensory processes, aperiodic activity has often been overlooked as functionally irrelevant. Prompted by recent findings linking aperiodic activity to the balance between neural excitation and inhibition, we investigated its effects on the temporal resolution of perception. We recorded electroencephalography (EEG) from participants (both sexes) during the resting state and a task in which they detected the presence of two flashes separated by variable interstimulus intervals. Two-flash discrimination accuracy typically follows a sigmoid function whose steepness reflects perceptual variability or inconsistent integration/segregation of the stimuli. We found that individual differences in the steepness of the psychometric function correlated with EEG aperiodic exponents over posterior scalp sites. In other words, participants with flatter EEG spectra (i.e., greater neural excitation) exhibited increased sensory noise, resulting in shallower psychometric curves. Our finding suggests that aperiodic EEG is linked to sensory integration processes usually attributed to the rhythmic inhibition of neural oscillations. Overall, this correspondence between aperiodic neural excitation and behavioral measures of sensory noise provides a more comprehensive explanation of the relationship between brain activity and sensory integration and represents an important extension to theories of how the brain samples sensory input over time.
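As a hedged illustration of the analysis logic (synthetic data and hypothetical parameters; the authors may well have used a dedicated spectral-parameterization tool rather than the simple log-log fit shown here), the sketch below estimates an aperiodic exponent from a resting power spectrum, fits a sigmoid to two-flash accuracy as a function of interstimulus interval, and correlates the two across simulated participants.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress, spearmanr

rng = np.random.default_rng(1)

def aperiodic_exponent(freqs, psd):
    """Aperiodic (1/f) exponent as the negative log-log spectral slope."""
    return -linregress(np.log10(freqs), np.log10(psd)).slope

def sigmoid(isi, midpoint, steepness):
    """Two-flash discrimination accuracy vs. interstimulus interval."""
    return 1.0 / (1.0 + np.exp(-steepness * (isi - midpoint)))

freqs = np.linspace(2, 40, 100)          # Hz
isis = np.linspace(10, 120, 12)          # ms

exponents, steepnesses = [], []
for _ in range(40):                      # hypothetical participants
    true_exp = rng.uniform(0.8, 2.0)
    psd = freqs ** (-true_exp) * np.exp(0.05 * rng.standard_normal(freqs.size))
    exponents.append(aperiodic_exponent(freqs, psd))

    # Couple steepness to the exponent only so the demo has something to detect.
    true_steepness = 0.02 + 0.05 * (true_exp - 0.8)
    accuracy = sigmoid(isis, 60, true_steepness) + 0.02 * rng.standard_normal(isis.size)
    popt, _ = curve_fit(sigmoid, isis, accuracy, p0=[60, 0.05])
    steepnesses.append(popt[1])

rho, p = spearmanr(exponents, steepnesses)
print(f"aperiodic exponent vs. psychometric steepness: rho = {rho:.2f}, p = {p:.3g}")
```

In this toy version, flatter spectra (smaller exponents) go with shallower psychometric curves by construction, mirroring the direction of the reported correlation.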
Subjects
Electroencephalography, Photic Stimulation, Visual Perception, Humans, Male, Female, Electroencephalography/methods, Adult, Young Adult, Visual Perception/physiology, Photic Stimulation/methods, Brain/physiology
ABSTRACT
Humans make decisions about food every day. The visual system provides important information that forms a basis for these food decisions. Although previous research has focused on visual object and category representations in the brain, it is still unclear how visually presented food is encoded by the brain. Here, we investigate the time-course of food representations in the brain. We used time-resolved multivariate analyses of electroencephalography (EEG) data, obtained from human participants (both sexes), to determine which food features are represented in the brain and whether focused attention is needed for this. We recorded EEG while participants engaged in two different tasks. In one task, the stimuli were task relevant, whereas in the other task, the stimuli were not task relevant. Our findings indicate that the brain can differentiate between food and nonfood items from ~112 ms after the stimulus onset. The neural signal at later latencies contained information about food naturalness, how much the food was transformed, as well as the perceived caloric content. This information was present regardless of the task. Information about whether food is immediately ready to eat, however, was only present when the food was task relevant and presented at a slow presentation rate. Furthermore, the recorded brain activity correlated with the behavioral responses in an odd-item-out task. The fast representation of these food features, along with the finding that this information is used to guide food categorization decision-making, suggests that these features are important dimensions along which the representation of foods is organized.
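A minimal sketch of time-resolved multivariate decoding, assuming synthetic EEG data and a linear classifier: a separate cross-validated classifier is trained at each time point to separate food from nonfood trials, and the first time point at which accuracy clearly exceeds chance approximates a decoding onset. This illustrates the general approach, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

n_trials, n_channels, n_times = 200, 64, 120    # trials x channels x time samples
labels = rng.integers(0, 2, size=n_trials)      # 0 = nonfood, 1 = food

# Synthetic EEG: a class-specific spatial pattern appears after sample 30.
eeg = rng.standard_normal((n_trials, n_channels, n_times))
time_course = np.zeros(n_times)
time_course[30:] = 0.4
pattern = rng.standard_normal((1, n_channels, 1))
eeg += labels[:, None, None] * time_course[None, None, :] * pattern

# Cross-validated decoding of food vs. nonfood at every time point.
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), eeg[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])

onset = int(np.argmax(accuracy > 0.6))
print(f"decoding first exceeds 60% at sample {onset} (chance = 50%)")
```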
Subjects
Brain, Electroencephalography, Food, Photic Stimulation, Humans, Male, Female, Brain/physiology, Adult, Electroencephalography/methods, Young Adult, Photic Stimulation/methods, Reaction Time/physiology, Time Factors, Attention/physiology, Decision Making/physiology
ABSTRACT
The developed human brain shows remarkable plasticity following perceptual learning, resulting in improved visual sensitivity. However, such improvements commonly require extensive stimulus exposure. Here we show that efficiently enhancing visual perception with minimal stimulus exposure recruits distinct neural mechanisms relative to standard repetition-based learning. Participants (n = 20, 12 women, 8 men) encoded a visual discrimination task, followed by brief memory reactivations of only five trials each performed on separate days, demonstrating improvements comparable with standard repetition-based learning (n = 20, 12 women, 8 men). Reactivation-induced learning engaged increased bilateral intraparietal sulcus (IPS) activity relative to repetition-based learning. Complementary evidence for differential learning processes was further provided by temporal-parietal resting functional connectivity changes, which correlated with behavioral improvements. The results suggest that efficiently enhancing visual perception with minimal stimulus exposure recruits distinct neural processes, engaging higher-order control and attentional resources while leading to similar perceptual gains. These unique brain mechanisms underlying improved perceptual learning efficiency may have important implications for daily life and for clinical conditions requiring relearning following brain damage.
Subjects
Neuronal Plasticity, Visual Perception, Humans, Female, Male, Neuronal Plasticity/physiology, Visual Perception/physiology, Adult, Young Adult, Magnetic Resonance Imaging, Photic Stimulation/methods, Learning/physiology, Brain Mapping, Parietal Lobe/physiology
ABSTRACT
Understanding social interaction requires processing social agents and their relationships. The latest results show that much of this process is visually solved: visual areas can represent multiple people, encoding emergent information about their interaction that is not explained by the response to the individuals alone. A neural signature of this process is an increased response in visual areas to face-to-face (seemingly interacting) people, relative to people presented as unrelated (back-to-back). This effect highlighted a network of visual areas for representing relational information. How is this network organized? Using functional MRI, we measured the brain activity of healthy female and male humans (N = 42) in response to images of two faces or two (head-blurred) bodies, facing toward or away from each other. Taking the facing > non-facing effect as a signature of relation perception, we found that relations between faces and between bodies were coded in distinct areas, mirroring the categorical representation of faces and bodies in the visual cortex. Additional analyses suggest the existence of a third network encoding relations between (nonsocial) objects. Finally, a separate occipitotemporal network showed the generalization of relational information across body, face, and nonsocial object dyads (multivariate pattern classification analysis), revealing shared properties of relations across categories. In sum, beyond single entities, the visual cortex encodes the relations that bind multiple entities into relationships; it does so in a category-selective fashion, thus respecting a general organizing principle of representation in high-level vision. Visual areas encoding visual relational information can reveal the processing of emergent properties of social (and nonsocial) interaction, which trigger inferential processes.
Subjects
Visual Cortex, Humans, Male, Female, Photic Stimulation/methods, Visual Cortex/physiology, Magnetic Resonance Imaging/methods, Human Body, Visual Pattern Recognition/physiology, Brain Mapping/methods, Visual Perception/physiology
ABSTRACT
Over the past two decades, neurophysiological responses in the lateral intraparietal area (LIP) have received extensive study for insight into decision making. In a parallel manner, inferred cognitive processes have enriched interpretations of LIP activity. Because of this bidirectional interplay between physiology and cognition, LIP has served as fertile ground for developing quantitative models that link neural activity with decision making. These models stand as some of the most important frameworks for linking brain and mind, and they are now mature enough to be evaluated in finer detail and integrated with other lines of investigation of LIP function. Here, we focus on the relationship between LIP responses and known sensory and motor events in perceptual decision-making tasks, as assessed by correlative and causal methods. The resulting sensorimotor-focused approach offers an account of LIP activity as a multiplexed amalgam of sensory, cognitive, and motor-related activity, with a complex and often indirect relationship to decision processes. Our data-driven focus on multiplexing (and de-multiplexing) of various response components can complement decision-focused models and provides more detailed insight into how neural signals might relate to cognitive processes such as decision making.
Subjects
Decision Making/physiology, Functional Laterality/physiology, Parietal Lobe/physiology, Cognition/physiology, Humans, Neurological Models, Motion Perception/physiology, Reaction Time/physiology, Visual Perception/physiology
ABSTRACT
Visual hallucinations are a common non-motor feature of Parkinson's disease and have been associated with accelerated cognitive decline, increased mortality and early institutionalisation. Despite their prevalence and negative impact on patient outcomes, the repertoire of treatments aimed at addressing this troubling symptom is limited. Over the last two decades, significant contributions have been made in uncovering the pathological and functional mechanisms of visual hallucinations, bringing us closer to the development of a comprehensive neurobiological framework. Convergent evidence now suggests that degeneration within the central cholinergic system may play a significant role in the genesis and progression of visual hallucinations. Here, we outline how cholinergic dysfunction may serve as a potential unifying neurobiological substrate underlying the multifactorial and dynamic nature of visual hallucinations. Drawing upon previous theoretical models, we explore the impact that alterations in cholinergic neurotransmission have on the core cognitive processes pertinent to abnormal perceptual experiences. We conclude by highlighting that a deeper understanding of cholinergic neurobiology and individual pathophysiology may help to improve established and emerging treatment strategies for the management of visual hallucinations and psychotic symptoms in Parkinson's disease.
ABSTRACT
Whether attention is a prerequisite of perceptual awareness or an independent and dissociable process remains a matter of debate. Importantly, understanding the relation between attention and awareness is probably not possible without taking into account the fact that both are heterogeneous and multifaceted mechanisms. Therefore, the present study tested the impact on visual awareness of two attentional mechanisms proposed by the Posner model: temporal alerting and spatio-temporal orienting. Specifically, we evaluated the effects of attention on the perceptual level, by measuring objective and subjective awareness of a threshold-level stimulus; and on the neural level, by investigating how attention affects two postulated event-related potential correlates of awareness. We found that alerting and orienting mechanisms additively facilitate perceptual consciousness, with activation of the latter resulting in the most vivid awareness. Furthermore, we found that late positivity is unlikely to constitute a neural correlate of consciousness as its amplitude was modulated by both attentional mechanisms, but early visual awareness negativity was independent of the alerting and orienting mechanisms. In conclusion, our study reveals a nuanced relationship between attention and awareness; moreover, by investigating the effect of the alerting mechanism, this study provides insights into the role of temporal attention in perceptual consciousness.
Subjects
Attention, Awareness, Electroencephalography, Evoked Potentials, Visual Perception, Humans, Attention/physiology, Awareness/physiology, Male, Female, Young Adult, Adult, Visual Perception/physiology, Evoked Potentials/physiology, Photic Stimulation/methods, Space Perception/physiology, Consciousness/physiology, Brain/physiology
ABSTRACT
Past reward associations may be signaled from different sensory modalities; however, it remains unclear how different types of reward-associated stimuli modulate sensory perception. In this human fMRI study (female and male participants), a visual target was simultaneously presented with either an intra- (visual) or a cross-modal (auditory) cue that was previously associated with rewards. We hypothesized that, depending on the sensory modality of the cues, distinct neural mechanisms underlie the value-driven modulation of visual processing. Using a multivariate approach, we confirmed that reward-associated cues enhanced the target representation in early visual areas and identified the brain valuation regions. Then, using an effective connectivity analysis, we tested three possible patterns of connectivity that could underlie the modulation of the visual cortex: a direct pathway from the frontal valuation areas to the visual areas, a mediated pathway through the attention-related areas, and a mediated pathway that additionally involved sensory association areas. We found evidence for the third model, demonstrating that the reward-related information in both sensory modalities is communicated across the valuation and attention-related brain regions. Additionally, the superior temporal areas were recruited when reward was cued cross-modally. The strongest dissociation between the intra- and cross-modal reward-driven effects was observed at the level of the feedforward and feedback connections of the visual cortex estimated from the winning model. These results suggest that, in the presence of previously rewarded stimuli from different sensory modalities, a combination of domain-general and domain-specific mechanisms is recruited across the brain to adjust visual perception.
SIGNIFICANCE STATEMENT Reward has a profound effect on perception, but it is not known whether shared or disparate mechanisms underlie the reward-driven effects across sensory modalities. In this human fMRI study, we examined the reward-driven modulation of the visual cortex by visual (intra-modal) and auditory (cross-modal) reward-associated cues. Using a model-based approach to identify the most plausible pattern of inter-regional effective connectivity, we found that higher-order areas involved in valuation and attentional processing were recruited by both types of rewards. However, the pattern of connectivity between these areas and the early visual cortex was distinct between the intra- and cross-modal rewards. This evidence suggests that, to effectively adapt to the environment, reward signals may recruit both domain-general and domain-specific mechanisms.
Subjects
Visual Cortex, Visual Perception, Humans, Male, Female, Attention, Brain, Ocular Vision, Auditory Perception, Photic Stimulation/methods, Acoustic Stimulation/methods
ABSTRACT
A prominent theoretical framework spanning philosophy, psychology, and neuroscience holds that selective attention penetrates early stages of perceptual processing to alter the subjective visual experience of behaviorally relevant stimuli. For example, searching for a red apple at the grocery store might make the relevant color appear brighter and more saturated compared with seeing the exact same red apple while searching for a yellow banana. In contrast, recent proposals argue that data supporting attention-related changes in appearance reflect decision- and motor-level response biases without concurrent changes in perceptual experience. Here, we tested these accounts by evaluating attentional modulations of EEG responses recorded from male and female human subjects while they compared the perceived contrast of attended and unattended visual stimuli rendered at different levels of physical contrast. We found that attention enhanced the amplitude of the P1 component, an early evoked potential measured over visual cortex. A linking model based on signal detection theory suggests that response gain modulations of the P1 component track attention-induced changes in perceived contrast as measured with behavior. In contrast, attentional cues induced changes in the baseline amplitude of posterior alpha band oscillations (~9-12 Hz), an effect that best accounts for cue-induced response biases, particularly when no stimuli are presented or when competing stimuli are similar and decisional uncertainty is high. The observation of dissociable neural markers that are linked to changes in subjective appearance and response bias supports a more unified theoretical account and demonstrates an approach to isolate subjective aspects of selective information processing.
SIGNIFICANCE STATEMENT Does attention alter visual appearance, or does it simply induce response bias? In the present study, we examined these competing accounts using EEG and linking models based on signal detection theory. We found that response gain modulations of the visually evoked P1 component best accounted for attention-induced changes in visual appearance. In contrast, cue-induced baseline shifts in alpha band activity better explained response biases. Together, these results suggest that attention concurrently impacts visual appearance and response bias, and that these processes can be experimentally isolated.
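The distinction drawn above can be illustrated with a small signal-detection simulation (all parameters are hypothetical, not the authors' fitted model): a multiplicative response-gain change boosts the attended stimulus representation and therefore shifts comparative contrast judgments only when a stimulus is present, whereas an additive, stimulus-independent baseline shift biases responses even when no stimulus is shown.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 20000
noise_sd = 1.0

def p_choose_attended(gain=1.0, baseline=0.0, signal=1.0):
    """Comparative judgment: which side has higher contrast?
    gain multiplies the stimulus-driven response on the attended side
    (appearance-like effect); baseline adds a stimulus-independent offset
    (criterion-like effect)."""
    attended = gain * signal + baseline + noise_sd * rng.standard_normal(n_trials)
    unattended = signal + noise_sd * rng.standard_normal(n_trials)
    return np.mean(attended > unattended)

print("no attentional modulation      :", p_choose_attended())
print("response gain (x1.3), stimulus :", p_choose_attended(gain=1.3))
print("baseline shift (+0.3), stimulus:", p_choose_attended(baseline=0.3))
print("response gain, no stimulus     :", p_choose_attended(gain=1.3, signal=0.0))
print("baseline shift, no stimulus    :", p_choose_attended(baseline=0.3, signal=0.0))
```

In this toy model only the baseline shift biases choices when nothing is presented, paralleling the claim that alpha baseline shifts best explain cue-induced biases when no stimuli are shown.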
Subjects
Evoked Potentials, Visual Cortex, Humans, Male, Female, Uncertainty, Cognition, Cues (Psychology), Visual Cortex/physiology, Visual Perception/physiology, Photic Stimulation/methods, Electroencephalography
ABSTRACT
Does our perception of an object change once we discover what function it serves? We showed human participants (n = 48, 31 females and 17 males) pictures of unfamiliar objects either together with keywords matching their function, leading to semantically informed perception, or together with nonmatching keywords, resulting in uninformed perception. We measured event-related potentials to investigate at which stages in the visual processing hierarchy these two types of object perception differed from one another. We found that semantically informed compared with uninformed perception was associated with larger amplitudes in the N170 component (150-200 ms), reduced amplitudes in the N400 component (400-700 ms), and a late decrease in alpha/beta band power. When the same objects were presented once more without any information, the N400 and event-related power effects persisted, and we also observed enlarged amplitudes in the P1 component (100-150 ms) in response to objects for which semantically informed perception had taken place. Consistent with previous work, this suggests that obtaining semantic information about previously unfamiliar objects alters aspects of their lower-level visual perception (P1 component), higher-level visual perception (N170 component), and semantic processing (N400 component, event-related power). Our study is the first to show that such effects occur instantly after semantic information has been provided for the first time, without requiring extensive learning.
SIGNIFICANCE STATEMENT There has been a long-standing debate about whether or not higher-level cognitive capacities, such as semantic knowledge, can influence lower-level perceptual processing in a top-down fashion. Here we could show, for the first time, that information about the function of previously unfamiliar objects immediately influences cortical processing within less than 200 ms. Of note, this influence does not require training or experience with the objects and related semantic information. Therefore, our study is the first to show effects of cognition on perception while ruling out the possibility that prior knowledge merely acts by preactivating or altering stored visual representations. Instead, this knowledge seems to alter perception online, thus providing a compelling case against the impenetrability of perception by cognition.
Subjects
Evoked Potentials, Semantics, Humans, Male, Female, Evoked Potentials/physiology, Electroencephalography/methods, Visual Perception/physiology, Learning/physiology
ABSTRACT
Placebo and nocebo effects modulate symptom perception through expectations and learning processes in various domains. Predominantly, their impact has been investigated on pain and physical performance. However, the influence of placebos and nocebos on visual system functionality has yet to be explored. The present study aimed to test whether placebo and nocebo effects can intervene in altering participants' performance outcomes during a novel visual accuracy task and to examine the underlying neural mechanisms through EEG. After a baseline session, visual accuracy was said to be enhanced or disrupted by a sham transcranial electrical stimulation over the occipital lobe. Behavioural results showed a significant increase in visual accuracy for the placebo group, from the baseline session to the test session, whereas the nocebo group showed a decrease in visual accuracy. EEG analyses of the event-related potential P300 component, conducted on both a centro-parietal and a parieto-occipital electrode patch, displayed an increase in P300 amplitude for the placebo group and a decrease in the nocebo group. These findings suggest for the first time that placebo and nocebo effects can influence visual perception and the attentional processes linked to it. Overall, the present study contributes to understanding how expectations affect sensory perception beyond pain and the motor system, paving the way for investigating these phenomena in other sensory modalities such as auditory or olfactory perception.
KEY POINTS: Placebo and nocebo effects have been studied predominantly in pain and motor performance fields. In a novel visual task, the impact of placebo and nocebo effects on the visual system has been evaluated, in both early components (stimulus-related) and late components (attention-related). The placebo group showed an increase in visual accuracy and EEG-evoked potential amplitudes, whereas the nocebo group showed a decrease in both. This study shows how expectations and the related placebo and nocebo effects can shape basic sensory perception of stimuli in the visual domain.
ABSTRACT
Tinnitus is the perception of a continuous sound in the absence of an external source. Although the role of the auditory system is well investigated, there is a gap in our understanding of how multisensory signals are integrated to produce a single percept in tinnitus. Here, we train participants to learn a new sensory environment by associating a cue with a target signal that varies in perceptual threshold. In the test phase, we present only the cue to see whether the person perceives an illusion of the target signal. We perform two separate experiments to observe the behavioral and electrophysiological responses to the learning and test phases in 1) healthy young adults and 2) people with continuous subjective tinnitus and matched control subjects. We observed that in both parts of the study the percentage of false alarms was negatively correlated with the 75% detection threshold. Additionally, the perception of an illusion was accompanied by an increased evoked response potential over frontal regions of the brain. Furthermore, in patients with tinnitus, we observed no significant difference in behavioral or evoked responses in the auditory paradigm, whereas in the visual paradigm these patients were more likely to report false alarms, along with increased evoked activity during the learning and test phases. This emphasizes the importance of the integrity of sensory pathways in multisensory integration and how this process may be disrupted in people with tinnitus. The present study also provides preliminary evidence that tinnitus patients may build stronger perceptual models, a possibility that future studies with larger samples will need to confirm.
NEW & NOTEWORTHY Tinnitus is the continuous phantom perception of a ringing in the ears. Recently, it has been suggested that tinnitus may be a maladaptive inference of the brain to auditory anomalies, whether or not they are detected by an audiogram. The present study presents empirical evidence for this hypothesis by inducing an illusion in a sensory domain that is damaged (auditory) and one that is intact (visual). It also presents novel information about how people with tinnitus process multisensory stimuli in the audio-visual domain.
Subjects
Auditory Perception, Bayes Theorem, Illusions, Tinnitus, Humans, Tinnitus/physiopathology, Pilot Projects, Male, Female, Adult, Auditory Perception/physiology, Illusions/physiology, Visual Perception/physiology, Young Adult, Electroencephalography, Acoustic Stimulation, Cues (Psychology)
ABSTRACT
Stochastic resonance (SR) is the phenomenon wherein the introduction of a suitable level of noise enhances the detection of subthreshold signals in nonlinear systems. It manifests across various physical and biological systems, including the human brain. Psychophysical experiments have confirmed the behavioural impact of stochastic resonance on auditory, somatic, and visual perception. Aging renders the brain more susceptible to noise, possibly causing differences in the SR phenomenon between young and elderly individuals. This study investigates the impact of noise on motion detection accuracy throughout the lifespan, with 214 participants ranging in age from 18 to 82. Our objective was to determine the optimal noise level to induce an SR-like response in both young and old populations. Consistent with existing literature, our findings reveal that the advantage conferred by added noise progressively diminishes with age. Additionally, as individuals age, peak performance is achieved with lower levels of noise. This study provides the first insight into how SR changes across the lifespan of healthy adults and establishes a foundation for understanding the pathological alterations in perceptual processes associated with aging.
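A minimal simulation of the stochastic-resonance principle described above (threshold value, signal strength, and trial counts are arbitrary assumptions): a fixed subthreshold signal is detected only when added noise occasionally pushes it over threshold, and a crude hit-minus-false-alarm index peaks at an intermediate noise level.

```python
import numpy as np

rng = np.random.default_rng(4)

def exceedance_rate(noise_sd, signal, threshold=1.0, n_trials=20000):
    """Fraction of trials on which signal-plus-noise crosses the threshold."""
    samples = signal + noise_sd * rng.standard_normal(n_trials)
    return np.mean(samples > threshold)

noise_levels = np.linspace(0.05, 2.0, 40)
hits = np.array([exceedance_rate(sd, signal=0.8) for sd in noise_levels])
false_alarms = np.array([exceedance_rate(sd, signal=0.0) for sd in noise_levels])
benefit = hits - false_alarms                      # crude detection index

best_sd = noise_levels[np.argmax(benefit)]
print(f"detection benefit peaks at noise SD ~ {best_sd:.2f} (the inverted-U SR signature)")
```

A lower optimal noise level, as reported here for older participants, would correspond to the maximum of this curve shifting toward weaker noise.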
Subjects
Aging, Stochastic Processes, Humans, Adult, Aged, Middle Aged, Male, Young Adult, Female, Adolescent, Aging/physiology, Aged 80 and Over, Motion Perception/physiology, Noise
ABSTRACT
Artificial neural networks (ANNs) based on memristors are currently limited to recognizing static images of objects when simulating the human visual system; they cannot yet perceive higher-dimensional information, and more complex biomimetic functions remain difficult to achieve. In this work, indium gallium zinc oxide (IGZO)/tungsten oxide (WO3-x)-heterostructured artificial optoelectronic synaptic devices that mimic image segmentation and motion capture while exhibiting high-performance optoelectronic synaptic responses are proposed and demonstrated. Under electrical and optical stimulation, the device shows a variety of fundamental and advanced forms of electrical and optical synaptic plasticity. Most importantly, the developed memristor attains outstanding and repeatable linear synaptic weight changes. By taking advantage of these notable linear weight changes, ANNs have been constructed and successfully used to demonstrate two computer-vision applications: image segmentation and object tracking. The accuracy attained by the memristor-based ANNs is similar to that of conventional computer algorithms, while power consumption is reduced by roughly five orders of magnitude. By successfully emulating how the human brain reacts when observing objects, the demonstrated memristor and the related ANNs can be effectively used to construct artificial optoelectronic synaptic devices and show promising potential for emulating human visual perception.
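To illustrate why linear, repeatable weight updates matter for memristor-based ANNs, here is a hedged toy sketch (device level count, weight range, and task are assumptions, not the paper's hardware or benchmark): a small classifier is trained while every weight update is snapped to one of a fixed set of evenly spaced conductance levels, mimicking an ideal linear-weight device.

```python
import numpy as np

rng = np.random.default_rng(5)

n_levels = 64                                   # discrete conductance states per device
levels = np.linspace(-1.0, 1.0, n_levels)       # evenly spaced = ideal linear weight updates

def quantize(w):
    """Snap each weight to the nearest available conductance level."""
    return levels[np.abs(w[:, None] - levels[None, :]).argmin(axis=1)]

# Toy linearly separable task standing in for a small vision sub-problem.
X = rng.standard_normal((500, 8))
y = (X @ rng.standard_normal(8) > 0).astype(float)

w, lr = np.zeros(8), 0.5
for _ in range(200):
    pred = 1.0 / (1.0 + np.exp(-(X @ w)))       # logistic readout
    grad = X.T @ (pred - y) / len(y)
    w = quantize(w - lr * grad)                 # every update constrained to device levels

accuracy = np.mean(((X @ w) > 0) == y.astype(bool))
print(f"accuracy with {n_levels}-level linear weights: {accuracy:.2f}")
```

Nonlinear or noisy devices would blur this mapping between intended and realized weights, which is why repeatable linear conductance changes of the kind reported above are valuable.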
ABSTRACT
Serial dependence is a recently described phenomenon by which the perceptual evaluation of a stimulus is biased by a previously attended one. By integrating stimuli over time, serial dependence is believed to ensure a stable conscious experience. Despite a growing number of studies in humans, it is unknown whether the process also occurs in other species. Here, we assessed whether serial dependence occurs in dogs. To this aim, dogs were trained on a quantity discrimination task before being presented with a discrimination where one of the discriminanda was preceded by a task-irrelevant stimulus. We hypothesized that, if dogs are susceptible to serial dependence, the task-irrelevant stimulus would influence the perception of the subsequently presented quantity. Our results revealed that dogs perceived the currently presented quantity to be closer to the one presented briefly before, in accordance with serial dependence. The direction and strength of the effect were comparable to those observed in humans. Data regarding dogs' attention during the task suggest that dogs used two different quantity estimation mechanisms, an indication that a higher cognitive mechanism is involved in the process. The present results are the first empirical evidence that serial dependence extends beyond humans, suggesting that the mechanism is shared by phylogenetically distant mammals.
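In human studies with continuous responses, serial dependence is often quantified by fitting a derivative-of-Gaussian (DoG) curve to the signed error as a function of the previous-minus-current stimulus difference; the sketch below shows that standard fit on synthetic data. It is an illustration of the conventional analysis, not the procedure used with the dogs in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

def dog(delta, amplitude, width):
    """Derivative-of-Gaussian; its peak height equals `amplitude`."""
    c = np.sqrt(2) / np.exp(-0.5)
    return amplitude * c * width * delta * np.exp(-(width * delta) ** 2)

# Synthetic trials: current-trial errors pulled toward the previous stimulus.
delta = rng.uniform(-60, 60, size=2000)          # previous minus current stimulus value
errors = dog(delta, amplitude=3.0, width=0.03) + 4.0 * rng.standard_normal(delta.size)

(amp_hat, width_hat), _ = curve_fit(dog, delta, errors, p0=[1.0, 0.02])
print(f"fitted attraction amplitude: {amp_hat:.2f}, tuning width parameter: {width_hat:.3f}")
```

A positive fitted amplitude indicates attraction toward the previous stimulus, which is the signature of serial dependence.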
Subjects
Attention, Visual Perception, Animals, Dogs/physiology, Male, Female, Photic Stimulation, Discrimination (Psychology)
ABSTRACT
Despite the intuitive feeling that our visual experience is coherent and comprehensive, the world is full of ambiguous and indeterminate information. Here we explore how the visual system might take advantage of ambient sounds to resolve this ambiguity. Young adults (ns = 20-30) were tasked with identifying an object slowly fading in through visual noise while a task-irrelevant sound played. We found that participants demanded more visual information when the auditory object was incongruent with the visual object compared to when it was not. Auditory scenes, which are only probabilistically related to specific objects, produced similar facilitation even for unheard objects (e.g., a bench). Notably, these effects traverse categorical and specific auditory and visual-processing domains as participants performed across-category and within-category visual tasks, underscoring cross-modal integration across multiple levels of perceptual processing. To summarize, our study reveals the importance of audiovisual interactions to support meaningful perceptual experiences in naturalistic settings.
Subjects
Auditory Perception, Visual Perception, Humans, Auditory Perception/physiology, Young Adult, Adult, Male, Female, Visual Perception/physiology, Noise, Acoustic Stimulation
ABSTRACT
One important role of the temporoparietal junction (TPJ) is its contribution to the perception of global gist in hierarchically organized stimuli, in which individual elements create a global visual percept. However, the link between clinical findings in simultanagnosia and neuroimaging in healthy subjects is missing for real-world global stimuli, like visual scenes. It is well known that hierarchical, global stimuli activate TPJ regions and that simultanagnosia patients show deficits during the recognition of hierarchical stimuli and real-world visual scenes. However, the role of the TPJ in real-world scene processing is entirely unexplored. In the present study, we first localized TPJ regions significantly responding to the global gist of hierarchical stimuli and then investigated the responses to visual scenes, as well as single objects and faces as control stimuli. All three stimulus classes evoked significantly positive univariate responses in the previously localized TPJ regions. In a multivariate analysis, we were able to demonstrate that voxel patterns of the TPJ were classified significantly above chance level for all three stimulus classes. These results demonstrate a significant involvement of the TPJ in the processing of complex visual stimuli that is not restricted to visual scenes, and show that the TPJ is sensitive to different classes of visual stimuli, each with a specific signature of neuronal activations.
Subjects
Magnetic Resonance Imaging, Parietal Lobe, Humans, Parietal Lobe/physiology, Recognition (Psychology), Neuroimaging, Multivariate Analysis, Photic Stimulation, Visual Pattern Recognition/physiology, Visual Perception/physiology, Brain Mapping/methods
ABSTRACT
The question of whether spatial attention can modulate initial afferent activity in area V1, as measured by the earliest visual event-related potential (ERP) component "C1", is still the subject of debate. Because attention always enhances behavioral performance, previous research has focused on finding evidence of attention-related enhancements in visual neural responses. However, recent psychophysical studies revealed a complex picture of attention's influence on visual perception: attention amplifies the perceived contrast of low-contrast stimuli while dampening the perceived contrast of high-contrast stimuli. This evidence suggests that attention may not invariably augment visual neural responses but could instead exert inhibitory effects under certain circumstances. Whether this bi-directional modulation of attention also manifests in C1 and whether the modulation of C1 underpins the attentional influence on contrast perception remain unknown. To address these questions, we conducted two experiments (N = 67 in total) by employing a combination of behavioral and ERP methodologies. Our results did not unveil a uniform attentional enhancement or attenuation effect of C1 across all subjects. However, an intriguing correlation between the attentional effects of C1 and contrast appearance for high-contrast stimuli did emerge, revealing an association between attentional modulation of C1 and the attentional modulation of contrast appearance. This finding offers new insights into the relationship between attention, perceptual experience, and early visual neural processing, suggesting that the attentional effect on subjective visual perception could be mediated by the attentional modulation of the earliest visual cortical response.
Subjects
Electroencephalography, Visual Cortex, Humans, Visual Evoked Potentials, Visual Cortex/physiology, Brain Mapping/methods, Photic Stimulation/methods, Visual Perception/physiology, Evoked Potentials, Attention/physiology
ABSTRACT
Several studies suggest that breathing entrains neural oscillations and thereby improves visual detection and memory performance during nasal inhalation. However, the evidence for this association is mixed, with some studies finding no, minor, or opposite effects. Here, we tested whether nasal breathing phase influences memory of repeated images presented in a rapid serial visual presentation (RSVP) task. The RSVP task is ideal for studying the effects of respiratory-entrained oscillations on visual memory because it engages critical aspects of sensory encoding that depend on oscillatory activity, such as fast processing of natural images, repetition detection, memory encoding, and retrieval. It also enables the presentation of a large number of stimuli during each phase of the breathing cycle. In two separate experiments (n = 72 and n = 142, respectively) where participants were explicitly asked to breathe through their nose, we found that nasal breathing phase at target presentation did not significantly affect memory performance. An exploratory analysis in the first experiment suggested a potential benefit for targets appearing approximately 1 s after inhalation. However, this finding was not replicated in the pre-registered second experiment with a larger sample. Thus, in two large sample experiments, we found no measurable impact of breathing phase on memory performance in the RSVP task. These results suggest that the natural breathing cycle does not have a significant impact on memory for repeated images and raise doubts about the idea that visual memory is broadly affected by the breathing phase.
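For readers unfamiliar with this analysis style, here is a hedged sketch of one common way to test such effects (synthetic respiration trace and outcomes; all parameters hypothetical): instantaneous breathing phase is estimated with a Hilbert transform, each target onset is assigned a phase, and memory accuracy is compared between inhalation- and exhalation-locked targets.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(7)

fs = 100.0                                       # Hz, hypothetical respiration sampling rate
t = np.arange(0, 600, 1 / fs)                    # ten minutes of recording
breathing = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(t.size)

# Instantaneous respiratory phase from the analytic signal (-pi..pi).
phase = np.angle(hilbert(breathing - breathing.mean()))

# Hypothetical target onsets and hit/miss memory outcomes (no effect built in).
onsets = rng.uniform(5, 595, size=400)           # seconds
onset_idx = (onsets * fs).astype(int)
hits = rng.random(400) < 0.75

inhale = np.cos(phase[onset_idx]) > 0            # crude split around the trace's peaks
print(f"accuracy for 'inhale'-locked targets: {hits[inhale].mean():.2f}")
print(f"accuracy for 'exhale'-locked targets: {hits[~inhale].mean():.2f}")
```

With no phase effect built into the synthetic data, the two accuracies differ only by sampling noise, which is the pattern the abstract reports for the real experiments.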