Results 1 - 20 of 273
1.
Proc Natl Acad Sci U S A ; 120(40): e2211179120, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37769256

ABSTRACT

In modeling vision, there has been remarkable progress in recognizing a range of scene components, but the problem of analyzing full scenes, an ultimate goal of visual perception, is still largely open. To deal with complete scenes, recent work has focused on training models to extract the full graph-like structure of a scene. In contrast with scene graphs, humans' scene perception focuses on selected structures in the scene, starting with a limited interpretation and evolving sequentially in a goal-directed manner [G. L. Malcolm, I. I. A. Groen, C. I. Baker, Trends Cogn. Sci. 20, 843-856 (2016)]. Guidance is crucial throughout scene interpretation since the extraction of a full scene representation is often infeasible. Here, we present a model that performs human-like guided scene interpretation, using iterative bottom-up, top-down processing in a "counterstream" structure motivated by cortical circuitry. The process proceeds by the sequential application of top-down instructions that guide the interpretation process. The results show how scene structures of interest to the viewer are extracted by an automatically selected sequence of top-down instructions. The model shows two further benefits. One is an inherent capability to deal well with the problem of combinatorial generalization, that is, generalizing broadly to unseen scene configurations, which is limited in current network models [B. Lake, M. Baroni, 35th International Conference on Machine Learning, ICML 2018 (2018)]. The second is the ability to combine visual with nonvisual information at each cycle of the interpretation process, which is a key aspect for modeling human perception as well as advancing AI vision systems.


Subjects
Motivation; Visual Perception; Humans; Photic Stimulation/methods; Pattern Recognition, Visual
2.
J Neurosci ; 44(27)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38777600

ABSTRACT

Scene memory is prone to systematic distortions potentially arising from experience with the external world. Boundary transformation, a well-known memory distortion effect along the near-far axis of three-dimensional space, represents the observer's erroneous recall of a scene's viewing distance. Researchers have argued that normalization to a prototypical viewpoint with a high-probability viewing distance underlies this phenomenon. Herein, we hypothesized that a prototypical viewpoint also exists in the vertical angle of view (AOV) dimension and could cause memory distortion along scenes' vertical axis. Human subjects of both sexes were recruited to test this hypothesis, and two behavioral experiments were conducted, revealing a systematic memory distortion in the vertical AOV in both the forced choice (n = 79) and free adjustment (n = 30) tasks. The regression analysis implied that the complexity information asymmetry along scenes' vertical axis and independent subjective AOV ratings from a large set of online participants (n = 1,208) could jointly predict AOV biases. Furthermore, in a functional magnetic resonance imaging experiment (n = 24), we demonstrated the involvement of areas in the ventral visual pathway (V3/V4, PPA, and OPA) in AOV bias judgment. Additionally, in a magnetoencephalography experiment (n = 20), we could significantly decode the subjects' AOV bias judgments ∼140 ms after scene onset, and the low-level visual complexity information around a similar temporal interval. These findings suggest that AOV bias is driven by a normalization process and is associated with neural activity in the early stage of scene processing.


Subjects
Magnetic Resonance Imaging; Humans; Male; Female; Adult; Young Adult; Photic Stimulation/methods; Magnetoencephalography; Memory/physiology; Visual Perception/physiology; Brain Mapping; Space Perception/physiology; Visual Pathways/physiology; Visual Pathways/diagnostic imaging
3.
J Neurosci ; 43(36): 6320-6329, 2023 09 06.
Article in English | MEDLINE | ID: mdl-37580121

ABSTRACT

Recent neural evidence suggests that the human brain contains dissociable systems for "scene categorization" (i.e., recognizing a place as a particular kind of place, for example, a kitchen), including the parahippocampal place area, and "visually guided navigation" (e.g., finding our way through a kitchen, not running into the kitchen walls or banging into the kitchen table), including the occipital place area. However, converging behavioral data - for instance, whether scene categorization and visually guided navigation abilities develop along different timelines and whether there is differential breakdown under neurologic deficit - would provide even stronger support for this two-scene-systems hypothesis. Thus, here we tested scene categorization and visually guided navigation abilities in 131 typically developing children between 4 and 9 years of age, as well as 46 adults with Williams syndrome, a developmental disorder with known impairment on "action" tasks, yet relative sparing on "perception" tasks, in object processing. We found that (1) visually guided navigation is later to develop than scene categorization, and (2) Williams syndrome adults are impaired in visually guided navigation, but not scene categorization, relative to mental age-matched children. Together, these findings provide the first developmental and neuropsychological evidence for dissociable cognitive systems for recognizing places and navigating through them.

SIGNIFICANCE STATEMENT Two decades ago, Milner and Goodale showed us that identifying objects and manipulating them involve distinct cognitive and neural systems. Recent neural evidence suggests that the same may be true of our interactions with our environment: identifying places and navigating through them are dissociable systems. Here we provide converging behavioral evidence supporting this two-scene-systems hypothesis - finding both differential development and breakdown of "scene categorization" and "visually guided navigation." This finding suggests that the division of labor between perception and action systems is a general organizing principle for the visual system, not just a principle of the object processing system in particular.


Subjects
Williams Syndrome; Adult; Child; Humans; Brain Mapping; Pattern Recognition, Visual; Magnetic Resonance Imaging; Cognition; Photic Stimulation
4.
J Neurosci ; 43(31): 5723-5737, 2023 08 02.
Article in English | MEDLINE | ID: mdl-37474310

ABSTRACT

To fluidly engage with the world, our brains must simultaneously represent both the scene in front of us and our memory of the immediate surrounding environment (i.e., local visuospatial context). How does the brain's functional architecture enable sensory and mnemonic representations to closely interface while also avoiding sensory-mnemonic interference? Here, we asked this question using first-person, head-mounted virtual reality and fMRI. Using virtual reality, human participants of both sexes learned a set of immersive, real-world visuospatial environments in which we systematically manipulated the extent of visuospatial context associated with a scene image in memory across three learning conditions, spanning from a single FOV to a city street. We used individualized, within-subject fMRI to determine which brain areas support memory of the visuospatial context associated with a scene during recall (Experiment 1) and recognition (Experiment 2). Across the whole brain, activity in three patches of cortex was modulated by the amount of known visuospatial context, each located immediately anterior to one of the three scene perception areas of high-level visual cortex. Individual subject analyses revealed that these anterior patches corresponded to three functionally defined place memory areas, which selectively respond when visually recalling personally familiar places. In addition to showing activity levels that were modulated by the amount of visuospatial context, multivariate analyses showed that these anterior areas represented the identity of the specific environment being recalled. Together, these results suggest a convergence zone for scene perception and memory of the local visuospatial context at the anterior edge of high-level visual cortex.

SIGNIFICANCE STATEMENT As we move through the world, the visual scene around us is integrated with our memory of the wider visuospatial context. Here, we sought to understand how the functional architecture of the brain enables coexisting representations of the current visual scene and memory of the surrounding environment. Using a combination of immersive virtual reality and fMRI, we show that memory of visuospatial context outside the current FOV is represented in a distinct set of brain areas immediately anterior and adjacent to the perceptually oriented scene-selective areas of high-level visual cortex. This functional architecture would allow efficient interaction between immediately adjacent mnemonic and perceptual areas while also minimizing interference between mnemonic and perceptual representations.


Subjects
Cerebral Cortex; Visual Cortex; Male; Female; Humans; Brain; Visual Cortex/diagnostic imaging; Memory; Magnetic Resonance Imaging/methods; Brain Mapping/methods; Perception; Visual Perception
5.
Eur J Neurosci ; 59(9): 2353-2372, 2024 May.
Article in English | MEDLINE | ID: mdl-38403361

ABSTRACT

Real-world (rw-) statistical regularities, or expectations about the visual world learned over a lifetime, have been found to be associated with scene perception efficiency. For example, good (i.e., highly representative) exemplars of basic scene categories, one example of an rw-statistical regularity, are detected more readily than bad exemplars of the category. Similarly, good exemplars achieve higher multivariate pattern analysis (MVPA) classification accuracy than bad exemplars in scene-responsive regions of interest, particularly in the parahippocampal place area (PPA). However, it is unclear whether the good exemplar advantages observed depend on or are even confounded by selective attention. Here, we ask whether the observed neural advantage of the good scene exemplars requires full attention. We used a dual-task paradigm to manipulate attention and exemplar representativeness while recording neural responses with functional magnetic resonance imaging (fMRI). Both univariate analysis and MVPA were adopted to examine the effect of representativeness. In the attend-to-scenes condition, our results replicated an earlier study showing that good exemplars evoke less activity but a clearer category representation than bad exemplars. Importantly, similar advantages of the good exemplars were also observed when participants were distracted by a serial visual search task demanding a high attention load. In addition, cross-decoding between attended and distracted representations revealed that attention resulted in a quantitative (increased activation) rather than qualitative (altered activity patterns) improvement of the category representation, particularly for good exemplars. We, therefore, conclude that the effect of category representativeness on neural representations does not require full attention.


Subjects
Attention; Magnetic Resonance Imaging; Humans; Attention/physiology; Male; Female; Adult; Magnetic Resonance Imaging/methods; Young Adult; Pattern Recognition, Visual/physiology; Visual Perception/physiology; Brain Mapping/methods; Brain/physiology; Brain/diagnostic imaging
6.
Hum Brain Mapp ; 45(3): e26628, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38376190

ABSTRACT

The recognition and perception of places has been linked to a network of scene-selective regions in the human brain. While previous studies have focussed on functional connectivity between scene-selective regions themselves, less is known about their connectivity with other cortical and subcortical regions in the brain. Here, we determine the functional and structural connectivity profile of the scene network. We used fMRI to examine functional connectivity between scene regions and across the whole brain during rest and movie-watching. Connectivity within the scene network revealed a bias between posterior and anterior scene regions implicated in perceptual and mnemonic aspects of scene perception respectively. Differences between posterior and anterior scene regions were also evident in the connectivity with cortical and subcortical regions across the brain. For example, the Occipital Place Area (OPA) and posterior Parahippocampal Place Area (PPA) showed greater connectivity with visual and dorsal attention networks, while anterior PPA and Retrosplenial Complex showed preferential connectivity with default mode and frontoparietal control networks and the hippocampus. We further measured the structural connectivity of the scene network using diffusion tractography. This indicated both similarities and differences with the functional connectivity, highlighting biases between posterior and anterior regions, but also between ventral and dorsal scene regions. Finally, we quantified the structural connectivity between the scene network and major white matter tracts throughout the brain. These findings provide a map of the functional and structural connectivity of scene-selective regions to each other and the rest of the brain.


Subjects
Brain Mapping; Neocortex; Humans; Magnetic Resonance Imaging; Diffusion Tensor Imaging; Memory
7.
Psychol Sci ; 35(6): 681-693, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38683657

ABSTRACT

As a powerful social signal, a body, face, or gaze facing toward oneself holds an individual's attention. We asked whether, going beyond an egocentric stance, facingness between others has a similar effect and why. In a preferential-looking time paradigm, human adults showed spontaneous preference to look at two bodies facing toward (vs. away from) each other (Experiment 1a, N = 24). Moreover, facing dyads were rated higher on social semantic dimensions, showing that facingness adds social value to stimuli (Experiment 1b, N = 138). The same visual preference was found in juvenile macaque monkeys (Experiment 2, N = 21). Finally, on the human development timescale, this preference emerged by 5 years, although young infants by 7 months of age already discriminate visual scenes on the basis of body positioning (Experiment 3, N = 120). We discuss how the preference for facing dyads-shared by human adults, young children, and macaques-can signal a new milestone in social cognition development, supporting processing and learning from third-party social interactions.


Subjects
Visual Perception; Humans; Animals; Male; Female; Adult; Infant; Visual Perception/physiology; Young Adult; Social Perception; Attention/physiology; Child, Preschool; Social Cognition; Space Perception/physiology; Social Interaction
8.
Cereb Cortex ; 33(16): 9524-9531, 2023 08 08.
Article in English | MEDLINE | ID: mdl-37365829

ABSTRACT

Real-world scenes consist of objects, defined by local information, and scene background, defined by global information. Although objects and scenes are processed in separate pathways in visual cortex, their processing interacts. Specifically, previous studies have shown that scene context makes blurry objects look sharper, an effect that can be observed as a sharpening of object representations in visual cortex from around 300 ms after stimulus onset. Here, we use MEG to show that objects can also sharpen scene representations, with the same temporal profile. Photographs of indoor (closed) and outdoor (open) scenes were blurred such that they were difficult to categorize on their own but easily disambiguated by the inclusion of an object. Classifiers were trained to distinguish MEG response patterns to intact indoor and outdoor scenes, presented in an independent run, and tested on degraded scenes in the main experiment. Results revealed better decoding of scenes with objects than scenes alone and objects alone from 300 ms after stimulus onset. This effect was strongest over left posterior sensors. These findings show that the influence of objects on scene representations occurs at similar latencies as the influence of scenes on object representations, in line with a common predictive processing mechanism.


Subjects
Visual Cortex; Visual Cortex/physiology; Pattern Recognition, Visual/physiology
9.
Cereb Cortex ; 33(24): 11634-11645, 2023 12 09.
Article in English | MEDLINE | ID: mdl-37885126

ABSTRACT

Recognizing a stimulus as familiar is an important capacity in our everyday life. Recent investigation of visual processes has led to important insights into the nature of the neural representations of familiarity for human faces. Still, little is known about how familiarity affects the neural dynamics of non-face stimulus processing. Here we report the results of an EEG study, examining the representational dynamics of personally familiar scenes. Participants viewed highly variable images of their own apartments and unfamiliar ones, as well as personally familiar and unfamiliar faces. Multivariate pattern analyses were used to examine the time course of differential processing of familiar and unfamiliar stimuli. Time-resolved classification revealed that familiarity is decodable from the EEG data similarly for scenes and faces. The temporal dynamics showed delayed onsets and peaks for scenes as compared to faces. Familiarity information, starting at 200 ms, generalized across stimulus categories and led to a robust familiarity effect. In addition, familiarity enhanced category representations in early (250-300 ms) and later (>400 ms) processing stages. Our results extend previous face familiarity results to another stimulus category and suggest that familiarity as a construct can be understood as a general, stimulus-independent processing step during recognition.
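The time-resolved classification described in this abstract can be illustrated with a minimal, self-contained simulation: a leave-one-out nearest-centroid decoder applied separately at each timepoint. This is a simplified stand-in for the study's multivariate pattern analysis; the data, effect size, and onset latency below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: trials x sensors x timepoints, two classes
# (familiar vs unfamiliar). A class difference is injected only
# after "onset" to mimic an effect emerging post-stimulus.
n_trials, n_sensors, n_times, onset = 40, 16, 30, 10
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_sensors, n_times))
X[y == 1, :, onset:] += 0.8  # familiarity signal after onset

def decode_timecourse(X, y):
    """Leave-one-out nearest-centroid decoding at each timepoint."""
    acc = np.zeros(X.shape[2])
    for t in range(X.shape[2]):
        correct = 0
        for i in range(len(y)):
            train = np.arange(len(y)) != i  # hold out trial i
            c0 = X[train & (y == 0), :, t].mean(0)
            c1 = X[train & (y == 1), :, t].mean(0)
            pred = int(np.linalg.norm(X[i, :, t] - c1)
                       < np.linalg.norm(X[i, :, t] - c0))
            correct += pred == y[i]
        acc[t] = correct / len(y)
    return acc

acc = decode_timecourse(X, y)
print(acc[onset:].mean() > acc[:onset].mean())  # prints True
```

Real analyses typically use regularized linear classifiers and cross-validation over many trials, but the logic is the same: accuracy hovers at chance before the effect onset and rises once class information is present in the sensor patterns.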


Subjects
Brain; Facial Recognition; Humans; Recognition, Psychology; Multivariate Analysis; Pattern Recognition, Visual
10.
Cereb Cortex ; 33(9): 5066-5074, 2023 04 25.
Article in English | MEDLINE | ID: mdl-36305640

ABSTRACT

Objects are fundamental to scene understanding. Scenes are defined by embedded objects and how we interact with them. Paradoxically, scene processing in the brain is typically discussed in contrast to object processing. Using the BOLD5000 dataset (Chang et al., 2019), we examined whether objects within a scene predicted the neural representation of scenes, as measured by functional magnetic resonance imaging in humans. Stimuli included 1,179 unique scenes across 18 semantic categories. The object composition of scenes was compared across scene exemplars in different semantic scene categories and, separately, in exemplars of the same scene category. Neural representations in scene- and object-preferring brain regions were significantly related to which objects were in a scene, with the effect at times stronger in the scene-preferring regions. The object model accounted for more variance when comparing scenes within the same semantic category to scenes from different categories. Here, we demonstrate that the function of scene-preferring regions includes the processing of objects. This suggests that visual processing regions may be better characterized by the processes engaged when interacting with a stimulus kind, such as processing groups of objects in scenes or processing a single object in the foreground, rather than by the stimulus kind itself.


Subjects
Brain Mapping; Pattern Recognition, Visual; Humans; Photic Stimulation/methods; Brain; Visual Perception; Magnetic Resonance Imaging
11.
Conscious Cogn ; 122: 103695, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38761426

ABSTRACT

People's memory for scenes has consequences, including for eyewitness testimony. Negative scenes may lead to a particular memory error, where narrowed scene boundaries lead people to recall being closer to a scene than they were. But boundary restriction (including attenuation of the opposite phenomenon, boundary extension) has been difficult to replicate, perhaps because heightened arousal accompanying negative scenes, rather than negative valence itself, drives the effect. Indeed, in Green et al. (2019), arousal alone, conditioned to a particular neutral image category, increased boundary restriction for images in that category. But systematic differences between image categories may have driven these results, irrespective of arousal. Here, we clarify whether boundary restriction stems from the external arousal stimulus or from image category differences. Presenting one image category (everyday-objects), half accompanied by arousal (Experiment 1), and presenting both neutral image categories (everyday-objects, nature) without arousal (Experiment 2), resulted in no difference in boundary judgement errors. These findings suggest that image features, including inherent valence, arousal, and complexity, are not sufficient to induce boundary restriction or reduce boundary extension for neutral images, perhaps explaining why boundary restriction is inconsistently demonstrated in the lab.


Subjects
Arousal; Pattern Recognition, Visual; Humans; Arousal/physiology; Adult; Female; Young Adult; Male; Pattern Recognition, Visual/physiology; Mental Recall/physiology
12.
Perception ; 53(3): 211-214, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38130143

ABSTRACT

For over a quarter-century, the Sky-Tower has dominated the skyline of Auckland Tamaki Makaurau. Despite its imposing height, observers anecdotally report odd fluctuations in how big it appears. From certain angles, it can look positively stumpy. Such misperceptions can be bewildering, and perilous when they happen whilst driving. Here, we characterise this strange illusion in the hopes of better understanding its cause.


Subjects
Illusions; Humans
13.
Mem Cognit ; 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38530622

ABSTRACT

Boundary contraction and extension are two types of scene transformations that occur in memory. In extension, viewers extrapolate information beyond the edges of the image, whereas in contraction, viewers forget information near the edges. Recent work suggests that image composition influences the direction and magnitude of boundary transformation. We hypothesize that selective attention at encoding is an important driver of boundary transformation effects, with selective attention to specific objects at encoding leading to boundary contraction. In this study, one group of participants (N = 36) memorized 15 scenes while searching for targets, while a separate group (N = 36) just memorized the scenes. Both groups then drew the scenes from memory with as much object and spatial detail as they could remember. We asked online workers to provide ratings of boundary transformations in the drawings, as well as how many objects they contained and the precision of remembered object size and location. We found that search condition drawings showed significantly greater boundary contraction than drawings of the same scenes in the memorize condition. Search drawings were significantly more likely to contain target objects, and the likelihood of recalling other objects in the scene decreased as a function of their distance from the target. These findings suggest that selective attention to a specific object due to a search task at encoding will lead to significant boundary contraction.

14.
Neuroimage ; 269: 119935, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36764369

ABSTRACT

Human neuroimaging studies have revealed a dedicated cortical system for visual scene processing. But what is a "scene"? Here, we use a stimulus-driven approach to identify a stimulus feature that selectively drives cortical scene processing. Specifically, using fMRI data from BOLD5000, we examined the images that elicited the greatest response in the cortical scene processing system, and found that there is a common "vertical luminance gradient" (VLG), with the top half of a scene image brighter than the bottom half; moreover, across the entire set of images, VLG systematically increases with the neural response in the scene-selective regions (Study 1). Thus, we hypothesized that VLG is a stimulus feature that selectively engages cortical scene processing, and directly tested the role of VLG in driving cortical scene selectivity using tightly controlled VLG stimuli (Study 2). Consistent with our hypothesis, we found that the scene-selective cortical regions (but not an object-selective region or early visual cortex) responded significantly more to images with VLG than to control stimuli with minimal VLG. Interestingly, such selectivity was also found for images with an "inverted" VLG, resembling the luminance gradient in night scenes. Finally, we also tested the behavioral relevance of VLG for visual scene recognition (Study 3); we found that participants even categorized tightly controlled stimuli of both upright and inverted VLG as a place more than an object, indicating that VLG is also used for behavioral scene recognition. Taken together, these results reveal that VLG is a stimulus feature that selectively engages cortical scene processing, and provide evidence for a recent proposal that visual scenes can be characterized by a set of common and unique visual features.
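The VLG feature described here is simple to compute: the mean luminance of the top half of an image minus that of the bottom half. A minimal sketch (the function name and toy array values are illustrative, not the study's stimuli or exact metric):

```python
import numpy as np

def vertical_luminance_gradient(image):
    """Mean luminance of the top half minus the bottom half.
    Positive values indicate a scene-like 'bright top' profile."""
    h = image.shape[0] // 2
    return image[:h].mean() - image[h:].mean()

# A toy "daytime scene": bright sky over darker ground.
scene = np.vstack([np.full((4, 8), 0.9), np.full((4, 8), 0.2)])
print(round(vertical_luminance_gradient(scene), 2))  # prints 0.7
```

An "inverted" VLG image, as in the night-scene condition, would simply flip the sign: `vertical_luminance_gradient(scene[::-1])` returns -0.7.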


Subjects
Magnetic Resonance Imaging; Visual Perception; Humans; Visual Perception/physiology; Magnetic Resonance Imaging/methods; Recognition, Psychology/physiology; Brain Mapping; Pattern Recognition, Visual/physiology; Photic Stimulation/methods
15.
Hippocampus ; 33(5): 522-532, 2023 05.
Article in English | MEDLINE | ID: mdl-36728411

ABSTRACT

For living organisms, the ability to acquire information regarding the external space around them is critical for future actions. While the information must be stored in an allocentric frame to facilitate its use in various spatial contexts, each case of use requires the information to be represented in a particular self-referenced frame. Previous studies have explored neural substrates responsible for the linkage between self-referenced and allocentric spatial representations based on findings in rodents. However, the behaviors of rodents are different from those of primates in several aspects; for example, rodents mainly explore their environments through locomotion, while primates use eye movements. In this review, we discuss the brain mechanisms responsible for the linkage in nonhuman primates. Based on recent physiological studies, we propose that two types of neural substrates link the first-person perspective with allocentric coding. The first is the view-center background signal, which represents an image of the background surrounding the current position of fixation on the retina. This perceptual signal is transmitted from the ventral visual pathway to the hippocampus (HPC) via the perirhinal cortex and parahippocampal cortex. Because images that share the same objective-position in the environment tend to appear similar when seen from different self-positions, the view-center background signals are easily associated with one another in the formation of allocentric position coding and storage. The second type of neural substrate is the HPC neurons' dynamic activity that translates the stored location memory to the first-person perspective depending on the current spatial context.


Subjects
Memory; Space Perception; Animals; Space Perception/physiology; Memory/physiology; Temporal Lobe/physiology; Primates/physiology; Hippocampus/physiology; Rodents
16.
Vis Neurosci ; 40: E001, 2023 02 08.
Article in English | MEDLINE | ID: mdl-36752177

ABSTRACT

Glaucoma is an eye disease characterized by progressive vision loss, usually starting in peripheral vision. However, a deficit in scene categorization is observed even in the preserved central vision of patients with glaucoma. We assessed the processing and integration of spatial frequencies in the central vision of patients with glaucoma during scene categorization, considering the severity of the disease, in comparison to age-matched controls. In the first session, participants had to categorize scenes filtered in low spatial frequencies (LSFs) and high spatial frequencies (HSFs) as natural or artificial scenes. Results showed that the processing of spatial frequencies was impaired only for patients with severe glaucoma, in particular for HSF scenes. In the light of proactive models of visual perception, we investigated how LSF could guide the processing of HSF in a second session. We presented hybrid scenes (combining LSF and HSF from two scenes belonging to the same or different semantic category). Participants had to categorize the scene filtered in HSF while ignoring the scene filtered in LSF. Surprisingly, results showed that the semantic influence of LSF on HSF was greater for patients with early glaucoma than for controls, and then disappeared for the severe cases. This study shows that progressive destruction of retinal ganglion cells affects spatial frequency processing in central vision. This deficit may, however, be compensated by increased reliance on predictive mechanisms at early stages of the disease, which would decline in more severe cases.


Subjects
Glaucoma; Space Perception; Humans; Reaction Time; Photic Stimulation/methods; Visual Perception; Pattern Recognition, Visual
17.
Cogn Emot ; 37(1): 128-136, 2023 02.
Article in English | MEDLINE | ID: mdl-36537807

ABSTRACT

Boundary extension is a memory phenomenon in which an individual reports seeing more of a scene than they actually did. We provide the first examination of boundary extension in individuals diagnosed with depression, hypothesising that an overemphasis on pre-existing schema may enhance boundary extension effects on emotional photographs. The relationship between boundary extension and overgeneralisation in autobiographical memory was also explored. Individuals with (n = 42) and without (n = 41) Major Depressive Disorder completed a camera paradigm task utilising positive, negative, and neutral stimuli. Across all participants, positive (d = 0.37) and negative (d = 0.66) stimuli were extended more than neutral stimuli. This effect did not differ between depressed and never-depressed participants. Across all participants, images containing objects were extended more than images containing faces. An association was also evident between extension effects in memory for perceptual space and extensions of autobiographical memory across time.


Subjects
Depressive Disorder, Major; Memory, Episodic; Humans; Visual Perception; Emotions
18.
Sensors (Basel) ; 23(9)2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37177619

ABSTRACT

Single-photon avalanche diodes (SPADs) are novel image sensors that record photons at extremely high sensitivity. To reduce both the required sensor area for readout circuits and the data throughput of SPAD arrays, in this paper we propose a snapshot compressive sensing single-photon avalanche diode (CS-SPAD) sensor which can realize on-chip snapshot-type spatial compressive imaging in a compact form. Taking advantage of the digital counting nature of SPAD sensing, we design the circuit connections between the sensing units and the readout electronics for compressive sensing. To process the compressively sensed data, we propose a convolutional neural-network-based algorithm dubbed CSSPAD-Net which can realize both high-fidelity scene reconstruction and classification. To demonstrate our method, we design and fabricate a CS-SPAD sensor chip, build a prototype imaging system, and demonstrate the proposed on-chip snapshot compressive sensing method on the MNIST dataset and real handwritten digit images, with both qualitative and quantitative results.
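The sensing scheme summarized above can be illustrated at a high level: the chip records compressed measurements y = A x of a (sparse) scene, and an algorithm recovers x from fewer measurements than pixels. The sketch below uses a random Gaussian sensing matrix and classical ISTA sparse recovery in place of the paper's on-chip connection pattern and CSSPAD-Net, neither of which is specified in the abstract; sizes and parameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: n pixels compressed to m measurements, k-sparse signal.
n, m, k = 64, 32, 4
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# Random sensing matrix as a stand-in for the chip's SPAD-to-readout
# connection pattern (hypothetical; the abstract does not specify it).
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x  # snapshot compressed measurements, m < n

def ista(A, y, lam=0.05, steps=1000):
    """Iterative shrinkage-thresholding (ISTA) for the lasso problem."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x_hat = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x_hat - A.T @ (A @ x_hat - y) / L           # gradient step
        x_hat = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x_hat

x_hat = ista(A, y)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.3f}")
```

A learned reconstruction network like CSSPAD-Net plays the same role as `ista` here, but amortizes the iterative optimization into a single forward pass and can jointly perform classification.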

19.
Sensors (Basel) ; 24(1)2023 Dec 29.
Article in English | MEDLINE | ID: mdl-38203074

ABSTRACT

Ego-vehicle state prediction is a complex and challenging problem for self-driving and autonomous vehicles. Perception-based solutions use sensor data and on-board cameras to understand the state of the vehicle and the surrounding traffic conditions. Monocular camera-based methods are becoming increasingly popular for driver assistance, and precise prediction of vehicle speed and emergency braking is important for road safety, especially in the prevention of speed-related accidents. In this research paper, we introduce a convolutional neural network (CNN) model tailored to the prediction of vehicle velocity, braking events, and emergency braking, taking image sequences and velocity data as inputs. The CNN model is trained on a dataset of sequences of 20 consecutive images with corresponding velocity values, all obtained from a moving vehicle navigating road-traffic scenarios. The model's primary objective is to predict the current vehicle speed, braking actions, and the occurrence of an emergency-brake situation from the information encoded in the preceding 20 frames. We evaluate the proposed model on this dataset using regression and classification metrics, and compare it with previously published work based on recurrent neural networks (RNNs). By improving the prediction accuracy for velocity, braking behavior, and emergency-brake events, we contribute to road safety and offer insights for the development of perception-based techniques in the field of autonomous vehicles.
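The training setup described above pairs each time step with its preceding 20 frames and velocities. A minimal sketch of that sliding-window sample construction is shown below; the frame contents, variable names, and target encoding are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of building training samples as described in the abstract:
# each sample pairs the preceding 20 frames (and velocity values)
# with the current regression/classification targets.

SEQ_LEN = 20  # number of consecutive frames per sample

def make_samples(frames, velocities, brake_flags):
    """Return (frame_window, velocity_window, speed_target, brake_target) tuples."""
    samples = []
    for t in range(SEQ_LEN, len(frames)):
        samples.append((
            frames[t - SEQ_LEN:t],      # 20 consecutive images
            velocities[t - SEQ_LEN:t],  # matching velocity values
            velocities[t],              # regression target: current speed
            brake_flags[t],             # classification target: braking now?
        ))
    return samples

# Toy sequence of 25 time steps with placeholder frame identifiers.
frames = [f"img_{i}" for i in range(25)]
vels = [float(i) for i in range(25)]
brakes = [i % 5 == 0 for i in range(25)]

samples = make_samples(frames, vels, brakes)
print(len(samples))  # 5 samples from 25 steps with a 20-frame window
```

Each tuple would then be fed to the CNN, with the two target fields driving the regression and classification heads respectively.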

20.
Neuroimage ; 260: 119506, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35878724

ABSTRACT

Research on face perception has revealed highly specialized visual mechanisms such as configural processing, and provided markers of interindividual differences (including disease risks and alterations) in visuo-perceptual abilities that traffic in social cognition. Is face perception unique in the degree or kind of its mechanisms, and in its relevance for social cognition? Combining functional MRI and behavioral methods, we address the processing of an uncharted class of socially relevant stimuli: minimal social scenes involving configurations of two bodies spatially close and face-to-face as if interacting (hereafter, facing dyads). We report category-specific activity for facing (vs. non-facing) dyads in visual cortex. That activity shows face-like signatures of configural processing (i.e., stronger response to facing vs. non-facing dyads, and greater susceptibility to stimulus inversion for facing vs. non-facing dyads), and is predicted by performance-based measures of configural processing in visual perception of body dyads. Moreover, we observe that individual performance in body-dyad perception is reliable, stable over time, and correlated with individual social sensitivity, coarsely captured by the Autism-Spectrum Quotient. Further analyses clarify the relationship between single-body and body-dyad perception. We propose that facing dyads are processed through highly specialized mechanisms (and brain areas), analogously to other biologically and socially relevant stimuli such as faces. Like face perception, facing-dyad perception can reveal basic (visual) processes that lay the foundations for understanding others, their relationships, and their interactions.


Subjects
Facial Recognition , Visual Cortex , Brain/physiology , Humans , Pattern Recognition, Visual/physiology , Social Perception , Visual Perception/physiology