Results 1 - 20 of 380
1.
Cereb Cortex ; 34(1)2024 01 14.
Article in English | MEDLINE | ID: mdl-37991278

ABSTRACT

The hippocampus is largely recognized for its integral contributions to memory processing. By contrast, its role in perceptual processing remains less clear. Hippocampal properties vary along the anterior-posterior (AP) axis. Based on past research suggesting a gradient in the scale of features processed along the AP extent of the hippocampus, the representations have been proposed to vary as a function of granularity along this axis. One way to quantify such granularity is with population receptive field (pRF) size measured during visual processing, which has so far received little attention. In this study, we compare the pRF sizes within the hippocampus to its activation for images of scenes versus faces. We also measure these functional properties in surrounding medial temporal lobe (MTL) structures. Consistent with past research, we find pRFs to be larger in the anterior than in the posterior hippocampus. Critically, our analysis of surrounding MTL regions, the perirhinal cortex, entorhinal cortex, and parahippocampal cortex shows a similar correlation between scene sensitivity and larger pRF size. These findings provide conclusive evidence for a tight relationship between the pRF size and the sensitivity to image content in the hippocampus and adjacent medial temporal cortex.
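For readers unfamiliar with how pRF size is estimated, the sketch below illustrates the generic population receptive field approach: an isotropic 2-D Gaussian is fit to each voxel by comparing its measured time course with the predicted overlap between the Gaussian and the stimulus aperture at each time point, and the fitted width sigma is taken as the pRF size. This is a simplified illustration under assumed inputs (stimulus masks, a candidate parameter grid), not the authors' analysis pipeline.

```python
import numpy as np

def gaussian_prf(xg, yg, x0, y0, sigma):
    """Isotropic 2-D Gaussian receptive field sampled on a visual-field grid."""
    return np.exp(-((xg - x0) ** 2 + (yg - y0) ** 2) / (2 * sigma ** 2))

def fit_prf(voxel_ts, apertures, xg, yg, candidates):
    """Grid-search pRF fit for a single voxel.

    voxel_ts   : (T,) measured response time course
    apertures  : (T, H, W) binary stimulus masks, one per time point
    xg, yg     : (H, W) visual-field coordinates of the grid
    candidates : iterable of (x0, y0, sigma) triplets to evaluate
    Returns the best-fitting (x0, y0, sigma) and its correlation with the data;
    sigma of the winning model is the voxel's estimated pRF size.
    """
    best, best_r = None, -np.inf
    for x0, y0, sigma in candidates:
        rf = gaussian_prf(xg, yg, x0, y0, sigma)
        pred = (apertures * rf).sum(axis=(1, 2))   # stimulus-RF overlap per time point
        r = np.corrcoef(pred, voxel_ts)[0, 1]
        if r > best_r:
            best, best_r = (x0, y0, sigma), r
    return best, best_r
```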


Subjects
Magnetic Resonance Imaging, Temporal Lobe, Magnetic Resonance Imaging/methods, Temporal Lobe/physiology, Hippocampus/physiology, Entorhinal Cortex/physiology, Memory/physiology
2.
J Neurophysiol ; 131(4): 619-625, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38416707

ABSTRACT

To create coherent visual experiences, the brain spatially integrates the complex and dynamic information it receives from the environment. We previously demonstrated that feedback-related alpha activity carries stimulus-specific information when two spatially and temporally coherent naturalistic inputs can be integrated into a unified percept. In this study, we sought to determine whether such integration-related alpha dynamics are triggered by categorical coherence in visual inputs. In an EEG experiment, we manipulated the degree of coherence by presenting pairs of videos from the same or different categories through two apertures in the left and right visual hemifields. Critically, video pairs could be video-level coherent (i.e., stem from the same video), coherent in their basic-level category, coherent in their superordinate category, or incoherent (i.e., stem from videos from two entirely different categories). We conducted multivariate classification analyses on rhythmic EEG responses to decode between the video stimuli in each condition. As the key result, we significantly decoded the video-level coherent and basic-level coherent stimuli, but not the superordinate coherent and incoherent stimuli, from cortical alpha rhythms. This suggests that alpha dynamics play a critical role in integrating information across space, and that cortical integration processes are flexible enough to accommodate information from different exemplars of the same basic-level category.

NEW & NOTEWORTHY: Our brain integrates dynamic inputs across the visual field to create coherent visual experiences. Such integration processes have previously been linked to cortical alpha dynamics. In this study, the integration-related alpha activity was observed not only when snippets from the same video were presented, but also when different video snippets from the same basic-level category were presented, highlighting the flexibility of neural integration processes.
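As a rough illustration of the kind of analysis described (multivariate classification of rhythmic EEG responses), the sketch below extracts alpha-band (8-12 Hz) power per channel and decodes stimulus identity with a cross-validated linear classifier. The band limits, filter order, and classifier are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def alpha_power_features(epochs, fs, band=(8.0, 12.0)):
    """epochs: (n_trials, n_channels, n_samples) EEG. Returns mean alpha power per channel."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    power = np.abs(hilbert(filtered, axis=-1)) ** 2     # instantaneous alpha power
    return power.mean(axis=-1)                          # (n_trials, n_channels)

def decode_stimuli(epochs, labels, fs):
    """Cross-validated decoding of stimulus identity from alpha-band power."""
    X = alpha_power_features(epochs, fs)
    return cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
```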


Subjects
Visual Cortex, Visual Fields, Visual Cortex/physiology, Alpha Rhythm, Brain, Brain Mapping
3.
Proc Natl Acad Sci U S A ; 118(7)2021 02 16.
Article in English | MEDLINE | ID: mdl-33574061

ABSTRACT

In mammals with frontal eyes, optic-nerve fibers from nasal retina project to the contralateral hemisphere of the brain, and fibers from temporal retina project ipsilaterally. The division between crossed and uncrossed projections occurs at or near the vertical meridian. If the division was precise, a problem would arise. Small objects near midline, but nearer or farther than current fixation, would produce signals that travel to opposite hemispheres, making the binocular disparity of those objects difficult to compute. However, in species that have been studied, the division is not precise. Rather, there are overlapping crossed and uncrossed projections such that some fibers from nasal retina project ipsilaterally as well as contralaterally and some from temporal retina project contralaterally as well as ipsilaterally. This increases the probability that signals from an object near vertical midline travel to the same hemisphere, thereby aiding disparity estimation. We investigated whether there is a deficit in binocular vision near the vertical meridian in humans and found no evidence for one. We also investigated the effectiveness of the observed decussation pattern, quantified from anatomical data in monkeys and humans. We used measurements of naturally occurring disparities in humans to determine disparity distributions across the visual field. We then used those distributions to calculate the probability of natural disparities transmitting to the same hemisphere, thereby aiding disparity computation. We found that the pattern of overlapping projections is quite effective. Thus, crossed and uncrossed projections from the retinas are well designed for aiding disparity estimation and stereopsis.


Subjects
Physiological Adaptation, Depth Perception, Retina/physiology, Visual Perception, Adult, Animals, Brain/physiology, Environment, Humans, Macaca mulatta, Male, Visual Pathways/physiology
4.
Sensors (Basel) ; 24(12)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38931621

ABSTRACT

Virtualization plays a critical role in enriching the user experience in Virtual Reality (VR) by offering heightened realism, increased immersion, safer navigation, and newly achievable levels of interaction and personalization, specifically in indoor environments. Traditionally, the creation of virtual content has fallen under one of two broad categories: manual methods crafted by graphic designers, which are labor-intensive and sometimes lack precision, or traditional Computer Vision (CV) and Deep Learning (DL) frameworks that frequently result in semi-automatic and complex solutions, lack a unified framework for both 3D reconstruction and scene understanding, often miss a fully interactive representation of the objects, and neglect their appearance. To address these challenges and limitations, we introduce the Virtual Experience Toolkit (VET), an automated and user-friendly framework that utilizes DL and advanced CV techniques to efficiently and accurately virtualize real-world indoor scenarios. The key features of VET are (i) the use of ScanNotate, a retrieval and alignment tool that improves the precision and efficiency of its precursor through upgrades such as a preprocessing step that makes it fully automatic and a preselection of a reduced list of CAD models to speed up the process, and (ii) a user-friendly, fully automatic Unity3D application that guides users through the whole pipeline and delivers a fully interactive and customizable 3D scene. The efficacy of VET is demonstrated using a diversified dataset of virtualized 3D indoor scenarios, supplementing the ScanNet dataset.

5.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931766

ABSTRACT

Currently, complex scene classification strategies are limited to high-definition image scene sets, and low-quality scene sets are overlooked. Although a few studies have focused on artificially noisy images or specific image sets, none have involved actual low-resolution scene images. Therefore, designing classification models around practicality is of paramount importance. To solve the above problems, this paper proposes a two-stage classification optimization algorithm model based on MPSO, thus achieving high-precision classification of low-quality scene images. Firstly, to verify the rationality of the proposed model, three groups of internationally recognized scene datasets were used to conduct comparative experiments with the proposed model and 21 existing methods. It was found that the proposed model performs better, especially in the 15-scene dataset, with 1.54% higher accuracy than the best existing method ResNet-ELM. Secondly, to prove the necessity of the pre-reconstruction stage of the proposed model, the same classification architecture was used to conduct comparative experiments between the proposed reconstruction method and six existing preprocessing methods on the seven self-built low-quality news scene frames. The results show that the proposed model has a higher improvement rate for outdoor scenes. Finally, to test the application potential of the proposed model in outdoor environments, an adaptive test experiment was conducted on the two self-built scene sets affected by lighting and weather. The results indicate that the proposed model is suitable for weather-affected scene classification, with an average accuracy improvement of 1.42%.
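The abstract does not spell out the MPSO variant used; as background, the base algorithm (particle swarm optimization) can be sketched as below, for example to tune the parameters of a classification stage. The inertia and acceleration coefficients are conventional illustrative values, not the paper's settings.

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimization).

    objective : function mapping a parameter vector to a scalar loss
    bounds    : (dim, 2) array of [low, high] limits per dimension
    Returns the best parameter vector found and its loss.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = bounds.shape[0]

    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```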

6.
Hippocampus ; 33(5): 635-645, 2023 05.
Article in English | MEDLINE | ID: mdl-36762712

ABSTRACT

We consider a model of associative storage and retrieval of compositional memories in an extended cortical network. Our model network comprises Potts units, which represent patches of cortex, interacting through long-range connections. The critical assumption is that a memory, for example of a spatial view, is composed of a limited number of items, each of which has a pre-established representation: storing a new memory only involves acquiring the connections, if novel, among the participating items. The model is shown to have a much lower storage capacity than when it stores simple unitary representations. It is also shown that an input from the hippocampus facilitates associative retrieval. When it is absent, it is advantageous to cue rare rather than frequent items. The implications of these results for emerging trends in empirical research are discussed.


Subjects
Hippocampus, Mental Recall, Neurological Models
7.
Proc Biol Sci ; 290(2011): 20231676, 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38018112

ABSTRACT

The colours of surfaces in a scene may not appear constant with a change in the colour of the illumination. Yet even when colour constancy fails, human observers can usually discriminate changes in lighting from changes in surface reflecting properties. This operational ability has been attributed to the constancy of perceived colour relations between surfaces under illuminant changes, in turn based on approximately invariant spatial ratios of cone photoreceptor excitations. Natural deviations in these ratios may, however, lead to illuminant changes being misidentified. The aim of this work was to test whether such misidentifications occur with natural scenes and whether they are due to failures in relational colour constancy. Pairs of scene images from hyperspectral data were presented side-by-side on a computer-controlled display. On one side, the scene underwent illuminant changes and on the other side, it underwent the same changes but with images corrected for any residual deviations in spatial ratios. Observers systematically misidentified the corrected images as being due to illuminant changes. The frequency of errors increased with the size of the deviations, which were closely correlated with the estimated failures in relational colour constancy.
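The "approximately invariant spatial ratios of cone photoreceptor excitations" mentioned here can be made concrete: for each cone class, the ratio of excitations produced by two surfaces is nearly unchanged when the illuminant changes, and deviations from this invariance are the quantity corrected in the manipulated images. The sketch below computes such ratio deviations from sampled spectra; the spectra and cone fundamentals are assumed inputs, and this is an illustration rather than the authors' image-correction procedure.

```python
import numpy as np

def cone_excitations(reflectance, illuminant, cone_fundamentals):
    """L, M, S excitations from sampled spectra.

    reflectance, illuminant : (n_wavelengths,) spectral samples
    cone_fundamentals       : (3, n_wavelengths) L, M, S sensitivities
    """
    return cone_fundamentals @ (reflectance * illuminant)

def ratio_deviation(refl_a, refl_b, illum_1, illum_2, cone_fundamentals):
    """Deviation of spatial cone-excitation ratios across an illuminant change.

    Perfect relational constancy corresponds to identical ratios for each cone
    class under both illuminants (zero deviation).
    """
    r1 = (cone_excitations(refl_a, illum_1, cone_fundamentals)
          / cone_excitations(refl_b, illum_1, cone_fundamentals))
    r2 = (cone_excitations(refl_a, illum_2, cone_fundamentals)
          / cone_excitations(refl_b, illum_2, cone_fundamentals))
    return np.abs(np.log(r2 / r1))   # per-cone-class log-ratio deviation
```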


Assuntos
Percepção de Cores , Iluminação , Humanos , Cor , Estimulação Luminosa , Células Fotorreceptoras Retinianas Cones
8.
Exp Brain Res ; 241(9): 2345-2360, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37610677

ABSTRACT

Pseudoneglect, that is, the tendency to pay more attention to the left side of space, is typically assessed with paper-and-pencil tasks, particularly line bisection. In the present study, we used an everyday task with more complex stimuli. Subjects' task was to look for pre-specified objects in images of real-world scenes. In half of the scenes, the search object was located on the left side of the image (L-target); in the other half of the scenes, the target was on the right side (R-target). To control for left-right differences in the composition of the scenes, half of the scenes were mirrored horizontally. Eye-movement recordings were used to track the course of pseudoneglect on a millisecond timescale. Subjects' initial eye movements were biased to the left of the scene, but less so for R-targets than for L-targets, indicating that pseudoneglect was modulated by task demands and scene guidance. We further analyzed how horizontal gaze positions changed over time. When the data for L- and R-targets were pooled, the leftward bias lasted, on average, until the first second of the search process came to an end. Even for right-side targets, the gaze data showed an early leftward bias, which was compensated by adjustments in the direction and amplitude of later saccades. Importantly, we found that pseudoneglect affected search efficiency by leading to less efficient scan paths and consequently longer search times for R-targets compared with L-targets. It may therefore be prudent to take spatial asymmetries into account when studying visual search in scenes.


Assuntos
Movimentos Oculares , Movimentos Sacádicos , Humanos
9.
Dev Sci ; 26(6): e13402, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37138516

ABSTRACT

Visual perception in adult humans is thought to be tuned to represent the statistical regularities of natural scenes. For example, in adults, visual sensitivity to different hues shows an asymmetry which coincides with the statistical regularities of colour in the natural world. Infants are sensitive to statistical regularities in social and linguistic stimuli, but whether or not infants' visual systems are tuned to natural scene statistics is currently unclear. We measured colour discrimination in infants to investigate whether or not the visual system can represent chromatic scene statistics in very early life. Our results reveal the earliest association between vision and natural scene statistics that has yet been found: even as young as 4 months of age, colour vision is aligned with the distributions of colours in natural scenes.

RESEARCH HIGHLIGHTS: We find infants' colour sensitivity is aligned with the distribution of colours in the natural world, as it is in adults. At just 4 months, infants' visual systems are tailored to extract and represent the statistical regularities of the natural world. This points to a drive for the human brain to represent statistical regularities even at a young age.

10.
Perception ; 52(4): 238-254, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36788004

ABSTRACT

Categorical color constancy has been widely investigated and found to be very robust. Surface gloss, as one of an object's material properties, has previously been found to contribute little to color constancy under natural viewing conditions. In this study, the effect of surface gloss on categorical color constancy was investigated by asking eight observers to categorize 208 Munsell matte surfaces and 260 Munsell glossy surfaces under D65, F, and TL84 illuminants in a viewing chamber with a uniform gray background. A color constancy index based on the centroid shift of each color category was used to evaluate the degree of color constancy for each category across illumination changes from D65 to the F or TL84 illuminant. Both matte and glossy surfaces showed almost perfect color constancy for all color categories under the F and TL84 illuminants, with no significant difference between them. This result suggests that surface gloss has little effect on categorical color constancy against a uniform gray background where a local surround cue is present, consistent with previous findings.
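The constancy index is not defined in detail in the abstract; one common form for a centroid-based index is a Brunswik-style ratio that compares the observed shift of a category's centroid against the shift expected with zero constancy, as sketched below. The choice of reference points and the Euclidean distance measure are assumptions for illustration, not necessarily the authors' exact definition.

```python
import numpy as np

def constancy_index(centroid_ref, centroid_test, centroid_zero_constancy):
    """Brunswik-style constancy index from category-centroid shifts.

    centroid_ref            : category centroid under the reference illuminant (e.g., D65)
    centroid_test           : centroid measured under the test illuminant (e.g., F or TL84)
    centroid_zero_constancy : centroid predicted if categorization simply followed
                              the illuminant change (i.e., zero constancy)
    Returns 1 for perfect constancy and 0 for a complete failure of constancy.
    """
    observed_shift = np.linalg.norm(np.asarray(centroid_test) - np.asarray(centroid_ref))
    full_shift = np.linalg.norm(np.asarray(centroid_zero_constancy) - np.asarray(centroid_ref))
    return 1.0 - observed_shift / full_shift
```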


Assuntos
Percepção de Cores , Iluminação , Humanos , Estimulação Luminosa , Cor
11.
Mem Cognit ; 2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37770695

ABSTRACT

Searching within natural scenes can induce incidental encoding of information about the scene and the target, particularly when the scene is complex or repeated. However, recent evidence from attribute amnesia (AA) suggests that in some situations, searchers can find a target without building a robust incidental memory of its task-relevant features. Through drawing-based visual recall and an AA search task, we investigated whether search in natural scenes necessitates memory encoding. Participants repeatedly searched for and located an easily detected item in novel scenes for numerous trials before being unexpectedly prompted to draw either the entire scene (Experiment 1) or their search target (Experiment 2) directly after viewing the search image. Naïve raters assessed the similarity of the drawings to the original information. We found that surprise-trial drawings of the scene and search target were both poorly recognizable, but the same drawers produced highly recognizable drawings on the next trial when they had an expectation to draw the image. Experiment 3 further showed that the poor surprise-trial memory could not merely be attributed to interference from the surprising event. Our findings suggest that even for searches done in natural scenes, it is possible to locate a target without creating a robust memory of either it or the scene it was in, even if attended to just a few seconds prior. This disconnection between attention and memory might reflect a fundamental property of cognitive computations designed to optimize task performance and minimize resource use.

12.
Mem Cognit ; 51(2): 349-370, 2023 02.
Article in English | MEDLINE | ID: mdl-36100821

ABSTRACT

In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or a summation of independent retrieval cues. We tested two predictions for audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis shows that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditory encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory. Whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
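One standard way to formalize the "independent retrieval cues" benchmark mentioned here is probability summation: an audio-visual scene is recognized if either the visual or the auditory cue alone would have succeeded. This is a common independence model offered for illustration, not necessarily the exact model fit in the paper (which would also need, for example, a guessing correction).

```python
def independent_cues_prediction(p_visual, p_auditory):
    """Probability-summation prediction for audio-visual recognition, assuming
    the two unimodal retrieval cues succeed or fail independently."""
    return p_visual + p_auditory - p_visual * p_auditory

# Example: 70% visual-only and 55% auditory-only accuracy predict ~86.5% for
# audio-visual scenes; only accuracy reliably above this bound would point to
# genuinely integrated (rather than independent) memory representations.
print(independent_cues_prediction(0.70, 0.55))   # 0.865
```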


Assuntos
Memória de Longo Prazo , Percepção Visual , Humanos , Cognição , Sinais (Psicologia) , Reconhecimento Psicológico
13.
Proc Natl Acad Sci U S A ; 117(47): 29354-29362, 2020 11 24.
Article in English | MEDLINE | ID: mdl-33229533

ABSTRACT

Space-related processing recruits a network of brain regions separate from those recruited in object processing. This dissociation has largely been explored by contrasting views of navigable-scale spaces to views of close-up, isolated objects. However, in naturalistic visual experience, we encounter spaces intermediate to these extremes, like the tops of desks and kitchen counters, which are not navigable but typically contain multiple objects. How are such reachable-scale views represented in the brain? In three human functional neuroimaging experiments, we find evidence for a large-scale dissociation of reachable-scale views from both navigable scene views and close-up object views. Three brain regions were identified that showed a systematic response preference to reachable views, located in the posterior collateral sulcus, the inferior parietal sulcus, and superior parietal lobule. Subsequent analyses suggest that these three regions may be especially sensitive to the presence of multiple objects. Further, in all classic scene and object regions, reachable-scale views dissociated from both objects and scenes with an intermediate response magnitude. Taken together, these results establish that reachable-scale environments have a distinct representational signature from both scene and object views in visual cortex.


Assuntos
Reconhecimento Visual de Modelos/fisiologia , Percepção Espacial/fisiologia , Córtex Visual/fisiologia , Adulto , Mapeamento Encefálico , Feminino , Humanos , Imageamento por Ressonância Magnética , Masculino , Estimulação Luminosa/métodos , Córtex Visual/diagnóstico por imagem
14.
Proc Natl Acad Sci U S A ; 117(24): 13821-13827, 2020 06 16.
Article in English | MEDLINE | ID: mdl-32513698

ABSTRACT

Color ignites visual experience, imbuing the world with meaning, emotion, and richness. As soon as an observer opens their eyes, they have the immediate impression of a rich, colorful experience that encompasses their entire visual world. Here, we show that this impression is surprisingly inaccurate. We used head-mounted virtual reality (VR) to place observers in immersive, dynamic real-world environments, which they naturally explored via saccades and head turns. Meanwhile, we monitored their gaze with in-headset eye tracking and then systematically altered the visual environments such that only the parts of the scene they were looking at were presented in color and the rest of the scene (i.e., the visual periphery) was entirely desaturated. We found that observers were often completely unaware of these drastic alterations to their visual world. In the most extreme case, almost a third of observers failed to notice when less than 5% of the visual display was presented in color. This limitation on perceptual awareness could not be explained by retinal neuroanatomy or previous studies of peripheral visual processing using more traditional psychophysical approaches. In a second study, we measured color detection thresholds using a staircase procedure while a set of observers intentionally attended to the periphery. Still, we found that observers were unaware when a large portion of their field of view was desaturated. Together, these results show that during active, naturalistic viewing conditions, our intuitive sense of a rich, colorful visual world is largely incorrect.
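The staircase procedure used in the second study is a standard adaptive method; a generic 2-down/1-up staircase (converging near 70.7% detection) is sketched below, with the manipulated level taken to be the fraction of the display left desaturated. The step size, reversal count, and stopping rule are illustrative assumptions, not the study's exact settings.

```python
def run_staircase(observer_detects, start_level, step, n_reversals=8, n_down=2,
                  lo=0.0, hi=1.0):
    """Generic n-down/1-up staircase for a detection threshold.

    observer_detects(level) -> bool : True if the manipulation at `level`
        (e.g., the desaturated fraction of the display) is detected on this trial.
    The level decreases after `n_down` consecutive detections and increases after
    any miss; the threshold estimate is the mean level at the reversal points.
    """
    level, correct_streak, reversals = start_level, 0, []
    last_direction = None
    while len(reversals) < n_reversals:
        if observer_detects(level):
            correct_streak += 1
            if correct_streak < n_down:
                continue                        # hold the level until n_down hits in a row
            correct_streak, direction = 0, "down"
            new_level = max(lo, level - step)
        else:
            correct_streak, direction = 0, "up"
            new_level = min(hi, level + step)
        if last_direction is not None and direction != last_direction:
            reversals.append(level)             # record the level at each reversal
        last_direction, level = direction, new_level
    return sum(reversals) / len(reversals)
```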


Assuntos
Cor , Visão Ocular , Adolescente , Adulto , Atenção , Conscientização , Feminino , Humanos , Masculino , Realidade Virtual , Adulto Jovem
15.
Sensors (Basel) ; 23(7)2023 Mar 24.
Article in English | MEDLINE | ID: mdl-37050493

ABSTRACT

This paper addresses the problem of learning depth completion from sparse depth maps and RGB images. Specifically, we describe a real-time unsupervised depth completion method for dynamic scenes guided by a visual-inertial system and confidence estimates. Problems such as occlusion (in dynamic scenes), limited computational resources, and unlabeled training samples are better handled by our method. The core of our method is a new compact network, which uses images, pose, and confidence guidance to perform depth completion. Since visual-inertial information is the only source of supervision, a confidence-guided loss function is specifically designed. In particular, to handle the pixel mismatch caused by object motion and occlusion in dynamic scenes, we divide the images into static, dynamic, and occluded regions and design a loss function matched to each region. Our experimental results on dynamic datasets and real dynamic scenes show that this regularization alone is sufficient to train depth completion models. Our depth completion network exceeds the accuracy achieved in prior work on unsupervised depth completion while requiring only a small number of parameters.
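To make the region-wise supervision concrete, the sketch below combines per-pixel depth errors over static, dynamic, and occluded masks, each weighted by a confidence map. The specific weights and the use of an absolute-error term are assumptions for illustration and not the paper's actual loss terms.

```python
import numpy as np

def masked_depth_loss(pred, target, masks, confidence, weights=(1.0, 0.5, 0.1)):
    """Combine per-pixel depth errors separately over static, dynamic, and
    occluded regions, down-weighted by a per-pixel confidence map.

    pred, target : (H, W) predicted and supervision depth maps
    masks        : dict of boolean (H, W) masks: 'static', 'dynamic', 'occluded'
    confidence   : (H, W) values in [0, 1] marking how reliable supervision is
    weights      : relative weight of each region's loss term (assumed values)
    """
    err = confidence * np.abs(pred - target)
    total = 0.0
    for w, key in zip(weights, ("static", "dynamic", "occluded")):
        region = masks[key]
        if region.any():
            total += w * err[region].mean()
    return total
```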

16.
Sensors (Basel) ; 23(4)2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36850474

ABSTRACT

The end-operation accuracy of a satellite-borne robotic arm is closely related to the satellite attitude control accuracy, and the influence of vibrations of the satellite's flexible structures on attitude control is not negligible. Therefore, a stable and reliable method for identifying the vibration frequencies of the satellite's flexible structures is needed. Unlike traditional non-contact measurement and identification methods for large flexible space structures, which rely on marker points or edge corner points, marker-free approaches based on texture features can identify more feature points, but they suffer from low feature recognition and poor feature matching. Given this, the concept of 'the comprehensive matching parameter' of scenes is proposed to describe the scene characteristics of non-contact optical measurement along the two dimensions of recognition and matching. The basic meaning of the concept and its evaluation index are also given in the paper. Guided by this theory, the recognition accuracy and matching uniqueness of features can be improved by means of an equivalent spatial transformation and a novel relative-position-relationship descriptor. The above problems in non-contact measurement technology can thus be solved through algorithmic improvements alone, without adding hardware devices. On this basis, the Eigensystem Realization Algorithm (ERA) is used to obtain the modal parameters of the large flexible space structure. Finally, the effectiveness and superiority of the proposed method are verified by mathematical simulation and ground testing.
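For reference, the Eigensystem Realization Algorithm named at the end proceeds by stacking impulse-response (Markov parameter) samples into Hankel matrices, taking an SVD, realizing a discrete-time state matrix, and reading modal frequencies and damping from its eigenvalues. A minimal single-input/single-output sketch is shown below; the model order and sampling interval are user-supplied assumptions, and modes appear as complex-conjugate pairs (duplicated frequencies).

```python
import numpy as np

def era_modal_frequencies(h, dt, order):
    """Eigensystem Realization Algorithm (SISO sketch).

    h     : 1-D array of impulse-response Markov parameters Y1, Y2, ...
            (direct-feedthrough term excluded)
    dt    : sampling interval in seconds
    order : assumed model order (number of states, two per structural mode)
    Returns identified natural frequencies (Hz) and damping ratios.
    """
    n = (len(h) - 1) // 2
    H0 = np.array([[h[i + j] for j in range(n)] for i in range(n)])      # Hankel matrix
    H1 = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])  # shifted Hankel
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order, :]
    s_inv_sqrt = np.diag(1.0 / np.sqrt(s))
    A = s_inv_sqrt @ U.T @ H1 @ Vt.T @ s_inv_sqrt        # discrete-time state matrix
    z = np.linalg.eigvals(A).astype(complex)
    lam = np.log(z) / dt                                 # continuous-time eigenvalues
    freqs = np.abs(lam) / (2 * np.pi)
    damping = -np.real(lam) / np.abs(lam)
    return freqs, damping
```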

17.
Sensors (Basel) ; 23(8)2023 Apr 19.
Article in English | MEDLINE | ID: mdl-37112449

ABSTRACT

The posterior-to-anterior shift in aging (PASA) effect is seen as a compensatory model that enables older adults to meet increased cognitive demands and perform comparably to their young counterparts. However, empirical support for the PASA effect based on age-related changes in the inferior frontal gyrus (IFG), hippocampus, and parahippocampus has yet to be established. Thirty-three older adults and 48 young adults were administered tasks sensitive to novelty and relational processing of indoor/outdoor scenes in a 3-Tesla MRI scanner. Functional activation and connectivity analyses were applied to examine age-related changes in the IFG, hippocampus, and parahippocampus among low/high-performing older adults and young adults. Significant parahippocampal activation was generally found in both older (high-performing) and young adults for novelty and relational processing of scenes. Young adults had significantly greater IFG and parahippocampal activation than older adults, and greater parahippocampal activation compared to low-performing older adults for relational processing, providing partial support for the PASA model. Observations of significant functional connectivity within the medial temporal lobe and greater negative left IFG-right hippocampus/parahippocampus functional connectivity for young compared to low-performing older adults for relational processing also partially support the PASA effect.


Assuntos
Mapeamento Encefálico , Lobo Temporal , Adulto Jovem , Humanos , Idoso , Lobo Temporal/fisiologia , Hipocampo , Córtex Pré-Frontal/fisiologia , Envelhecimento/fisiologia , Imageamento por Ressonância Magnética
18.
Sensors (Basel) ; 23(20)2023 Oct 12.
Article in English | MEDLINE | ID: mdl-37896492

ABSTRACT

In the field of intelligent vehicle technology, there is a strong dependence on images captured under challenging conditions to develop robust perception algorithms. However, acquiring these images can be both time-consuming and dangerous. To address this issue, unpaired image-to-image translation models offer a solution by synthesizing samples of the desired domain, thus eliminating the reliance on ground-truth supervision. However, current methods predominantly focus on single projections rather than multiple solutions, and do not control the direction of generation, which leaves scope for enhancement. In this study, we propose a generative adversarial network (GAN)-based model that incorporates both a style encoder and a content encoder, specifically designed to extract relevant information from an image. Further, we employ a decoder to reconstruct an image from these encoded features, while ensuring that the generated output remains within a permissible range by applying a self-regression module to constrain the style latent space. By modifying the hyperparameters, we can generate controllable outputs with specific style codes. We evaluate the performance of our model by generating snow scenes on the Cityscapes and the EuroCity Persons datasets. The results reveal the effectiveness of our proposed methodology, reinforcing the benefits of our approach in the ongoing evolution of intelligent vehicle technology.

19.
Sensors (Basel) ; 23(21)2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37960535

ABSTRACT

Scene classification in autonomous navigation is a highly complex task due to variations, such as light conditions and dynamic objects, in the inspected scenes; it is also a challenge for small-form-factor computers to run modern and highly demanding algorithms. In this contribution, we introduce a novel method for classifying scenes in simultaneous localization and mapping (SLAM) using the boundary object function (BOF) descriptor on RGB-D points. Our method aims to reduce complexity with almost no performance cost. All the BOF-based descriptors from each object in a scene are combined to define the scene class. Instead of traditional image classification methods such as ORB or SIFT, we use the BOF descriptor to classify scenes. Through an RGB-D camera, we capture points and adjust them onto layers that are perpendicular to the camera plane. From each plane, we extract the boundaries of objects such as furniture, ceilings, walls, or doors. The extracted features compose a bag of visual words classified by a support vector machine. The proposed method achieves almost the same accuracy in scene classification as a SIFT-based algorithm and is 2.38× faster. The experimental results demonstrate the effectiveness of the proposed method in terms of accuracy and robustness on the 7-Scenes and SUNRGBD datasets.
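The classification stage described here (per-object descriptors quantized into a visual vocabulary, with scene histograms fed to a support vector machine) follows the usual bag-of-visual-words recipe; a minimal sketch is given below. The BOF descriptor itself is the paper's contribution and is treated here as an already computed feature vector per object; the vocabulary size and SVM kernel are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_vocabulary(train_descriptors, n_words=64, seed=0):
    """Cluster all training descriptors (e.g., per-object boundary descriptors) into visual words."""
    stacked = np.vstack(train_descriptors)          # list of (n_objects_i, d) arrays
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(stacked)

def scene_histogram(descriptors, vocabulary):
    """Represent one scene as a normalized histogram of visual-word occurrences."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train_scene_classifier(train_descriptors, labels, n_words=64):
    """Fit the vocabulary and an SVM over per-scene visual-word histograms."""
    vocab = build_vocabulary(train_descriptors, n_words)
    X = np.array([scene_histogram(d, vocab) for d in train_descriptors])
    clf = SVC(kernel="rbf").fit(X, labels)
    return vocab, clf
```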

20.
Behav Res Methods ; 2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37845424

ABSTRACT

Episodic memory may essentially be memory for one's place within a temporally unfolding scene from a first-person perspective. Given this, pervasively used static stimuli may only capture one small part of episodic memory. A promising approach for advancing the study of episodic memory is immersing participants within varying scenes from a first-person perspective. We present a pool of distinct scene stimuli for use in virtual environments and a paradigm that is implementable across varying levels of immersion on multiple virtual reality (VR) platforms and adaptable to studying various aspects of scene and episodic memory. In our task, participants are placed within a series of virtual environments from a first-person perspective and guided through a virtual tour of scenes during a study phase and a test phase. In the test phase, some scenes share a spatial layout with studied scenes; others are completely novel. In three experiments with varying degrees of immersion, we measure scene recall, scene familiarity-detection during recall failure, the subjective experience of déjà vu, the ability to predict the next turn on a tour, the subjective sense of being able to predict the next turn on a tour, and the factors that influence memory search and the inclination to generate candidate recollective information. The level of first-person immersion mattered to multiple facets of episodic memory. The paradigm presents a useful means of advancing mechanistic understanding of how memory operates in realistic dynamic scene environments, including in combination with cognitive neuroscience methods such as functional magnetic resonance imaging and electrophysiology.
