1.
PLoS One ; 11(3): e0151487, 2016.
Article in English | MEDLINE | ID: mdl-26990298

ABSTRACT

Virtual reality (VR) has emerged as a promising tool in many domains of therapy and rehabilitation, and has recently attracted the attention of researchers and clinicians working with elderly people with mild cognitive impairment (MCI), Alzheimer's disease and related disorders. Here we present a study testing the feasibility of using highly realistic image-based rendered VR with patients with MCI and dementia. We designed an attentional task to train selective and sustained attention, and tested a VR version and a paper version of this task in a single-session within-subjects design. Participants with MCI and dementia reported being highly satisfied with and interested in the task, with a strong sense of security and low discomfort, anxiety and fatigue. In addition, participants preferred the VR condition to the paper condition, even though the task was more difficult. Interestingly, apathetic participants showed a stronger preference for the VR condition than non-apathetic participants. These findings suggest that VR-based training is a promising tool for improving adherence to cognitive training in elderly people with cognitive impairment.


Subjects
Cognitive Dysfunction/psychology, Dementia/psychology, Virtual Reality Exposure Therapy/methods, Aged, Aged, 80 and over, Apathy, Attention, Cognitive Dysfunction/therapy, Dementia/therapy, Feasibility Studies, Female, Humans, Male, Middle Aged, Psychomotor Performance, Self Report, Surveys and Questionnaires, Time Factors
2.
Neuropsychiatr Dis Treat ; 11: 557-63, 2015.
Article in English | MEDLINE | ID: mdl-25834437

ABSTRACT

BACKGROUND: Virtual reality (VR) opens up a vast number of possibilities in many domains of therapy. The primary objective of the present study was to evaluate the acceptability for elderly subjects of a VR experience using the image-based rendering virtual environment (IBVE) approach; the secondary objective was to test the hypothesis that visual cues using VR may enhance the generation of autobiographical memories. METHODS: Eighteen healthy volunteers (mean age 68.2 years) presenting memory complaints, with a Mini-Mental State Examination score higher than 27 and no history of neuropsychiatric disease, were included. Participants were asked to perform an autobiographical fluency task in four conditions. The first condition was a baseline grey screen; the second was a photograph of a well-known location in the participant's home city (FamPhoto); and the last two conditions displayed VR, i.e., a familiar image-based virtual environment (FamIBVE), consisting of an image-based representation of a known landmark square in the center of the city of experimentation (Nice), and an unknown image-based virtual environment (UnknoIBVE), captured in a public housing neighborhood with unrecognizable building fronts. After each of the four experimental conditions, participants filled in self-report questionnaires assessing task acceptability (levels of emotion, motivation, security, fatigue, and familiarity). CyberSickness and Presence questionnaires were also administered after the two VR conditions. Autobiographical memory was assessed using a verbal fluency task, and quality of recollection was assessed using the "remember/know" procedure. RESULTS: All subjects completed the experiment. Sense of security and fatigue did not differ significantly between the conditions with and without VR. The FamPhoto condition yielded a higher emotion score than the other conditions (P<0.05). The CyberSickness questionnaire showed that participants did not experience sickness during the experiment across the VR conditions. VR stimulated autobiographical memory, as demonstrated by the increased total number of responses on the autobiographical fluency task and the increased number of conscious recollections of memories for familiar versus unknown scenes (P<0.01). CONCLUSION: The study indicates that VR using the FamIBVE system is well tolerated by the elderly. VR can also stimulate recollection of autobiographical memories and convey familiarity of a given scene, an essential requirement for the use of VR in reminiscence therapy.

3.
Multisens Res ; 26(4): 347-70, 2013.
Article in English | MEDLINE | ID: mdl-24319928

ABSTRACT

In a natural environment, affective information is perceived via multiple senses, mostly audition and vision. However, the impact of multisensory information on affect remains relatively unexplored. In this study, we investigated whether the auditory-visual presentation of aversive stimuli influences the experience of fear. We used the advantages of virtual reality to manipulate multisensory presentation and to display potentially fearful dog stimuli embedded in a natural context. We manipulated the affective reactions evoked by the dog stimuli by recruiting two groups of participants: dog-fearful and non-fearful. Sensitivity to dog fear was assessed psychometrically by questionnaire and, at the behavioral and subjective levels, using a Behavioral Avoidance Test (BAT). Participants navigated virtual environments in which they encountered virtual dog stimuli presented through the auditory channel, the visual channel, or both, and were asked to report their fear using Subjective Units of Distress. We compared fear for unimodal (visual or auditory) and bimodal (auditory-visual) dog stimuli. Both dog-fearful and non-fearful participants reported more fear in response to bimodal audiovisual than to unimodal presentation of dog stimuli. These results suggest that fear is more intense when affective information is processed via multiple sensory pathways, which might be due to cross-modal potentiation. Our findings have implications for virtual reality-based therapy of phobias: therapies could be refined and improved by manipulating the multisensory presentation of the feared situations.


Subjects
Auditory Perception/physiology, Fear/physiology, Reaction Time/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Adult, Animals, Dogs, Female, Humans, Male, Photic Stimulation/methods, User-Computer Interface
4.
ACM Trans Graph ; 32(4)2013 Jul 04.
Article in English | MEDLINE | ID: mdl-24273376

ABSTRACT

Image-based rendering (IBR) creates realistic images by enriching simple geometries with photographs, e.g., mapping the photograph of a building façade onto a plane. However, as soon as the viewer moves away from the correct viewpoint, the retinal image becomes distorted, sometimes leading to gross misperceptions of the original geometry. Two hypotheses from vision science describe how viewers perceive such image distortions: one claims that they can compensate for them (and therefore perceive scene geometry reasonably correctly), the other that they cannot (and therefore perceive rather significant distortions). We modified the latter hypothesis so that it extends to street-level IBR. We then conducted a rigorous experiment that measured the magnitude of perceptual distortions occurring with IBR for façade viewing, and a rating experiment that assessed the acceptability of the distortions. The results of the two experiments were consistent with one another: viewers' percepts are indeed distorted, but not as severely as predicted by the modified vision science hypothesis. From our experimental results, we develop a predictive model of distortion for street-level IBR, which we use to provide guidelines for the acceptability of virtual views and for capture camera density. We performed a confirmatory study to validate our predictions, and illustrate their use with an application that guides users in IBR navigation to stay in regions where virtual views yield acceptable perceptual distortions.

5.
Cyberpsychol Behav Soc Netw ; 16(2): 145-52, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23425570

ABSTRACT

Traditionally, virtual reality (VR) exposure-based treatment concentrates primarily on presenting a high-fidelity visual experience. However, adequately combining the visual and auditory experience provides a powerful tool to enhance sensory processing and modulate attention. We present the design and usability testing of an auditory-visual interactive environment for investigating VR exposure-based treatment for cynophobia (dog phobia). A distinctive feature of our application is 3D sound, allowing the presentation and spatial manipulation of a fearful stimulus in both the auditory and the visual modality. We conducted an evaluation test with 10 participants who fear dogs to assess the capacity of our auditory-visual virtual environment (VE) to generate fear reactions. The perceptual characteristics of the dog model implemented in the VE were highly arousing, suggesting that VR is a promising tool to treat cynophobia.


Subjects
Computer Simulation, Fear/psychology, Phobic Disorders/therapy, User-Computer Interface, Animals, Dogs, Humans, Phobic Disorders/psychology
6.
IEEE Trans Vis Comput Graph ; 19(2): 210-24, 2013 Feb.
Article in English | MEDLINE | ID: mdl-22508899

ABSTRACT

Intrinsic image decomposition aims at separating an image into its reflectance and illumination components to facilitate further analysis or manipulation. This separation is severely ill-posed, and the most successful methods rely on user indications or precise geometry to resolve the inherent ambiguities. In this paper, we propose a method to estimate intrinsic images from multiple views of an outdoor scene without the need for precise geometry, with only a few manual steps to calibrate the input. We use multiview stereo to automatically reconstruct a 3D point cloud of the scene. Although this point cloud is sparse and incomplete, we show that it provides the information needed to compute plausible sky and indirect illumination at each 3D point. We then introduce an optimization method to estimate sun visibility over the point cloud. This algorithm compensates for the lack of accurate geometry and allows the extraction of precise shadows in the final image. Finally, we propagate the information computed over the sparse point cloud to every pixel in the photograph using image-guided propagation. Our propagation not only separates reflectance from illumination, but also decomposes the illumination into sun, sky, and indirect layers. This rich decomposition allows novel image manipulations, as demonstrated by our results.
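The layered decomposition described in this abstract can be illustrated with a toy per-pixel sketch. This assumes a Lambertian model with additive illumination layers; it is not the paper's multiview pipeline, and all function names and values here are hypothetical:

```python
# Toy intrinsic-image model: pixel = reflectance * (sun + sky + indirect).
# A per-pixel illustration of the layered decomposition only.

def compose(reflectance, sun, sky, indirect):
    """Combine a reflectance value with three illumination layers."""
    return reflectance * (sun + sky + indirect)

def relight(pixel, reflectance, old_sun, new_sun):
    """Swap the sun layer while keeping reflectance and the other layers."""
    total = pixel / reflectance          # recover sun + sky + indirect
    return reflectance * (total - old_sun + new_sun)

# A pixel with reflectance 0.5 lit by sun=0.8, sky=0.3, indirect=0.1:
pixel = compose(0.5, 0.8, 0.3, 0.1)                       # 0.5 * 1.2 = 0.6
# Put the same pixel in shadow by zeroing the sun layer:
shadowed = relight(pixel, 0.5, old_sun=0.8, new_sun=0.0)  # 0.5 * 0.4 = 0.2
```

Once such a per-pixel decomposition is available, edits like shadow removal or sun recoloring reduce to modifying one layer and recomposing.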

7.
Stud Health Technol Inform ; 181: 238-42, 2012.
Article in English | MEDLINE | ID: mdl-22954863

ABSTRACT

Cynophobia (dog phobia) has relevant components in both the visual and the auditory modality. To investigate the efficacy of virtual reality (VR) exposure-based treatment for cynophobia, we studied how effectively auditory-visual environments generate presence and emotion. We conducted an evaluation test with healthy participants sensitive to cynophobia to assess the capacity of auditory-visual virtual environments (VEs) to generate fear reactions. Our application combines high-fidelity visual stimulation displayed in an immersive space with 3D sound, which enables us to present and spatially manipulate fearful stimuli in the auditory modality, the visual modality, or both. Our presentation of animated dog stimuli creates a highly arousing environment, suggesting that VR is a promising tool for cynophobia treatment and that manipulating auditory-visual integration might provide a way to modulate affect.


Subjects
Desensitization, Psychologic/methods, Dogs, Emotions, Phobic Disorders/rehabilitation, User-Computer Interface, Acoustic Stimulation, Animals, Fear, Female, Humans, Male, Photic Stimulation, Statistics, Nonparametric, Surveys and Questionnaires
8.
IEEE Trans Vis Comput Graph ; 18(4): 546-54, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22402681

ABSTRACT

Immersive spaces such as 4-sided displays with stereo viewing and high-quality tracking provide a very engaging and realistic virtual experience. However, walking is inherently limited by the restricted physical space, due both to the screens (limited translation) and to the missing back screen (limited rotation). In this paper, we propose three novel locomotion techniques with three concurrent goals: keep the user safe from reaching the translational and rotational boundaries; increase the amount of real walking; and provide a more enjoyable and ecological interaction paradigm than traditional controller-based approaches. Notably, we introduce the "Virtual Companion", which uses a small bird to guide the user through VEs larger than the physical space. We evaluate the three new techniques in a user study with travel-to-target and path-following tasks. The study provides insight into the relative strengths of each technique for the three goals: if speed and accuracy are paramount, traditional controller interfaces augmented with our novel warning techniques may be more appropriate; if physical walking matters more, two of our paradigms (extended Magic Barrier Tape and Constrained Wand) should be preferred; and for fun and ecological criteria, the Virtual Companion is favored.


Subjects
User-Computer Interface, Walking, Workplace, Computer Graphics, Humans, Space Perception
9.
IEEE Trans Vis Comput Graph ; 17(10): 1459-74, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21041884

ABSTRACT

We present an image-based approach to relighting photographs of tree canopies. Our goal is to minimize capture overhead; thus the only input required is a set of photographs of the tree taken at a single time of day, while allowing relighting at any other time. We first analyze lighting in a tree canopy both theoretically and using simulations. From this analysis, we observe that tree canopy lighting is similar to volumetric illumination. We assume a single-scattering volumetric lighting model for tree canopies, and diffuse leaf reflectance; we validate our assumptions with synthetic renderings. We create a volumetric representation of the tree from 10-12 images taken at a single time of day and use a single-scattering participating media lighting model. An analytical sun and sky illumination model provides consistent representation of lighting for the captured input and unknown target times. We relight the input image by applying a ratio of the target and input time lighting representations. We compute this representation efficiently by simultaneously coding transmittance from the sky and to the eye in spherical harmonics. We validate our method by relighting images of synthetic trees and comparing to path-traced solutions. We also present results for photographs, validating with time-lapse ground truth sequences.
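The relighting step described above, applying a ratio of the target and input lighting representations, can be sketched in scalar form. This toy assumes a single lighting value per pixel, whereas the paper encodes sky and eye transmittance in spherical harmonics; names and numbers are hypothetical:

```python
# Ratio-based relighting: the captured pixel is multiplied by the ratio of
# the lighting representation at the target time to that at capture time.

def relight_pixel(captured, lighting_capture, lighting_target):
    """Relight one pixel by the target/capture lighting ratio."""
    return captured * (lighting_target / lighting_capture)

# A canopy pixel photographed at noon (lighting value 1.0), relit to a dimmer
# late-afternoon lighting value of 0.25:
noon_pixel = 0.8
relit = relight_pixel(noon_pixel, 1.0, 0.25)  # 0.8 * 0.25 = 0.2
```

Because reflectance cancels in the ratio, only the lighting representations at the two times are needed, which is why a consistent analytical sun-and-sky model for both the capture and target times is essential.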
