ABSTRACT
INTRODUCTION/OBJECTIVE: Treatment results in anorexia nervosa (AN) are modest, and fear of weight gain is a strong predictor of treatment outcome and relapse. Here, we present a virtual reality (VR) setup for exposure to a healthy weight and evaluate its potential as an adjunct treatment for AN. METHODS: In two studies, we investigated the VR experience and the clinical effects of VR exposure to a higher body weight in 20 women with high weight or shape concern and in 20 women with AN. RESULTS: In study 1, 90% of participants (18/20) reported symptoms of high arousal but verbalized low to medium levels of fear. Study 2 demonstrated that VR exposure to a healthy weight induced high arousal in patients with AN and showed a trend for four sessions of exposure to improve fear of weight gain. Explorative analyses revealed three clusters of individual reactions to exposure, which require further exploration. CONCLUSIONS: VR exposure is a well-accepted and powerful tool for evoking fear of weight gain in patients with AN. We observed a statistical trend, with large effect sizes, that repeated virtual exposure to a healthy weight improved fear of weight gain. Further studies are needed to determine the mechanisms and differential effects.
Subjects
Anorexia Nervosa , Virtual Reality , Humans , Female , Anorexia Nervosa/therapy , Fear , Treatment Outcome , Weight Gain
ABSTRACT
Participants performed a visual-vestibular motor recalibration task in virtual reality. The task consisted of keeping the extended arm and hand stable in space during a whole-body rotation induced by a robotic wheelchair. Performance was first quantified in a pre-test in which no visual feedback was available during the rotation. During the subsequent adaptation phase, optical flow resulting from body rotation was provided. This visual feedback was manipulated to create the illusion of a smaller rotational movement than actually occurred, thereby altering the visual-vestibular mapping. The effects of the adaptation phase on hand-stabilization performance were measured during a post-test that was identical to the pre-test. Three groups of subjects were exposed to different perspectives on the visual scene, i.e., first-person, top view, or mirror view. Sensorimotor adaptation occurred for all three viewpoint conditions, with post-test performance showing marked under-compensation relative to pre-test performance. In other words, all viewpoints gave rise to a remapping between vestibular input and the motor output required to stabilize the arm. Furthermore, adaptation in the first-person and mirror views induced a significant decrease in the variability of stabilization performance. No such reduction was observed for the top-view adaptation. These results suggest that, although all three viewpoints can evoke substantial adaptation aftereffects, the more naturalistic first-person view and the richer mirror view should be preferred when reducing motor variability is an important goal.
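The gain manipulation described in this abstract can be made concrete with a short sketch: the visual scene is rotated by only a fraction of the physical wheelchair rotation, producing the visual-vestibular mismatch used during adaptation. The gain value and sampling parameters below are illustrative assumptions, not values reported in the study.

```python
import numpy as np

def visual_yaw(physical_yaw_deg, gain=0.7):
    """Return the yaw angle used to render the visual scene.

    A gain below 1 makes the optical flow consistent with a smaller
    rotation than the one the vestibular system senses, i.e., the kind
    of visual-vestibular mismatch described in the abstract. The gain
    value here is illustrative, not taken from the study.
    """
    return gain * physical_yaw_deg

# Example: a 90-degree whole-body rotation sampled over 3 seconds
t = np.linspace(0.0, 3.0, 270)
physical = 90.0 * t / t[-1]          # linear ramp of the chair's yaw angle
rendered = visual_yaw(physical)      # what the subject sees (smaller rotation)

print(f"physical end angle: {physical[-1]:.1f} deg, "
      f"rendered end angle: {rendered[-1]:.1f} deg")
```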
Subjects
Adaptation, Physiological/physiology , Movement/physiology , Psychomotor Performance/physiology , Vestibule, Labyrinth/physiology , Visual Perception/physiology , Adult , Calibration , Female , Humans , Male , Middle Aged , Orientation , Rotation , Statistics as Topic , Surveys and Questionnaires , Time Factors , User-Computer Interface , Young Adult
ABSTRACT
It has been suggested that the vestibular system not only plays a role in our sense of balance and postural control but may also modulate higher-order body representations, such as the perceived shape and size of our body. Recent findings using virtual reality (VR) to realistically manipulate the length of whole extremities of first-person biometric avatars under vestibular stimulation did not support this assumption. It has been argued that these negative findings were due to the availability of visual feedback on the subjects' virtual arms and legs. The present study tested this hypothesis by removing that visual feedback. A newly recruited group of healthy subjects adjusted the position of blocks in the 3D space of a VR scenario until they felt they could just touch them with their left or right hand or heel. Caloric vestibular stimulation did not alter the perceived size of the subjects' own extremities. The findings suggest that vestibular signals do not serve to scale the internal representation of (large parts of) our body's metric properties. This stands in clear contrast to the egocentric representation of our body midline, which allows us to perceive and adjust the position of our body with respect to the surroundings. These two qualia appear to belong to different systems of body representation in humans.
Subjects
Body Image , Caloric Tests/methods , Photic Stimulation/methods , Vestibule, Labyrinth/physiology , Adolescent , Adult , Body Size , Extremities , Female , Healthy Volunteers , Humans , Male , Virtual Reality , Young Adult
ABSTRACT
Creating metrically accurate avatars is important for many applications, such as virtual clothing try-on, ergonomics, medicine, immersive social media, telepresence, and gaming. Creating avatars that precisely represent a particular individual is challenging, however, due to the need for expensive 3D scanners, privacy issues with photographs or videos, and the difficulty of making accurate tailoring measurements. We overcome these challenges by creating "The Virtual Caliper", which uses VR game controllers to make simple measurements. First, we establish which body measurements users can reliably make on their own body. We find several distance measurements to be good candidates and then verify that these are linearly related to 3D body shape as represented by the SMPL body model. The Virtual Caliper enables novice users to accurately measure themselves and create an avatar with their own body shape. We evaluate the metric accuracy relative to ground-truth 3D body scan data, compare the method quantitatively to other avatar creation tools, and perform extensive perceptual studies. We also provide a software application to the community that enables novices to create avatars in less than five minutes. Not only is our approach faster than existing methods, but it also exports a metrically accurate 3D avatar model that is rigged and skinned.
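The reported linear relationship between self-made distance measurements and SMPL body shape suggests a simple regression pipeline, sketched below on synthetic data. The measurement set, the number of shape coefficients, and the least-squares mapping are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical training data: for each of N subjects we have a few
# controller-to-controller distance measurements (e.g., height, arm span,
# hip width) and the SMPL shape coefficients (betas) fit to their 3D scan.
rng = np.random.default_rng(0)
N, n_meas, n_betas = 200, 4, 10
measurements = rng.normal(size=(N, n_meas))          # standardized measurements
true_W = rng.normal(size=(n_meas, n_betas))          # synthetic linear relation
betas = measurements @ true_W + 0.05 * rng.normal(size=(N, n_betas))

# Because the relation is (approximately) linear, a least-squares fit
# maps measurements to shape coefficients.
X = np.hstack([measurements, np.ones((N, 1))])       # add bias column
W, *_ = np.linalg.lstsq(X, betas, rcond=None)        # (n_meas + 1) x n_betas

def measurements_to_betas(m):
    """Predict SMPL shape coefficients from a user's self-made measurements."""
    return np.append(m, 1.0) @ W

predicted = measurements_to_betas(measurements[0])
print(predicted.shape)   # (10,) shape coefficients for the avatar
```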
Subjects
Imaging, Three-Dimensional/methods , Virtual Reality , Anthropometry/methods , Body Image , Body Size , Computer Graphics , Computer Systems , Female , Humans , Male , Self Concept , Software , User-Computer Interface
ABSTRACT
BACKGROUND AND PURPOSE: Vestibular input is projected to the "multisensory (vestibular) cortex", where it converges with input from other sensory modalities. It has been assumed that this multisensory integration enables a continuous perception of the state and presence of one's own body. The present study therefore asked whether vestibular stimulation impacts this perception. METHODS: We used an immersive virtual reality setup to realistically manipulate the length of the extremities of first-person biometric avatars. Twenty-two healthy participants had to adjust the avatar's arms and legs to their correct length from various starting lengths before, during, and after vestibular stimulation. RESULTS: Neither unilateral caloric nor galvanic vestibular stimulation had a modulating effect on the perceived size of the participants' own extremities. CONCLUSION: Our results suggest that vestibular stimulation does not directly influence the explicit somatosensory representation of our body. It is possible that, in healthy subjects without brain damage, changes in whole-body size perception are, in principle, not mediated by vestibular information. Alternatively, visual feedback and/or memory may dominate multisensory integration and thereby override any modulation of body perception by vestibular stimulation. The present observations suggest that multisensory integration, and not the processing of a single sensory input, is the crucial mechanism in generating our body representation in relation to the external world.
Subjects
Body Image/psychology , Vestibule, Labyrinth/physiology , Visual Perception/physiology , Adolescent , Adult , Arm/anatomy & histology , Biometry , Female , Humans , Leg/anatomy & histology , Male , Phantoms, Imaging , Photic Stimulation , Somatosensory Cortex/physiology , Virtual Reality , Young Adult
ABSTRACT
Humans can recognize emotions expressed through body motion with high accuracy, even when the stimuli are impoverished. However, most of the research on body motion has relied on exaggerated displays of emotion. In this paper we present two experiments in which we investigated whether emotional body expressions could be recognized when they were recorded during natural narration. Our actors were free to use their entire body, face, and voice to express emotions, but our resulting visual stimuli used only the upper-body motion trajectories in the form of animated stick figures. Observers were asked to perform an emotion recognition task on short motion sequences using a large and balanced set of emotions (amusement, joy, pride, relief, surprise, anger, disgust, fear, sadness, shame, and neutral). Even with only upper-body motion available, our results show recognition accuracy significantly above chance level and high consistency among observers. In our first experiment, which used a more classic emotion-induction setup, all emotions were well recognized. In the second experiment, which employed narrations, four basic emotion categories (joy, anger, fear, and sadness), three non-basic emotion categories (amusement, pride, and shame), and the "neutral" category were recognized above chance. Interestingly, especially in the second experiment, observers showed a bias toward anger when judging the motion sequences. We found that similarities between motion sequences across emotions in properties such as mean motion speed, number of peaks in the motion trajectory, and mean motion span explain a large proportion of the variation in observers' responses. Overall, our results show that upper-body motion is informative for emotion recognition in narrative scenarios.
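The motion properties named in this abstract (mean speed, number of peaks, motion span) can be computed with a few lines of array code. The sketch below is a minimal illustration on synthetic joint trajectories; the exact feature definitions and preprocessing used in the study may differ.

```python
import numpy as np
from scipy.signal import find_peaks

def motion_features(trajectory, fps=60.0):
    """Compute simple descriptors of an upper-body motion trajectory.

    `trajectory` is a (T, J, 3) array of J joint positions over T frames.
    The three features mirror the properties named in the abstract; their
    exact definitions in the paper may differ.
    """
    vel = np.diff(trajectory, axis=0) * fps            # (T-1, J, 3) velocities
    speed = np.linalg.norm(vel, axis=-1)               # per-joint speed
    mean_speed = speed.mean()

    # Count peaks in the joint-averaged speed profile as a rough measure
    # of how many distinct movement bursts the sequence contains.
    peaks, _ = find_peaks(speed.mean(axis=1))
    n_peaks = len(peaks)

    # Span: average spatial extent covered by each joint over the sequence.
    extent = trajectory.max(axis=0) - trajectory.min(axis=0)   # (J, 3)
    mean_span = np.linalg.norm(extent, axis=-1).mean()

    return mean_speed, n_peaks, mean_span

# Toy example with a random walk standing in for recorded stick-figure joints
traj = np.cumsum(np.random.default_rng(1).normal(scale=0.01, size=(300, 8, 3)), axis=0)
print(motion_features(traj))
```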