ABSTRACT
Humans are the most versatile tool users among animals. Accordingly, our manual skills evolved alongside the shape of the hand. In the future, further evolution may take place: humans may merge with their tools, and technology may integrate into our biology in a way that blurs the line between the two. The question, then, is whether humans can embody a bionic tool (i.e., experience it as part of their body) and, if so, whether this would affect behavior. Using virtual reality, we investigated how substituting the hand with a virtually grafted end-effector, either non-naturalistic (a bionic tool) or naturalistic (a hand), impacts embodiment and behavior. Across four experiments, we show that the virtual grafting of a bionic tool elicits a sense of embodiment similar to, or even stronger than, that of its natural counterpart. In conclusion, the natural use of bionic tools can rewire the evolution of human behavior.
ABSTRACT
About one-third of stroke survivors present unilateral spatial neglect (USN), which negatively impacts the rehabilitation outcome. We report the study protocol and usability results of an immersive virtual reality (iVR) protocol with eye-tracking (ET) biofeedback. Healthy controls and stroke patients with and without USN underwent a single session of the three iVR tasks. The System Usability Scale (SUS), adverse events (AEs), and ET data were collected and analyzed with parametric statistics. Twelve healthy controls (six young adults and six older adults) and seven patients with a diagnosis of single ischemic stroke (four without USN and three with a confirmed diagnosis of USN) completed the usability investigation. SUS results showed good acceptability of the system for healthy controls and stroke patients without USN. ET results showed lower performance for patients with USN compared with healthy controls and stroke patients without USN, in particular in the exploration of the left visual field. The results showed that the proposed iVR-ET biofeedback protocol is a safe and well-tolerated technique in patients with USN. The real-time feedback can induce a performance response, supporting its further investigation as a treatment approach.
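The scoring pipeline is not detailed in the abstract; as a reference point, the following is a minimal Python sketch of the standard System Usability Scale scoring rule (recoding odd- and even-numbered items and scaling to a 0-100 range). The example response pattern is hypothetical.

```python
def sus_score(responses):
    """Standard SUS scoring: ten 1-5 Likert responses mapped to a 0-100 score.

    Odd-numbered items are positively worded (contribution = response - 1);
    even-numbered items are negatively worded (contribution = 5 - response).
    The summed contributions are scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return 2.5 * sum(contributions)

# Hypothetical example of a fairly positive rating pattern
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```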
ABSTRACT
Real-world experience is typically multimodal. Evidence indicates that the facilitation in the detection of multisensory stimuli is modulated by perceptual load, i.e., the amount of information involved in processing the stimuli. Here, we used a realistic virtual reality environment while concomitantly acquiring electroencephalography (EEG) and galvanic skin response (GSR) data to investigate how multisensory signals impact target detection in two conditions, high and low perceptual load. Additional stimuli in other modalities (auditory and vibrotactile) were presented alone or in combination with the visual target. Results showed that multisensory stimuli significantly improved performance compared with visual stimulation alone only in the high load condition. Multisensory stimulation also decreased the EEG-based workload. In contrast, the perceived workload, according to the NASA Task Load Index questionnaire, was reduced only by the trimodal condition (i.e., visual, auditory, tactile). This trimodal stimulation was more effective in enhancing the sense of presence, that is, the feeling of being in the virtual environment, compared with bimodal or unimodal stimulation. We also show that the GSR components were higher in the high load condition than in the low load condition. Finally, the multimodal stimulations including audition (visual-audio-tactile, VAT, and visual-audio, VA) induced a significant decrease in latency and a significant increase in amplitude of the P300 potentials with respect to the unimodal (visual) and bimodal visual-tactile stimulation, suggesting faster and more effective processing and detection of stimuli when auditory stimulation is included. Overall, these findings provide insights into the relationship between multisensory integration and human behavior and cognition.
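The abstract does not specify how the P300 latency and amplitude were extracted; the sketch below illustrates one common approach, picking the largest positive deflection of a trial-averaged parietal ERP within a post-stimulus search window. The channel, window, and peak-picking rule are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def p300_peak(erp, times, tmin=0.25, tmax=0.50):
    """Return (latency_s, amplitude) of the P300, taken as the maximum positive
    deflection of a trial-averaged ERP within a post-stimulus search window.

    erp   : 1-D array, trial-averaged voltage at a parietal channel (e.g., Pz)
    times : 1-D array of time points in seconds, same length as erp
    """
    window = (times >= tmin) & (times <= tmax)
    idx = np.argmax(erp[window])  # index of the largest positive value in the window
    return times[window][idx], erp[window][idx]

# Hypothetical usage: compare conditions (e.g., VAT vs. visual-only) per participant
# lat_vat, amp_vat = p300_peak(erp_vat, times)
# lat_v, amp_v = p300_peak(erp_v, times)
```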
ABSTRACT
We combined virtual reality and a multisensory bodily illusion with the aim of characterizing and reducing the perceptual (body overestimation) and cognitive-emotional (body dissatisfaction) components of body image distortion (BID) in anorexia nervosa (AN). For each participant (20 patients with AN, 20 healthy controls) we built personalized avatars that reproduced their own body size and shape, as well as verisimilar increases and losses of their original weight. Body overestimation and dissatisfaction were measured by asking participants to choose the avatar that best resembled their real and ideal body. Results show higher body dissatisfaction in AN, driven by the desire for a thinner body, and no body-size overestimation. Interpersonal multisensory stimulation (IMS) was then applied to the avatar reproducing the participant's perceived body and to the two avatars reproducing a 15% increase and loss of it, all presented from a first-person perspective (1PP). Embodiment was stronger after synchronous IMS in both groups but did not reduce BID in participants with AN. Interestingly, patients with AN reported more negative emotions after embodying the fattest avatar, and these emotions scaled with symptom severity. Overall, our findings suggest that the cognitive-emotional component of BID, more than the perceptual one, is severely altered in AN, and that the perspective (first-person, 1PP, vs. third-person, 3PP) from which a body is evaluated may play a crucial role. Future research and clinical trials might take advantage of virtual reality to reduce the emotional distress related to body dissatisfaction.
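The exact definitions of the body-image indices are not given in the abstract; assuming weight-scaled avatars, the sketch below shows one simple way such overestimation and dissatisfaction indices could be derived from the avatars chosen as the 'real' and 'ideal' body. Function names and numbers are hypothetical.

```python
def bid_indices(actual_weight, perceived_weight, ideal_weight):
    """Illustrative body-image indices from the avatars chosen as the
    perceived ('real') and ideal body, relative to the actual body.

    overestimation > 0  : the perceived body is larger than the actual one
    dissatisfaction > 0 : the ideal body is thinner than the perceived one
    """
    overestimation = (perceived_weight - actual_weight) / actual_weight
    dissatisfaction = (perceived_weight - ideal_weight) / perceived_weight
    return overestimation, dissatisfaction

# Hypothetical example: no overestimation (perceived == actual),
# but the ideal body is 15% thinner than the perceived one
print(bid_indices(55.0, 55.0, 46.75))  # -> (0.0, 0.15)
```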
ABSTRACT
In operational environments, new solutions are often evaluated through subjective assessments and expert judgment alongside objective measurements. However, it has been demonstrated that subjective measures suffer from poor resolution due to high intra- and inter-operator variability. Also, performance measures, if available, provide only partial information, since an operator could achieve the same performance while experiencing a different workload. In this study, we aimed to demonstrate: (i) the higher resolution of neurophysiological measures in comparison to subjective ones; and (ii) how the simultaneous employment of neurophysiological and behavioral measures could allow a holistic assessment of operational tools. In this regard, we tested the effectiveness of an electroencephalography (EEG)-based neurophysiological index (the WEEG index) in comparing two different solutions (i.e., Normal and Augmented) in terms of experienced workload. To this end, 16 professional air traffic controllers (ATCOs) were asked to perform two operational scenarios. Galvanic skin response (GSR) was also recorded to evaluate the level of arousal (i.e., operator involvement) during the execution of the two scenarios. The NASA-TLX questionnaire was used to evaluate perceived workload, and an expert was asked to assess the performance achieved by the ATCOs. Finally, reaction times to specific operational events relevant for the assessment of the two solutions were also collected. Results highlighted that the Augmented solution induced a local increase in participants' performance (reaction times). At the same time, this solution induced an increase in the workload experienced by the participants (WEEG). However, this increase is still acceptable, since it did not negatively impact performance and is to be understood as a consequence of the higher engagement of the ATCOs. This behavioral effect is fully in line with the physiological results on arousal (GSR), which increased during the scenario with augmentation. Subjective measures (NASA-TLX) did not highlight any significant variation in perceived workload. These results suggest that neurophysiological measures provide additional information beyond behavioral and subjective ones, even on a timescale of a few seconds, and that their employment during pre-operational activities (e.g., the design process) could allow a more holistic and accurate evaluation of new solutions.
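The computation of the WEEG index is not described in the abstract; as a rough illustration, the sketch below uses a frontal-theta over parietal-alpha power ratio, a proxy commonly associated with mental workload. This is an assumption for illustration only and not necessarily the index used in the study.

```python
import numpy as np
from scipy.signal import welch

def workload_proxy(frontal, parietal, fs=256):
    """Illustrative EEG workload proxy: frontal theta (4-8 Hz) power divided by
    parietal alpha (8-12 Hz) power; higher values suggest higher mental workload.

    frontal, parietal : 2-D arrays (channels x samples) from frontal / parietal sites
    fs                : sampling rate in Hz
    """
    def band_power(data, lo, hi):
        freqs, psd = welch(data, fs=fs, nperseg=2 * fs, axis=-1)
        band = (freqs >= lo) & (freqs <= hi)
        return psd[..., band].mean()

    theta = band_power(frontal, 4, 8)
    alpha = band_power(parietal, 8, 12)
    return theta / alpha
```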
ABSTRACT
This study aims at investigating the possibility of employing neurophysiological measures to assess human-machine interaction effectiveness. Such a measure can be used to compare new technologies or solutions, with the final purpose of enhancing the operator's experience and increasing safety. In the present work, two different interaction modalities (Normal and Augmented) related to the Air Traffic Management field were compared by involving 10 professional air traffic controllers in a simulated control tower environment. The experimental task consisted of locating aircraft at different airspace positions by using the sense of hearing. In one modality (i.e., "Normal"), all the sound sources (aircraft) had the same amplification factor. In the "Augmented" modality, the amplification factor of the sound sources located along the participant's head sagittal axis was increased, while the intensity of the sound sources located off this axis was decreased. In other words, when the user oriented their head toward the aircraft's position, the corresponding sound was amplified. Performance data, subjective questionnaires (i.e., NASA-TLX) and neurophysiological measures (i.e., EEG-based) related to the experienced workload were collected. Results showed significantly higher performance achieved by the users during the "Augmented" modality with respect to the "Normal" one, accompanied by a significant decrease in experienced workload, as evaluated by the EEG-based index. In addition, performance and the EEG-based workload index showed a significant negative correlation. In contrast, the subjective workload analysis did not show any significant trend. This result demonstrates the higher effectiveness of neurophysiological measures with respect to subjective ones for Human-Computer Interaction assessment.
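The abstract does not report the exact gain law of the "Augmented" modality; the sketch below illustrates one plausible rule, amplifying sources within a cone around the head's forward (sagittal) axis and attenuating the others. The boost, attenuation, and cone-width values are hypothetical.

```python
import numpy as np

def augmented_gain(head_forward, source_dir, boost=2.0, cut=0.5, half_angle_deg=15.0):
    """Illustrative gain rule for the 'Augmented' modality: sound sources lying within
    a cone around the head's sagittal (forward) axis are amplified, the others attenuated.

    head_forward, source_dir : 3-D unit vectors (head forward axis, head-to-source direction)
    boost / cut              : on-axis amplification and off-axis attenuation (hypothetical)
    half_angle_deg           : angular half-width of the on-axis cone (hypothetical)
    """
    cos_angle = np.clip(np.dot(head_forward, source_dir), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    return boost if angle <= half_angle_deg else cut

# Hypothetical usage: a source 10 degrees off the forward axis is amplified
fwd = np.array([0.0, 0.0, 1.0])
src = np.array([np.sin(np.radians(10)), 0.0, np.cos(np.radians(10))])
print(augmented_gain(fwd, src))  # -> 2.0
```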