Results 1 - 19 of 19
1.
Sensors (Basel) ; 20(18)2020 Sep 10.
Article in English | MEDLINE | ID: mdl-32927722

ABSTRACT

Emotions play a critical role in our daily lives, so the understanding and recognition of emotional responses is crucial for human research. Affective computing research has mostly used non-immersive two-dimensional (2D) images or videos to elicit emotional states. However, immersive virtual reality, which allows researchers to simulate environments in controlled laboratory conditions with high levels of sense of presence and interactivity, is becoming more popular in emotion research. Moreover, its synergy with implicit measurements and machine-learning techniques has the potential for a cross-cutting impact on many research areas, opening new opportunities for the scientific community. This paper presents a systematic review of the emotion recognition research undertaken with physiological and behavioural measures using head-mounted displays as elicitation devices. The results highlight the evolution of the field, give a clear perspective using aggregated analysis, reveal the current open issues and provide guidelines for future research.


Subjects
Emotions, Machine Learning, Virtual Reality, Humans
2.
Sensors (Basel) ; 20(17)2020 Sep 01.
Article in English | MEDLINE | ID: mdl-32883026

ABSTRACT

Fixation identification is an essential task in the extraction of relevant information from gaze patterns, and various algorithms are used in the identification process. However, the thresholds used in these algorithms greatly affect their sensitivity. Moreover, the application of these algorithms to eye-tracking technologies integrated into head-mounted displays, where the subject's head position is unrestricted, is still an open issue. Therefore, the adaptation of eye-tracking algorithms and their thresholds to immersive virtual reality frameworks needs to be validated. This study presents the development of a dispersion-threshold identification algorithm applied to data obtained from an eye-tracking system integrated into a head-mounted display. Rule-based criteria are proposed to calibrate the thresholds of the algorithm through different features, such as the number of fixations and the percentage of points that belong to a fixation. The results show that distance-dispersion thresholds between 1° and 1.6° and time windows between 0.25 and 0.4 s are the acceptable parameter ranges, with 1° and 0.25 s being the optimum. The work presents a calibrated algorithm to be applied in future experiments with eye tracking integrated into head-mounted displays, along with guidelines for calibrating fixation identification algorithms.
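As an illustration of the dispersion-threshold identification described in this abstract, the following is a minimal sketch of the classic I-DT algorithm using the reported optimum parameters (1° dispersion, 0.25 s window). The sample format and the width-plus-height dispersion measure are assumptions for illustration, not the authors' calibrated implementation.

```python
def idt_fixations(points, dispersion_deg=1.0, window_s=0.25):
    """Dispersion-threshold (I-DT) fixation identification.

    points: time-ordered list of (t_seconds, x_deg, y_deg) gaze samples.
    Returns a list of (t_start, t_end, cx, cy) fixations.
    """
    fixations = []
    i, n = 0, len(points)
    while i < n:
        # Grow an initial window that covers the minimum duration.
        j = i
        while j < n and points[j][0] - points[i][0] < window_s:
            j += 1
        if j >= n:
            break
        if _dispersion(points[i:j + 1]) <= dispersion_deg:
            # Expand the window while dispersion stays under the threshold.
            while j + 1 < n and _dispersion(points[i:j + 2]) <= dispersion_deg:
                j += 1
            win = points[i:j + 1]
            cx = sum(p[1] for p in win) / len(win)
            cy = sum(p[2] for p in win) / len(win)
            fixations.append((win[0][0], win[-1][0], cx, cy))
            i = j + 1
        else:
            i += 1  # not a fixation onset; slide one sample forward
    return fixations

def _dispersion(win):
    """Window dispersion as horizontal plus vertical extent (degrees)."""
    xs = [p[1] for p in win]
    ys = [p[2] for p in win]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))
```

With 100 Hz samples, half a second on one target followed by half a second on another yields two fixations centred on the two targets.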


Subjects
Eye-Tracking Technology, Virtual Reality, Algorithms, Calibration
3.
Comput Biol Med ; 171: 108194, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38428095

ABSTRACT

Clinical assessment procedures encounter challenges in terms of objectivity because they rely on subjective data. Computational psychiatry proposes overcoming this limitation by introducing biosignal-based assessments able to detect clinical biomarkers, while virtual reality (VR) can offer ecological settings for measurement. Autism spectrum disorder (ASD) is a neurodevelopmental disorder where many biosignals have been tested to improve assessment procedures. However, in ASD research there is a lack of studies systematically comparing biosignals for the automatic classification of ASD when recorded simultaneously in ecological settings, and comparisons among previous studies are challenging due to methodological inconsistencies. In this study, we examined a VR screening tool consisting of four virtual scenes, and we compared machine learning models based on implicit (motor skills and eye movements) and explicit (behavioral responses) biosignals. Machine learning models were developed for each biosignal within the virtual scenes and then combined into a final model per biosignal. A linear support vector classifier with recursive feature elimination was used and tested using nested cross-validation. The final model based on motor skills exhibited the highest robustness in identifying ASD, achieving an AUC of 0.89 (SD = 0.08). The best behavioral model showed an AUC of 0.80, while further research is needed for the eye-movement models due to limitations with the eye-tracking glasses. These findings highlight the potential of motor skills in enhancing objectivity and reliability in the early assessment of ASD compared to other biosignals.
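The modelling pipeline described in this abstract (a linear support vector classifier with recursive feature elimination, tested via nested cross-validation) can be sketched as follows. To keep the example self-contained, a nearest-centroid rule stands in for the linear SVC and accuracy replaces AUC; the data layout, fold counts, and scoring are illustrative assumptions.

```python
import random
from statistics import mean

def kfold(n, k, seed=0):
    """Shuffled k-fold partition: returns k disjoint index lists."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def fit_centroids(X, y):
    """Per-class feature means (toy stand-in for the linear SVC)."""
    cents = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        cents[label] = [mean(col) for col in zip(*rows)]
    return cents

def predict(cents, x, feats):
    """Nearest centroid, restricted to the selected feature subset."""
    return min(cents, key=lambda lab: sum((x[f] - cents[lab][f]) ** 2 for f in feats))

def weakest_feature(cents, feats):
    """One RFE step: the feature whose class centroids differ least."""
    a, b = sorted(cents)  # assumes a binary problem (e.g. ASD vs. TD)
    return min(feats, key=lambda f: abs(cents[a][f] - cents[b][f]))

def inner_score(X, y, feats, k=3):
    """Inner-loop CV accuracy for a candidate feature subset."""
    accs = []
    for test in kfold(len(X), k, seed=1):
        train = [i for i in range(len(X)) if i not in test]
        cents = fit_centroids([X[i] for i in train], [y[i] for i in train])
        accs.append(mean(predict(cents, X[i], feats) == y[i] for i in test))
    return mean(accs)

def nested_cv(X, y, outer_k=3):
    """Outer loop estimates performance; the feature subset is chosen
    inside each training fold only, keeping the estimate unbiased."""
    outer = []
    for test in kfold(len(X), outer_k):
        train = [i for i in range(len(X)) if i not in test]
        Xtr, ytr = [X[i] for i in train], [y[i] for i in train]
        feats = list(range(len(X[0])))
        best_score, best_feats = inner_score(Xtr, ytr, feats), feats[:]
        while len(feats) > 1:  # recursive feature elimination
            feats.remove(weakest_feature(fit_centroids(Xtr, ytr), feats))
            s = inner_score(Xtr, ytr, feats)
            if s >= best_score:
                best_score, best_feats = s, feats[:]
        cents = fit_centroids(Xtr, ytr)
        outer.append(mean(predict(cents, X[i], best_feats) == y[i] for i in test))
    return mean(outer)
```

The key design point, mirrored from the study's setup, is that elimination and subset selection never see the outer test fold.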


Subjects
Autism Spectrum Disorder, Autistic Disorder, Virtual Reality, Humans, Autistic Disorder/diagnosis, Autism Spectrum Disorder/diagnosis, Reproducibility of Results, Machine Learning
4.
Medicina (B Aires) ; 84 Suppl 1: 57-64, 2024 Mar.
Article in Spanish | MEDLINE | ID: mdl-38350626

ABSTRACT

INTRODUCTION: Autism Spectrum Disorder (ASD) is a neurodevelopmental condition for which traditional assessment procedures encounter certain limitations. The current ASD research field is exploring and endorsing innovative methods for assessing the disorder early on, based on the automatic detection of biomarkers. However, many of these procedures lack ecological validity in their measurements. In this context, virtual reality (VR) shows promise for objectively recording biosignals while users experience ecological situations. METHODS: This study outlines a novel and playful VR procedure for the early assessment of ASD, relying on multimodal biosignal recording. During a VR experience featuring 12 virtual scenes, eye gaze, motor skills, electrodermal activity and behavioural performance were measured in 39 children with ASD and 42 control peers. Machine learning models were developed to identify digital biomarkers and classify autism. RESULTS: The biosignals showed varied performance in detecting ASD, while the model combining the biosignal-specific models was able to identify ASD with an accuracy of 83% (SD = 3%) and an AUC of 0.91 (SD = 0.04). DISCUSSION: This screening tool may support ASD diagnosis by reinforcing the outcomes of traditional assessment procedures.




Subjects
Autism Spectrum Disorder, Autistic Disorder, Neurodevelopmental Disorders, Virtual Reality, Child, Humans, Autism Spectrum Disorder/diagnosis, Biomarkers
5.
IEEE J Biomed Health Inform ; 27(11): 5576-5587, 2023 11.
Article in English | MEDLINE | ID: mdl-37566508

ABSTRACT

Attachment styles are known to have significant associations with mental and physical health. Specifically, insecure attachment puts individuals at higher risk of suffering from mental disorders and chronic diseases. The aim of this study is to develop an attachment recognition model that can distinguish between secure and insecure attachment styles from voice recordings, exploring the importance of acoustic features while also evaluating gender differences. A total of 199 participants recorded their responses to four open questions intended to trigger their attachment system, using a web-based interrogation system. The recordings were processed to obtain the standard acoustic feature set eGeMAPS, and recursive feature elimination was applied to select the relevant features. Different supervised machine learning models were trained to recognize attachment styles using both gender-dependent and gender-independent approaches. The gender-independent model achieved a test accuracy of 58.88%, whereas the gender-dependent models obtained 63.88% and 83.63% test accuracy for women and men, respectively, indicating a strong influence of gender on attachment style recognition and the need to model the genders separately in further studies. These results also demonstrate the potential of acoustic properties for the remote assessment of attachment style, enabling fast and objective identification of this health risk factor and thus supporting the implementation of large-scale mobile screening systems.
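The gender-dependent vs. gender-independent comparison reported in this abstract can be illustrated with a toy sketch: a one-feature threshold classifier stands in for the eGeMAPS-based models, and accuracy is computed on the training data for brevity. The data layout and feature are assumptions; the point is only that pooling can fail when the feature-label relationship differs by group.

```python
from statistics import mean

def train_threshold(X, y):
    """Toy 1-D classifier: midpoint between the two class means.

    A stand-in for the study's supervised models; the real work used the
    full eGeMAPS feature set with recursive feature elimination.
    """
    m0 = mean(x for x, lab in zip(X, y) if lab == 0)
    m1 = mean(x for x, lab in zip(X, y) if lab == 1)
    thr, flip = (m0 + m1) / 2, m1 < m0
    return lambda x: int((x > thr) != flip)

def accuracy(model, X, y):
    return sum(model(x) == lab for x, lab in zip(X, y)) / len(y)

def gender_dependent_eval(data):
    """data: list of (feature, label, gender) tuples.

    Trains one pooled (gender-independent) model and one model per
    gender; returns (pooled_accuracy, {gender: accuracy}).
    """
    X, y = [d[0] for d in data], [d[1] for d in data]
    pooled = accuracy(train_threshold(X, y), X, y)
    per_gender = {}
    for g in {d[2] for d in data}:
        sub = [d for d in data if d[2] == g]
        Xg, yg = [d[0] for d in sub], [d[1] for d in sub]
        per_gender[g] = accuracy(train_threshold(Xg, yg), Xg, yg)
    return pooled, per_gender
```

When the acoustic cue points in opposite directions for the two genders, the per-gender models separate the classes while the pooled model stays near chance, echoing the accuracy gap reported above.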


Subjects
Mental Disorders, Male, Humans, Female, Chronic Disease, Machine Learning
6.
Front Psychol ; 14: 1140731, 2023.
Article in English | MEDLINE | ID: mdl-37089733

ABSTRACT

Many symptoms of autism spectrum disorder (ASD) are evident in early infancy, but ASD is usually diagnosed much later, by procedures lacking objective measurements. Earlier identification of ASD requires improving the objectivity of the procedure and the use of ecological settings. In this context, atypical motor skills are gaining consensus as a promising ASD biomarker, regardless of the level of symptom severity. This study aimed to assess differences in whole-body motor skills between 20 children with ASD and 20 children with typical development during the execution of three tasks resembling regular activities presented in virtual reality. The virtual tasks asked participants to perform precise, goal-directed actions with different limbs, varying in their degree of freedom of movement. Parametric and non-parametric statistical methods were applied to analyze differences in the children's motor skills. The findings endorsed the hypothesis that, when particular goal-directed movements are required, the type of action can modulate the presence of motor abnormalities in ASD. In particular, the ASD motor abnormalities emerged in the task requiring goal-directed actions of the upper limbs with a low degree of freedom. The motor abnormalities covered (1) the body part mainly involved in the action, and (2) further body parts not directly involved in the movement. The findings were discussed against the background of atypical prospective control of movements and visuomotor discoordination in ASD. These findings contribute to advancing the understanding of motor skills in ASD while deepening ecological and objective assessment procedures based on VR.

7.
Front Psychol ; 13: 752073, 2022.
Article in English | MEDLINE | ID: mdl-35360568

ABSTRACT

Virtual reality (VR) is a useful tool for studying consumer behavior while consumers are immersed in a realistic scenario. Among several other factors, personality traits have been shown to have a substantial influence on purchasing behavior. The primary objective of this study was to classify consumers based on the Big Five personality domains using their behavior while performing different tasks in a virtual shop. Personality recognition was ascertained using behavioral measures obtained from VR hardware, including eye tracking, navigation, posture and interaction. Responses from 60 participants were collected while they performed free and directed search tasks in a virtual hypermarket. A set of behavioral features was processed, and the personality domains were recognized using a supervised machine learning classifier, a support vector machine. The results suggest that the open-mindedness personality type can be classified using eye-gaze patterns, while extraversion is related to posture and interactions. However, a combination of signals must be used to detect conscientiousness and negative emotionality. The combination of all measures and tasks provides better classification accuracy for all personality domains. The study indicates that a consumer's personality can be recognized using the behavioral sensors included in commercial VR devices during a purchase in a virtual retail store.

8.
Front Psychol ; 13: 864266, 2022.
Article in English | MEDLINE | ID: mdl-35712148

ABSTRACT

The aim of this study was to evaluate the viability of a new selection procedure based on machine learning (ML) and virtual reality (VR). Specifically, decision-making behaviours and eye-gaze patterns were used to classify individuals according to their leadership style while they were immersed in virtual environments representing social workplace situations. The virtual environments were designed using an evidence-centred design approach. Interaction and gaze patterns were recorded for 83 subjects, who were classified as having either a high or low leadership style, as assessed using the Multifactor Leadership Questionnaire. An ML model combining behavioural outputs and eye-gaze patterns was developed to predict subjects' leadership styles (high vs. low). The results indicated that the different styles could be differentiated by the eye-gaze patterns and behaviours exhibited during immersive VR. Eye-tracking measures contributed more significantly to this differentiation than behavioural metrics. Although the results should be taken with caution, as the small sample does not allow generalization of the data, this study illustrates the potential of a future research roadmap that combines VR, implicit measures, and ML for personnel selection.

9.
Autism Res ; 15(1): 131-145, 2022 01.
Article in English | MEDLINE | ID: mdl-34811930

ABSTRACT

The core symptoms of autism spectrum disorder (ASD) mainly relate to social communication and interactions. ASD assessment involves expert observations in neutral settings, which introduces limitations and biases related to a lack of objectivity and does not capture performance in real-world settings. To overcome these limitations, advances in technologies (e.g., virtual reality) and sensors (e.g., eye-tracking tools) have been used to create realistic simulated environments and track eye movements, enriching assessments with more objective data than can be obtained via traditional measures. This study aimed to distinguish between autistic and typically developing children using visual attention behaviors, through an eye-tracking paradigm in a virtual environment, as a measure of attunement to and extraction of socially relevant information. Fifty-five children participated. Autistic children presented a higher number of frames, both overall and per scenario, and showed a higher visual preference for adults over children, as well as a specific preference for adults' faces, while looking more at children's bodies. A set of multivariate supervised machine learning models was developed using recursive feature selection to recognize ASD based on the extracted eye-gaze features. The models achieved up to 86% accuracy (sensitivity = 91%) in recognizing autistic children. Our results should be taken as preliminary due to the relatively small sample size and the lack of an external replication dataset. However, to our knowledge, this constitutes a first proof of concept for the combined use of virtual reality, eye-tracking tools, and machine learning for ASD recognition. LAY SUMMARY: Core symptoms in children with ASD involve social communication and interaction. ASD assessment includes expert observations in neutral settings, which show limitations and biases related to a lack of objectivity and do not capture performance in real settings.
To overcome these limitations, this work aimed to distinguish between autistic and typically developing children based on their visual attention behaviors, through an eye-tracking paradigm in a virtual environment, as a measure of attunement to, and extraction of, socially relevant information.


Subjects
Autism Spectrum Disorder, Virtual Reality, Adult, Autism Spectrum Disorder/diagnosis, Biomarkers, Child, Ocular Fixation, Humans, Machine Learning
10.
Front Psychol ; 13: 993162, 2022.
Article in English | MEDLINE | ID: mdl-36420385

ABSTRACT

This study aimed to evaluate the viability of a new procedure based on machine learning (ML), virtual reality (VR), and implicit measures to discriminate empathy. Specifically, eye-tracking and decision-making patterns were used to classify individuals according to their level in each of the empathy dimensions, while they were immersed in virtual environments that represented social workplace situations. The virtual environments were designed using an evidence-centered design approach. Interaction and gaze patterns were recorded for 82 participants, who were classified as having high or low empathy on each of the following empathy dimensions: perspective-taking, emotional understanding, empathetic stress, and empathetic joy. The dimensions were assessed using the Cognitive and Affective Empathy Test. An ML-based model that combined behavioral outputs and eye-gaze patterns was developed to predict the empathy dimension level of the participants (high or low). The analysis indicated that the different dimensions could be differentiated by eye-gaze patterns and behaviors during immersive VR. The eye-tracking measures contributed more significantly to this differentiation than did the behavioral metrics. In summary, this study illustrates the potential of a novel VR organizational environment coupled with ML to discriminate the empathy dimensions. However, the results should be interpreted with caution, as the small sample does not allow general conclusions to be drawn. Further studies with a larger sample are required to support the results obtained in this study.

11.
PLoS One ; 16(7): e0254098, 2021.
Article in English | MEDLINE | ID: mdl-34197553

ABSTRACT

Many affective computing studies have developed automatic emotion recognition models, mostly using emotional images, audio and videos. In recent years, virtual reality (VR) has also been used as a method to elicit emotions in laboratory environments. However, there is still a need to analyse the validity of VR in order to extrapolate the results it produces and to assess the similarities and differences in physiological responses provoked by real and virtual environments. We investigated the cardiovascular oscillations of 60 participants during a free exploration of a real museum and its virtualisation viewed through a head-mounted display. The differences between the heart rate variability features in the high and low arousal stimulus conditions were analysed through statistical hypothesis testing, and automatic arousal recognition models were developed across the real and the virtual conditions using a support vector machine algorithm with recursive feature selection. The subjects' self-assessments suggested that both museums elicited low and high arousal levels. In addition, the real museum showed arousal-related differences in cardiovascular responses, specifically in vagal activity, and arousal recognition reached 72.92% accuracy. However, we did not find the same arousal-based autonomic nervous system change pattern during the virtual museum exploration. The results showed that, while the direct virtualisation of a real environment might be self-reported as evoking psychological arousal, it does not necessarily evoke the same cardiovascular changes as a real arousing elicitation. These results contribute to the understanding of the use of VR in emotion recognition research; further work is needed to study arousal and emotion elicitation in immersive VR.
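The abstract does not enumerate the heart rate variability features used, so as an illustration the sketch below computes standard time-domain HRV descriptors from a sequence of RR intervals. RMSSD is commonly taken as a vagally mediated index, which connects to the vagal activity differences reported; the function names and feature choice are assumptions.

```python
import math

def hrv_features(rr_ms):
    """Time-domain HRV descriptors from RR intervals in milliseconds.

    Returns mean heart rate (bpm), SDNN (overall variability, ms) and
    RMSSD (short-term, vagally mediated variability, ms).
    Assumes at least two intervals.
    """
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    hr = 60000.0 / mean_rr  # beats per minute
    sdnn = math.sqrt(sum((r - mean_rr) ** 2 for r in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"hr_bpm": hr, "sdnn_ms": sdnn, "rmssd_ms": rmssd}
```

A perfectly regular 800 ms rhythm gives 75 bpm with zero variability; alternating 780/820 ms intervals keeps the same mean rate but yields an RMSSD of 40 ms.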


Subjects
Emotions/physiology, Fear/physiology, Heart Rate/physiology, Virtual Reality, Adult, Algorithms, Arousal/physiology, Fear/psychology, Female, Humans, Male, Support Vector Machine, Young Adult
12.
Front Psychol ; 12: 612717, 2021.
Article in English | MEDLINE | ID: mdl-33679528

ABSTRACT

This study compares cognitive and emotional responses to 360-degree vs. static (2D) video ads in terms of visual attention, brand recognition, engagement of the prefrontal cortex, and emotions. Hypotheses are proposed based on the interactivity literature, cognitive overload, the advertising response model, and the motivation, opportunity, and ability theoretical frameworks, and tested using neurophysiological tools: electroencephalography, eye tracking, electrodermal activity, and facial coding. The results revealed that gaze view depends on ad content, with the visual attention paid being lower in 360-degree ads for fast-moving consumer goods (FMCG) than in 2D ads. Brand logo recognition is lower in 360-degree ads than in 2D video ads. Overall, 360-degree ads for durable products increase positive emotions, which carries the risk of non-exposure to some of the ad content. By testing four ads for durable goods and FMCG, this research explains the mechanism through which 360-degree video ads outperform standard versions.

13.
Front Psychol ; 12: 562381, 2021.
Article in English | MEDLINE | ID: mdl-33762988

ABSTRACT

Risk taking (RT) is a component of the decision-making process in situations that involve uncertainty and in which the probability of each outcome - rewards and/or negative consequences - is already known. The influence of cognitive and emotional processes on decision making may affect how risky situations are addressed. First, inaccurate assessments of situations may constitute a perceptual bias in decision making, which might influence RT. Second, there seems to be consensus that a proneness bias exists, known as risk proneness, which can be defined as the propensity to be attracted to potentially risky activities. In the present study, we take the approach that risk perception and risk proneness affect RT behaviours. The study hypothesises that locus of control, emotion regulation, and executive control act as perceptual biases in RT, and that personality, sensation seeking, and impulsivity traits act as proneness biases in RT. The results suggest that locus of control, emotion regulation and executive control influence certain domains of RT, while personality influences all domains except the recreational one, and sensation seeking and impulsivity are involved in all domains of RT. The results of the study constitute a foundation upon which to build in this research area and can contribute to an increased understanding of human behaviour in risky situations.

14.
Front Psychol ; 11: 570470, 2020.
Article in English | MEDLINE | ID: mdl-33071901

ABSTRACT

The use of visual attention for evaluating consumer behavior has become a relevant field in recent years, allowing researchers to understand decision-making processes beyond classical self-reports. In our research, we focused on using eye tracking as a method to understand consumer preferences in children. Twenty-eight subjects aged between 7 and 12 years participated in the experiment. Participants were involved in two consecutive phases. The initial phase consisted of the visualization of a set of stimuli for decision-making in an eight-position alternative forced-choice layout: the subjects were asked to freely examine the set of stimuli and then choose the best one in terms of preference. The sample was randomly divided into two groups balanced by gender; one group visualized a set of icons and the other a set of toys. The final phase was an independent assessment of each stimulus viewed in the initial phase in terms of liking/disliking using a 7-point Likert scale. Sixty-four stimuli were designed for each of the groups. Visual attention was measured using a non-obstructive eye-tracking device. The results revealed two novel insights. Firstly, the fixation time during the last four visits to each stimulus before the instant of decision allows us to recognize the icon or toy chosen from the eight alternatives with 71.2% and 67.2% accuracy, respectively. This result supports the use of visual attention measurements as an implicit tool to analyze decision-making and preferences in children. Secondly, eye movements and liking/disliking choices are influenced by the stimuli's design dimensions. The icon observations revealed that the gender groups differ in fixation and visit times depending on the stimulus design dimension. The toy observations revealed that the materials attracted the largest number of fixations, and that visit times also differed by gender.
This research presents relevant empirical data for understanding the decision-making phenomenon by analyzing eye movement behavior. The presented method can be applied to estimate the likelihood of each choice among several alternatives. Finally, children's opinions are particularly difficult to elicit, and eye tracking can serve as an implicit measure to tackle this.
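The first insight above (predicting the chosen alternative from dwell time during the last visits to each stimulus) can be sketched as follows. The event format, the definition of a "visit" as a run of consecutive gaze events on one area of interest, and the argmax decision rule are assumptions for illustration.

```python
def last_visits_dwell(samples, n_visits=4):
    """samples: time-ordered (duration_s, aoi) gaze events.

    Returns, per area of interest (AOI), the total dwell time over that
    AOI's last n_visits visits, where a visit is a run of consecutive
    events on the same AOI.
    """
    # Collapse consecutive events on the same AOI into visits.
    visits = []
    for dur, aoi in samples:
        if visits and visits[-1][1] == aoi:
            visits[-1][0] += dur
        else:
            visits.append([dur, aoi])
    dwell = {}
    for aoi in {a for _, a in visits}:
        own = [d for d, a in visits if a == aoi]
        dwell[aoi] = sum(own[-n_visits:])
    return dwell

def predict_choice(samples):
    """Predict the chosen alternative as the AOI with the most
    late-visit dwell time."""
    dwell = last_visits_dwell(samples)
    return max(dwell, key=dwell.get)
```

For instance, if the gaze keeps returning to one of the eight alternatives with ever-longer visits just before the decision, that alternative accumulates the most late dwell time and is predicted as the choice.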

15.
J Clin Med ; 9(5)2020 Apr 26.
Article in English | MEDLINE | ID: mdl-32357517

ABSTRACT

Autism spectrum disorder (ASD) is mostly diagnosed according to behavioral symptoms in the sensory, social, and motor domains. During diagnosis, improper motor functioning is assessed through the qualitative evaluation of stereotyped and repetitive behaviors, while quantitative methods that classify the body movement frequencies of children with ASD are less addressed. Recent advances in neuroscience, technology, and data analysis techniques are improving quantitative and ecologically valid methods to measure specific functioning in children with ASD. On one side, cutting-edge technologies, such as cameras, sensors, and virtual reality, can accurately detect and classify behavioral biomarkers, such as body movements, in real-life simulations. On the other, machine-learning techniques are showing potential for identifying and classifying patient subgroups. Starting from these premises, three real-simulated imitation tasks were implemented in a virtual reality system with the aim of investigating whether machine-learning methods applied to movement features and frequency could discriminate children with ASD from children with typical development (TD). In this experiment, 24 children with ASD and 25 children with typical neurodevelopment participated in a multimodal virtual reality experience, and changes in their body movements were tracked by a depth-sensor camera during the presentation of visual, auditory, and olfactory stimuli. The main results showed that children with ASD presented larger body movements than TD children, and that the head, trunk, and feet yielded the best classification, with an accuracy of 82.98%. Regarding stimuli, the visual condition showed the highest accuracy (89.36%), followed by the visual-auditory stimuli (74.47%) and the visual-auditory-olfactory stimuli (70.21%). Finally, the head showed the most consistent performance across stimuli, from 80.85% in the visual to 89.36% in the visual-auditory-olfactory condition.
The findings showed the feasibility of applying machine learning and virtual reality to identify body movement biomarkers that could contribute to improving ASD diagnosis.

16.
Front Hum Neurosci ; 14: 90, 2020.
Article in English | MEDLINE | ID: mdl-32317949

ABSTRACT

OBJECTIVE: Sensory processing is the ability to capture, elaborate, and integrate information through the five senses; it is impaired in over 90% of children with autism spectrum disorder (ASD). The ASD population shows hyper- or hypo-sensitivity to sensory stimuli, which can alter information processing and affect cognitive and social responses to daily life situations. Structured and semi-structured interviews are generally used for ASD assessment, and the evaluation relies on the examiner's subjectivity and expertise, which can lead to misleading outcomes. Recently, there has been a growing need for more objective, reliable, and valid diagnostic measures, such as biomarkers, to distinguish typical from atypical functioning and to reliably track the progression of the illness, helping to diagnose ASD. Implicit measures and ecologically valid settings have shown high accuracy in predicting outcomes and correctly classifying populations into categories. METHODS: Two experiments investigated whether sensory processing can discriminate between ASD and typical development (TD) populations using electrodermal activity (EDA) in two multimodal virtual environments (VEs): a forest VE and a city VE. In the first experiment, 24 children with an ASD diagnosis and 30 TD children participated in both virtual experiences, and changes in EDA were recorded before and during the presentation of visual, auditory, and olfactory stimuli. In the second experiment, 40 children were added to test the model of experiment 1. RESULTS: The first exploratory results on the EDA comparison models showed that the integration of visual, auditory, and olfactory stimuli in the forest environment provided higher accuracy (90.3%) in discriminating sensory dysfunction than specific stimuli did. In the second experiment, 92 subjects experienced the forest VE, and results on 72 subjects showed that stimulus integration achieved an accuracy of 83.33%.
The final confirmatory test set (n = 20) achieved 85% accuracy, simulating a real application of the models. A further relevant result concerns the visual stimulus condition in the first experiment, which achieved 84.6% accuracy in recognizing ASD sensory dysfunction. CONCLUSION: According to our studies' results, implicit measures, such as EDA, and ecologically valid settings can represent valid quantitative methods, alongside traditional assessment measures, to classify the ASD population, enhancing knowledge for the development of relevant specific treatments.

17.
Medicina (B.Aires) ; 84(supl.1): 57-64, May 2024. graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1558485

ABSTRACT



Abstract Introduction: Autism Spectrum Disorder (ASD) is a neurodevelopmental condition whose traditional assessment procedures encounter certain limitations. The current ASD research field is exploring and endorsing innovative methods to assess the disorder early on, based on the automatic detection of biomarkers. However, many of these procedures lack ecological validity in their measurements. In this context, virtual reality (VR) shows promise for objectively recording biosignals while users experience ecological situations. Methods: This study outlines a novel and playful VR procedure for the early assessment of ASD, relying on multimodal biosignal recording. During a VR experience featuring 12 virtual scenes, eye gaze, motor skills, electrodermal activity and behavioural performance were measured in 39 children with ASD and 42 control peers. Machine learning models were developed to identify digital biomarkers and classify autism. Results: Individual biosignals showed varied performance in detecting ASD, while the model combining the specific-biosignal models demonstrated the ability to identify ASD with an accuracy of 83% (SD = 3%) and an AUC of 0.91 (SD = 0.04). Discussion: This screening tool may support ASD diagnosis by reinforcing the outcomes of traditional assessment procedures.
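One common way to combine specific-biosignal models into a single classifier, as the abstract describes, is soft voting over per-signal probability estimates. The sketch below is illustrative only: the feature blocks (gaze, motor, EDA) and their dimensions are hypothetical, and the paper does not state which combination rule was used.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 80
# Hypothetical feature blocks, one per biosignal (names are illustrative).
signals = {
    "gaze":  rng.normal(size=(n, 4)),
    "motor": rng.normal(size=(n, 6)),
    "eda":   rng.normal(size=(n, 3)),
}
y = (signals["gaze"][:, 0] + signals["eda"][:, 0] > 0).astype(int)

# Fit one probabilistic SVM per biosignal.
models = {name: SVC(probability=True, random_state=0).fit(X, y)
          for name, X in signals.items()}

# Combined model: average the per-signal class probabilities (soft voting).
proba = np.mean([models[name].predict_proba(signals[name])
                 for name in signals], axis=0)
y_pred = proba.argmax(axis=1)
train_acc = (y_pred == y).mean()
```

Averaging probabilities lets a signal that discriminates weakly on its own still contribute evidence, which is why combined models often outperform the best single-biosignal model.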

18.
PLoS One ; 14(10): e0223881, 2019.
Artigo em Inglês | MEDLINE | ID: mdl-31613927

RESUMO

Virtual reality is a powerful tool in human behaviour research. However, few studies have compared its capacity to evoke the same emotional responses as real scenarios. This study investigates psycho-physiological patterns evoked during the free exploration of an art museum and the museum virtualized through a 3D immersive virtual environment (IVE). An exploratory study involving 60 participants was performed, recording electroencephalographic and electrocardiographic signals using wearable devices. The real vs. virtual psychological comparison was performed using self-assessment emotional response tests, whereas the physiological comparison was performed through Support Vector Machine algorithms, endowed with an effective feature selection procedure for a set of state-of-the-art metrics quantifying cardiovascular and brain linear and nonlinear dynamics. We included an initial calibration phase, using standardized 2D and 360° emotional stimuli, to increase the accuracy of the model. The self-assessments of the physical and virtual museum support the use of IVEs in emotion research. The 2-class (high/low) system accuracy was 71.52% and 77.08% along the arousal and valence dimensions, respectively, in the physical museum, and 75.00% and 71.08% in the virtual museum. The previously presented 360° stimuli contributed to increasing the accuracy in the virtual museum. Also, the real vs. virtual classifier accuracy was 95.27%, using only EEG mean phase coherency features, which demonstrates the high involvement of brain synchronization in emotional virtual reality processes. These findings provide an important contribution at a methodological level and to scientific knowledge, which will effectively guide future emotion elicitation and recognition systems using virtual reality.
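The SVM-with-feature-selection pipeline described above can be sketched in a few lines of scikit-learn. This is not the authors' code: the synthetic features, the choice of `SelectKBest` with an F-test, and `k = 10` are all illustrative assumptions standing in for the paper's state-of-the-art cardiovascular and EEG metrics.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Synthetic stand-ins for physiological features and high/low arousal labels.
X = rng.normal(size=(60, 30))
y = (X[:, 0] - X[:, 1] > 0).astype(int)

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),  # keep the 10 most discriminative features
    SVC(kernel="rbf"),
)
scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Placing the selection step inside the pipeline ensures it is refitted within each cross-validation fold, so the reported accuracy is not inflated by selecting features on the test data.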


Assuntos
Electrocardiography/instrumentation, Electroencephalography/instrumentation, Emotions/physiology, Adult, Algorithms, Brain/physiology, Female, Heart/physiology, Humans, Male, Museums, Nonlinear Dynamics, Support Vector Machine, Virtual Reality, Wearable Electronic Devices, Young Adult
19.
Sci Rep ; 8(1): 13657, 2018 09 12.
Artigo em Inglês | MEDLINE | ID: mdl-30209261

RESUMO

Affective Computing has emerged as an important field of study that aims to develop systems that can automatically recognize emotions. Up to the present, elicitation has been carried out with non-immersive stimuli. This study, on the other hand, aims to develop an emotion recognition system for affective states evoked through Immersive Virtual Environments. Four alternative virtual rooms were designed to elicit four possible arousal-valence combinations, as described in each quadrant of the Circumplex Model of Affect. An experiment involving the recording of the electroencephalography (EEG) and electrocardiography (ECG) of sixty participants was carried out. A set of features was extracted from these signals using various state-of-the-art metrics that quantify brain and cardiovascular linear and nonlinear dynamics, which were input into a Support Vector Machine classifier to predict the subject's arousal and valence perception. The model's accuracy was 75.00% along the arousal dimension and 71.21% along the valence dimension. Our findings validate the use of Immersive Virtual Environments to elicit and automatically recognize different emotional states from neural and cardiac dynamics; this development could have novel applications in fields as diverse as Architecture, Health, Education and Videogames.
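The study predicts arousal and valence as two separate binary dimensions, and each pair of predictions maps onto one quadrant of the Circumplex Model. A small sketch of that mapping, with illustrative emotion examples (the quadrant numbering and example labels are a common convention, not taken from the paper):

```python
def circumplex_quadrant(arousal_high: bool, valence_high: bool) -> str:
    """Map binary arousal/valence predictions to a Circumplex quadrant label."""
    if arousal_high and valence_high:
        return "Q1: high arousal / positive valence"   # e.g. joy
    if arousal_high and not valence_high:
        return "Q2: high arousal / negative valence"   # e.g. fear
    if not arousal_high and not valence_high:
        return "Q3: low arousal / negative valence"    # e.g. sadness
    return "Q4: low arousal / positive valence"        # e.g. calm

print(circumplex_quadrant(True, False))
```

Treating the two dimensions as independent binary classifiers, as here, means the four-quadrant decision inherits the error of both models, which is why per-dimension accuracies are typically reported separately.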


Assuntos
Arousal/physiology, Brain/physiology, Emotions/physiology, Heart Rate/physiology, Wearable Electronic Devices, Adult, Algorithms, Electrocardiography/methods, Electroencephalography/methods, Female, Humans, Male, Support Vector Machine, Surveys and Questionnaires, Virtual Reality, Young Adult