ABSTRACT
Current Explainable Artificial Intelligence (XAI) models have shown a lack of reliability when evaluating feature relevance for deep neural biomarker classifiers. Reliable saliency maps for obtaining trustworthy and interpretable views of neural activity are still insufficiently mature for practical applications. These limitations impede the development of clinical applications of Deep Learning. To address these limitations, we propose the RemOve-And-Retrain (ROAR) algorithm, which supports the recovery of highly relevant features from any pre-trained deep neural network. In this study we evaluated the ROAR methodology and algorithm on the Face Emotion Recognition (FER) task, which is clinically applicable in the study of Autism Spectrum Disorder (ASD). We trained a Convolutional Neural Network (CNN) on electroencephalography (EEG) signals and assessed the relevance of FER-elicited EEG features from individuals diagnosed with and without ASD. Specifically, we compared ROAR reliability across well-known relevance maps such as Layer-Wise Relevance Propagation, PatternNet, Pattern-Attribution, and SmoothGrad-Squared. This study is the first to bridge previous neuroscience and ASD research findings to feature-relevance calculation for EEG-based emotion recognition with CNNs in typically developing (TD) and ASD individuals.
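The core of ROAR can be illustrated independently of the EEG setting: rank features with a relevance map, replace the top-ranked fraction with an uninformative value, retrain from scratch, and measure the accuracy drop; a faithful map causes a steeper drop than a random one. Below is a minimal sketch on synthetic tabular data; the logistic-regression stand-in for the CNN, the toy dataset, and all names are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: only the first 5 of 20 features carry class information.
X = rng.normal(size=(600, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_after_removal(relevance, fraction):
    """Degrade the top `fraction` most relevant features, retrain, re-evaluate."""
    k = int(fraction * X.shape[1])
    top = np.argsort(relevance)[::-1][:k]
    X_tr_d, X_te_d = X_tr.copy(), X_te.copy()
    # ROAR replaces removed features with an uninformative per-feature constant.
    X_tr_d[:, top] = X_tr[:, top].mean(axis=0)
    X_te_d[:, top] = X_tr[:, top].mean(axis=0)
    model = LogisticRegression().fit(X_tr_d, y_tr)
    return model.score(X_te_d, y_te)

# A faithful relevance map (coefficient magnitudes of a model fit on intact data)
# versus a random baseline map.
base = LogisticRegression().fit(X_tr, y_tr)
faithful = np.abs(base.coef_[0])
random_map = rng.permutation(faithful)

# Removing truly relevant features should hurt accuracy more than removing random ones.
acc_faithful = accuracy_after_removal(faithful, 0.25)
acc_random = accuracy_after_removal(random_map, 0.25)
print(acc_faithful, acc_random)
```

The gap between the two retrained accuracies is what ROAR reports as the reliability of a relevance map.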
Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Deep Learning , Humans , Autistic Disorder/diagnosis , Autism Spectrum Disorder/diagnosis , Artificial Intelligence , Reproducibility of Results , Algorithms , Emotions , Electroencephalography
ABSTRACT
BACKGROUND: While mental health applications are increasingly becoming available for large populations of users, there is a lack of controlled trials on the impacts of such applications. Artificial intelligence (AI)-empowered agents have been evaluated when assisting adults with cognitive impairments; however, few applications are available for aging adults who are still actively working. These adults often have high stress levels related to changes in their workplaces, and related symptoms eventually affect their quality of life. OBJECTIVE: We aimed to evaluate the contribution of TEO (Therapy Empowerment Opportunity), a mobile personal health care agent with conversational AI. TEO promotes mental health and well-being by engaging patients in conversations to recollect the details of events that increased their anxiety and by providing therapeutic exercises and suggestions. METHODS: The study was based on a protocolized intervention for stress and anxiety management. Participants with stress symptoms and mild-to-moderate anxiety received an 8-week cognitive behavioral therapy (CBT) intervention delivered remotely. A group of participants also interacted with the agent TEO. The participants were active workers aged over 55 years. The experimental groups were as follows: group 1, traditional therapy; group 2, traditional therapy and mobile health (mHealth) agent; group 3, mHealth agent; and group 4, no treatment (assigned to a waiting list). Symptoms related to stress (anxiety, physical disease, and depression) were assessed prior to treatment (T1), at the end of treatment (T2), and 3 months after treatment (T3), using standardized psychological questionnaires. Moreover, the Patient Health Questionnaire-8 and Generalized Anxiety Disorder-7 scales were administered before the intervention (T1), at mid-term (T2), at the end of the intervention (T3), and after 3 months (T4). At the end of the intervention, participants in groups 1, 2, and 3 filled in a satisfaction questionnaire.
RESULTS: Despite randomization, statistically significant differences between groups were present at T1. Group 4 showed lower levels of anxiety and depression compared with group 1, and lower levels of stress compared with group 2. Comparisons between groups at T2 and T3 did not show significant differences in outcomes. Analyses conducted within groups showed significant differences between times in group 2, with greater improvements in the levels of stress and scores related to overall well-being. A general worsening trend between T2 and T3 was detected in all groups, with a significant increase in stress levels in group 2. Group 2 reported higher levels of perceived usefulness and satisfaction. CONCLUSIONS: No statistically significant differences could be observed between participants who used the mHealth app alone or within the traditional CBT setting. However, the results indicated significant differences within the groups that received treatment and a stable tendency toward improvement, which was limited to individual perceptions of stress-related symptoms. TRIAL REGISTRATION: ClinicalTrials.gov NCT04809090; https://clinicaltrials.gov/ct2/show/NCT04809090.
ABSTRACT
BACKGROUND: Individuals with autism spectrum disorder (ASD) exhibit frequent behavioral deficits in facial emotion recognition (FER). It remains unknown whether these deficits arise because facial emotion information is not encoded in their neural signal or because it is encoded but fails to translate into FER behavior (deployment). This distinction has functional implications, including constraining when differences in social information processing occur in ASD, and guiding interventions (i.e., developing prosthetic FER vs. reinforcing existing skills). METHODS: We utilized a discriminative and contemporary machine learning approach, deep convolutional neural networks, to classify facial emotions viewed by individuals with and without ASD (N = 88) from concurrently recorded electroencephalography signals. RESULTS: The convolutional neural network classified facial emotions with high accuracy for both ASD and non-ASD groups, even though individuals with ASD performed more poorly on the concurrent FER task. In fact, convolutional neural network accuracy was greater in the ASD group and was not related to behavioral performance. This pattern of results replicated across three independent participant samples. Moreover, feature importance analyses suggested that a late temporal window of neural activity (1000-1500 ms) may be uniquely important in facial emotion classification for individuals with ASD. CONCLUSIONS: Our results reveal for the first time that facial emotion information is encoded in the neural signal of individuals with (and without) ASD. Thus, observed difficulties in behavioral FER associated with ASD likely arise from difficulties in decoding or deployment of facial emotion information within the neural signal. Interventions should focus on capitalizing on this intact encoding rather than promoting compensation or FER prostheses.
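The classification setup above rests on treating each EEG epoch as a channels-by-time array fed through temporal convolutions. A minimal NumPy forward pass sketches this data flow; the channel count, epoch length, filter sizes, and random (untrained) weights are all illustrative assumptions, not the study's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64 EEG channels, 1.5 s epochs at 500 Hz, 3 emotion classes.
N_CH, N_T, N_CLASSES = 64, 750, 3
epoch = rng.normal(size=(N_CH, N_T))

def conv1d_temporal(x, kernels):
    """Valid-mode temporal convolution, summing filter responses across channels."""
    n_k, k_len = kernels.shape
    out = np.empty((n_k, x.shape[1] - k_len + 1))
    for i, k in enumerate(kernels):
        out[i] = sum(np.convolve(ch, k[::-1], mode="valid") for ch in x)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

kernels = rng.normal(size=(8, 25)) * 0.01              # 8 temporal filters, 50 ms long
feat = np.maximum(conv1d_temporal(epoch, kernels), 0.0)  # ReLU; shape (8, 726)
pooled = feat.reshape(8, 33, 22).mean(axis=2)            # average pooling over time
flat = pooled.ravel()
W = rng.normal(size=(N_CLASSES, flat.size)) * 0.01       # untrained readout weights
probs = softmax(W @ flat)                                # class probabilities
print(probs)
```

In the actual study the weights would be learned by backpropagation; the sketch only shows how a channels-by-time epoch reduces to a class distribution.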
Subject(s)
Autism Spectrum Disorder , Deep Learning , Facial Recognition , Emotions , Facial Expression , Humans
ABSTRACT
BACKGROUND: Mobile apps for mental health are available on the market. Although they seem to be promising for improving the accessibility of mental health care, little is known about their acceptance, design methodology, evaluation, and integration into psychotherapy protocols. This makes it difficult for health care professionals to judge whether these apps may help them and their patients. OBJECTIVE: Our aim is to describe and evaluate a protocol for the participatory design of mobile apps for mental health. In this study, participants and psychotherapists are engaged in the early phases of the design and development of the app empowered by conversational artificial intelligence (AI). The app supports interventions for stress management training based on cognitive behavioral theory. METHODS: A total of 21 participants aged 33-61 years with mild to moderate levels of stress, anxiety, and depression (assessed by administering the Italian versions of the Symptom Checklist-90-Revised, Occupational Stress Indicator, and Perceived Stress Scale) were assigned randomly to 2 groups, A and B. Both groups received stress management training sessions along with cognitive behavioral treatment, but only participants assigned to group A received support through a mobile personal health care agent, designed for mental care and empowered by AI techniques. Psychopathological outcomes were assessed at baseline (T1), after 8 weeks of treatment (T2), and 3 months after treatment (T3). Focus groups with psychotherapists who administered the therapy were held after treatment to collect their impressions and suggestions. RESULTS: Although the intergroup statistical analysis showed that group B participants could rely on better coping strategies, group A participants reported significant improvements in obsessive-compulsive symptoms and in positive symptom distress. The psychotherapists' acceptance of the protocol was good.
In particular, they were in favor of integrating an AI-based mental health app into their practice because they could appreciate the increased engagement of patients in pursuing their therapy goals. CONCLUSIONS: The integration into practice of an AI-based mobile app for mental health was shown to be acceptable to both mental health professionals and users. Although it was not possible in this experiment to show that the integration of AI-based conversational technologies into traditional remote psychotherapy significantly decreased the participants' levels of stress and anxiety, the experimental results showed significant trends of reduction of symptoms in group A and their persistence over time. The mental health professionals involved in the experiment reported interest in, and acceptance of, the proposed technology as a promising tool to be included in a blended model of psychotherapy.
ABSTRACT
Machine learning methods, such as deep learning, show promising results in the medical domain. However, the lack of interpretability of these algorithms may hinder their applicability to medical decision support systems. This paper studies an interpretable deep learning technique, called SincNet. SincNet is a convolutional neural network that efficiently learns customized band-pass filters through trainable sinc functions. In this study, we use SincNet to analyze the neural activity of individuals with Autism Spectrum Disorder (ASD), who experience characteristic differences in neural oscillatory activity. In particular, we propose a novel SincNet-based neural network for detecting emotions in ASD patients using EEG signals. The learned filters can be easily inspected to detect which part of the EEG spectrum is used for predicting emotions. We found that our system automatically learns the high-α (9-13 Hz) and β (13-30 Hz) band suppression often present in individuals with ASD. This result is consistent with recent neuroscience studies on emotion recognition, which found an association between these band suppressions and the behavioral deficits observed in individuals with ASD. The improved interpretability of SincNet is achieved without sacrificing performance in emotion recognition.
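SincNet's key idea is that each convolutional kernel is a band-pass filter fully determined by two cutoff frequencies: the difference of two windowed sinc low-pass kernels, with the cutoffs as the only trainable parameters. A NumPy sketch with fixed (non-trained) cutoffs set to the high-α band the paper highlights; the sampling rate and tap count are assumptions:

```python
import numpy as np

FS = 250  # assumed EEG sampling rate (Hz)

def sinc_bandpass(f_low, f_high, n_taps=129, fs=FS):
    """Band-pass FIR kernel parameterized only by its two cutoffs, as in SincNet.

    Difference of two ideal low-pass sinc kernels, Hamming-windowed. In SincNet
    f_low and f_high are learned by backpropagation; here they are fixed.
    """
    t = (np.arange(n_taps) - (n_taps - 1) / 2) / fs
    def lowpass(fc):
        return 2 * fc * np.sinc(2 * fc * t)   # np.sinc(x) = sin(pi x)/(pi x)
    return (lowpass(f_high) - lowpass(f_low)) * np.hamming(n_taps)

# A filter for the high-alpha band (9-13 Hz) that the paper reports being learned.
h = sinc_bandpass(9.0, 13.0)

# Inspectability: the magnitude response shows which band the kernel passes.
H = np.abs(np.fft.rfft(h, n=4096))
freqs = np.fft.rfftfreq(4096, d=1 / FS)
gain_alpha = H[np.argmin(np.abs(freqs - 11.0))]   # inside the passband
gain_40hz = H[np.argmin(np.abs(freqs - 40.0))]    # well outside it
print(gain_alpha, gain_40hz)
```

This direct mapping from two parameters to a frequency band is what makes the learned filters easy to inspect.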
Subject(s)
Autism Spectrum Disorder , Deep Learning , Brain , Electroencephalography , Emotions , Humans
ABSTRACT
Error-related potentials are considered an important neuro-correlate for monitoring human intentionality in decision-making, human-human, or human-machine interaction scenarios. Multiple methods have been proposed to improve the recognition of human intentions. However, current brain-computer interfaces are limited in identifying human errors because they rely on manual parameter tuning (e.g., feature/channel selection), typically selecting fronto-central channels as discriminative features within subjects. In this paper, we propose encoding error-related potential activity as a generalized two-dimensional feature set, classified by a Convolutional Neural Network for EEG-based human error detection. We evaluate this pipeline on the BNCI2020 Monitoring Error-Related Potential dataset, obtaining a maximum error detection accuracy of 79.8% under within-session 10-fold cross-validation and outperforming the current state of the art.
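The within-session 10-fold evaluation named above can be sketched generically: epochs are flattened from their two-dimensional (channels x samples) form and scored fold by fold. The synthetic ERP-like data and the logistic-regression stand-in for the CNN below are assumptions for illustration, not the paper's model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic epochs of shape (channels, samples): 8 x 50, with a small
# class-dependent deflection mimicking an error-related potential.
n_epochs, n_ch, n_s = 200, 8, 50
y = rng.integers(0, 2, size=n_epochs)
X = rng.normal(size=(n_epochs, n_ch, n_s))
X[y == 1, :3, 20:30] += 0.8           # "error" epochs get a fronto-central bump

X_flat = X.reshape(n_epochs, -1)       # 2D feature set flattened for the classifier

# Within-session 10-fold cross-validation, as in the reported evaluation modality.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accs = []
for tr, te in StratifiedKFold(n_splits=10, shuffle=True, random_state=0).split(X_flat, y):
    clf.fit(X_flat[tr], y[tr])
    accs.append(clf.score(X_flat[te], y[te]))
mean_acc = float(np.mean(accs))
print(mean_acc)
```

Within-session folds share recording conditions, which is why the paper reports this modality separately from cross-session generalization.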
Subject(s)
Brain-Computer Interfaces , Neural Networks, Computer , Decision Making , Humans
ABSTRACT
Procedural flow disruptions secondary to interruptions play a key role in error occurrence during complex medical procedures, mainly because they increase mental workload among team members, negatively impacting team performance and patient safety. Since certain types of interruptions are unavoidable, and consequently the need for multitasking is inherent to complex procedural care, this field can benefit from an intelligent system capable of identifying at which moments flow interference is appropriate without generating disruptions. In the present study we describe a novel approach for the identification of tasks imposing low cognitive load and tasks that demand high cognitive effort during real-life cardiac surgeries. We used heart rate variability analysis as an objective measure of cognitive load, capturing data in a real-time and unobtrusive manner from multiple team members (surgeon, anesthesiologist, and perfusionist) simultaneously. Using audio-video recordings, behavioral coding, and a hierarchical surgical process model, we integrated multiple data sources to create an interactive surgical dashboard, enabling the identification of specific steps, substeps, and tasks that impose low cognitive load. An interruption management system can use these low-demand situations to guide the surgical team in terms of the appropriateness of flow interruptions. The described approach also enables us to detect cognitive load fluctuations over time, under specific conditions (e.g., emergencies) or in situations that are prone to errors. An in-depth understanding of the relationship between cognitive overload states, task demands, and error occurrence will drive the development of cognitive support systems that recognize and mitigate errors efficiently and proactively during highly complex procedures.
ABSTRACT
In the surgical setting, team members constantly deal with a high-demand operative environment that requires simultaneously processing a large amount of information. In certain situations, high demands imposed by surgical tasks and other sources may exceed team members' cognitive capacity, leading to cognitive overload which may place patient safety at risk. In the present study, we describe a novel approach to integrate an objective measure of team members' cognitive load with procedural, behavioral, and contextual data from real-life cardiac surgeries. We used heart rate variability analysis, capturing data simultaneously from multiple team members (surgeon, anesthesiologist, and perfusionist) in a real-time and unobtrusive manner. Using audio-video recordings, behavioral coding, and a hierarchical surgical process model, we integrated multiple data sources to create an interactive surgical dashboard, enabling the analysis of the cognitive load imposed by specific steps, substeps, and/or tasks. The described approach enables us to detect cognitive load fluctuations over time, under specific conditions (e.g., emergencies, teaching) and in situations that are prone to errors. This in-depth understanding of the relationship between cognitive load, task demands, and error occurrence is essential for the development of cognitive support systems to recognize and mitigate errors during complex surgical care in the operating room.
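The heart rate variability analysis referenced in these two studies reduces, at its core, to time-domain statistics over the inter-beat (R-R) interval series, with lower variability commonly read as higher cognitive load. A self-contained sketch; the synthetic interval series and the rest-versus-load parameters are illustrative, not study data:

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Time-domain HRV metrics from a series of R-R intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),
        "sdnn": rr.std(ddof=1),                 # overall variability
        "rmssd": np.sqrt(np.mean(diffs ** 2)),  # beat-to-beat (vagally mediated) variability
    }

rng = np.random.default_rng(0)
# Synthetic illustration: reduced HRV under a demanding task.
rr_rest = 850 + rng.normal(0, 50, size=300)   # resting: ~70 bpm, high variability
rr_load = 750 + rng.normal(0, 15, size=300)   # demanding task: faster, low variability

m_rest, m_load = hrv_metrics(rr_rest), hrv_metrics(rr_load)
print(m_rest["rmssd"], m_load["rmssd"])
```

Computed over sliding windows, such metrics yield the per-step cognitive load traces that populate the dashboard described above.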
ABSTRACT
Two different perspectives are the main focus of this book chapter: (1) A perspective that looks to the future, with the goal of devising rational associations of targeted inhibitors against distinct altered signaling-network pathways. This goal implies a sufficiently in-depth molecular diagnosis of the personal cancer of a given patient. A sufficiently robust and extended dynamic modeling will suggest rational combinations of the abovementioned oncoprotein inhibitors. The work toward new selective drugs, in the field of medicinal chemistry, is very intensive. Rational associations of selective drug inhibitors will progressively become a more realistic goal within the next 3-5 years. Before implementation in the standard oncology infrastructure of technologically advanced countries becomes possible, new (legal) rules will probably have to be established through a consensus process, at the level of both diagnostic and therapeutic behaviors. (2) The cancer patient of today is not the patient of 5-10 years from now. How can we support the choice of the most convenient (and already clinically allowed) treatment for an individual cancer patient, as of today? We will consider the present level of artificial intelligence (AI) sophistication and the continuous feeding, updating, and integration of cancer-related new data in AI systems. We will also report briefly on one of the most important projects in this field: IBM Watson US Cancer Centers. Allowing for a temporal shift, in the long term the two perspectives should move in the same direction, with a necessary time lag between them.
Subject(s)
Decision Support Systems, Clinical , Medical Oncology , Models, Biological , Neoplasms , Signal Transduction , Systems Biology , Computational Biology/methods , Computer Simulation , Databases, Genetic , Humans , Medical Oncology/methods , Neoplasms/etiology , Neoplasms/metabolism , Neoplasms/therapy , Precision Medicine/methods , Research Design , Systems Biology/methods
ABSTRACT
Continuous daily stress and high workload can have negative effects on individuals' physical and mental well-being. It has been shown that physiological signals may support the prediction of stress and workload. However, previous research is limited by the low diversity of signals used for such predictive tasks and by controlled experimental designs. In this paper we present (1) a pipeline for continuous, real-life acquisition of physiological and inertial signals; (2) a mobile agent application for on-the-go event annotation; and (3) an end-to-end signal processing and classification system for stress and workload from diverse signal streams. We study physiological signals such as Galvanic Skin Response (GSR), Skin Temperature (ST), Inter-Beat Interval (IBI), and Blood Volume Pulse (BVP) collected using a non-invasive wearable device, and inertial signals collected from accelerometer and gyroscope sensors. We combine them with subjects' inputs (e.g., event tagging) acquired using the agent application, and with their emotion regulation scores. In our experiments we explore signal combination and selection techniques for stress and workload prediction from subjects whose signals were recorded continuously during their daily life. We describe the end-to-end classification system's feature extraction, signal artifact removal, and classification stages. We show that a combination of physiological, inertial, and user event signals provides accurate prediction of stress for real-life users and signals.
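The signal-combination idea described above can be sketched as per-window statistical features concatenated across streams and fed to a classifier. Everything below (window lengths, sampling rates, the direction of each stress effect, and the random-forest choice) is an illustrative assumption, not the study's actual system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def window_features(signal):
    """Simple per-window statistics used as features for one signal stream."""
    return [signal.mean(), signal.std(), signal.min(), signal.max()]

def featurize(gsr, st, ibi, acc):
    """Concatenate features from all streams into one vector (signal combination)."""
    return np.concatenate([window_features(s) for s in (gsr, st, ibi, acc)])

# Synthetic 60 s windows: under "stress", GSR rises and IBI shortens (illustrative only).
def make_window(stressed):
    gsr = rng.normal(2.0 + 1.5 * stressed, 0.3, 240)    # 4 Hz electrodermal stream
    st  = rng.normal(33.0 - 0.5 * stressed, 0.2, 240)   # skin temperature
    ibi = rng.normal(850 - 120 * stressed, 40, 70)      # inter-beat intervals (ms)
    acc = rng.normal(0.0, 1.0 + 0.5 * stressed, 1920)   # 32 Hz inertial stream
    return featurize(gsr, st, ibi, acc)

y = rng.integers(0, 2, size=200)                 # 0 = calm, 1 = stressed labels
X = np.stack([make_window(s) for s in y])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:150], y[:150])
acc_test = clf.score(X[150:], y[150:])
print(acc_test)
```

Stream-wise featurization makes signal-selection experiments straightforward: dropping a stream just removes its slice of the feature vector.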
Subject(s)
Stress, Psychological , Workload/psychology , Accelerometry , Adult , Blood Volume/physiology , Emotions , Female , Galvanic Skin Response/physiology , Heart Rate/physiology , Humans , Interviews as Topic , Male , Middle Aged , Signal Processing, Computer-Assisted , Skin Temperature/physiology
ABSTRACT
Early detection of essential hypertension can support the prevention of cardiovascular disease, a leading cause of death. The traditional method of identifying hypertension involves periodic blood pressure measurement using brachial cuff-based devices. While these devices are non-invasive, they require manual setup for each measurement and are not suitable for continuous monitoring. Research has shown that physiological signals such as Heart Rate Variability, a measure of cardiac autonomic activity, are correlated with blood pressure. Wearable devices capable of measuring physiological signals such as Heart Rate, Galvanic Skin Response, and Skin Temperature have recently become ubiquitous. However, these measurements are prone to noise from various artifacts. In this paper (a) we present a data collection protocol for continuous non-invasive monitoring of physiological signals from wearable devices; (b) we implement signal processing techniques for signal estimation; (c) we explore how the continuous monitoring of these physiological signals can be used to identify hypertensive patients; and (d) we conduct a pilot study with a group of normotensive and hypertensive patients to test our techniques. We show that physiological signals extracted from wearable devices can distinguish between these two groups with high accuracy.
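The signal-estimation step for noisy wearable data can be illustrated with a simple artifact-rejection rule on the inter-beat interval series. The moving-median heuristic, its tolerance, and the injected artifacts below are assumptions for illustration, not the paper's actual method:

```python
import numpy as np

def clean_ibi(ibi_ms, rel_tol=0.2, win=5):
    """Reject inter-beat intervals deviating more than rel_tol from a local median.

    A simple moving-median rule, shown here as a stand-in for the (unspecified)
    signal-estimation step; wearable pipelines often use similar heuristics to
    remove motion artifacts before computing Heart Rate Variability.
    """
    ibi = np.asarray(ibi_ms, dtype=float)
    keep = np.ones(len(ibi), dtype=bool)
    for i in range(len(ibi)):
        lo, hi = max(0, i - win), min(len(ibi), i + win + 1)
        # Local median of neighbors, excluding the beat under test.
        local = np.median(np.concatenate([ibi[lo:i], ibi[i + 1:hi]]))
        if abs(ibi[i] - local) > rel_tol * local:
            keep[i] = False
    return ibi[keep]

rng = np.random.default_rng(0)
ibi = 800 + rng.normal(0, 30, size=200)  # plausible resting R-R series (ms)
ibi[[20, 90, 150]] = [400, 1600, 380]    # injected motion artifacts
cleaned = clean_ibi(ibi)
print(len(ibi), len(cleaned))
```

Cleaning the interval series before deriving variability features matters because a single missed or spurious beat can dominate difference-based HRV metrics.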