Results 1 - 4 of 4
1.
Sci Rep ; 14(1): 21558, 2024 09 16.
Article in English | MEDLINE | ID: mdl-39285215

ABSTRACT

Human movement augmentation is a growing field of research. A promising control strategy for augmented effectors is to decode motor imagery (MI) from electroencephalography (EEG). However, performing MI of a supernumerary effector is challenging; MI training is one potential solution. In this study, we investigate the validity of a virtual reality (VR) environment as a medium for eliciting MI neural activations for a supernumerary thumb. Specifically, we assess whether a distinct neural signature for MI of a supernumerary thumb can be induced in VR. Twenty participants underwent a two-fold experiment in which they observed movements of natural and supernumerary thumbs and then engaged in MI of the observed movements. Spectral power and event-related desynchronization (ERD) analyses at the group level showed that the MI signature associated with the supernumerary thumb was indeed distinct, differing significantly from both the baseline and the MI signature of the natural thumb, while single-trial classification showed that it is distinguishable from them with 78% and 69% accuracy, respectively. Furthermore, group-level spectral power and ERD analyses showed that the MI signatures associated with directional movements of the supernumerary thumb, flexion and extension, were also significantly different, and single-trial classification distinguished these movements with 60% accuracy. Fine-tuning the models further increased the respective classification accuracies, indicating the potential presence of personalized features across subjects.
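To illustrate the ERD measure this abstract relies on, here is a minimal NumPy sketch, not the authors' pipeline: the band-pass pre-filtering, the window bounds, and the percent-change convention are assumptions made for the example.

import numpy as np

def erd_percent(epochs, baseline_window, task_window, sfreq):
    # epochs: (n_trials, n_samples) array, band-pass filtered to the band
    # of interest (e.g., mu, 8-12 Hz); windows are (start_s, end_s) tuples.
    power = epochs ** 2                        # instantaneous power
    b0, b1 = (int(t * sfreq) for t in baseline_window)
    t0, t1 = (int(t * sfreq) for t in task_window)
    p_base = power[:, b0:b1].mean()            # mean baseline power
    p_task = power[:, t0:t1].mean(axis=1)      # per-trial task-window power
    return (p_task - p_base) / p_base * 100.0  # negative values indicate ERD

Per-trial ERD values like these are also the kind of feature a single-trial classifier, such as the one reported above, could consume.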


Subject(s)
Electroencephalography , Movement , Thumb , Virtual Reality , Humans , Thumb/physiology , Electroencephalography/methods , Male , Female , Adult , Young Adult , Movement/physiology , Imagination/physiology
2.
Neuron ; 111(21): 3371-3374, 2023 11 01.
Article in English | MEDLINE | ID: mdl-37918356

ABSTRACT

An AI and robotics researcher and entrepreneur, Hanan Salam tells Neuron about her work on Artificial Social Intelligence and her experience as co-founder of Women in AI. Her passion for science, equity, and education nurtures her advocacy of technology for the common good and her activism for the empowerment of women.


Subject(s)
Artificial Intelligence , Robotics , Humans , Female , Technology
3.
Artif Intell Med ; 144: 102638, 2023 10.
Article in English | MEDLINE | ID: mdl-37783543

ABSTRACT

In this paper, we propose a holistic AI-based pharmacovigilance optimization approach using patients' social media data. Instead of focusing on detecting and identifying Adverse Drug Events (ADEs) in social media posts at single time points, we propose a holistic approach that tracks the evolution of different user-behavior indicators over time. We examine several NLP-based indicators, such as word frequency, semantic similarity, Adverse Drug Reaction mentions, and sentiment analysis, and we introduce a classification approach that labels time periods as normal or abnormal based on patient comments. Together with the user-behavior indicators, this approach can optimize the pharmacovigilance process by flagging periods that need immediate attention and further investigation. We focus on the Levothyrox® case in France, which attracted media attention after a change in the medication's formula affected patient behavior on medical forums. For classification, we propose a deep learning architecture called Word Cloud Convolutional Neural Network (WC-CNN), trained on word clouds generated from patient comments. We evaluate different temporal resolutions and NLP pre-processing techniques, finding that a monthly resolution combined with the proposed indicators can effectively detect new safety signals, with an accuracy of 75%. The code is open source and available via GitHub.
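As a rough illustration of the word-cloud-to-CNN idea, the sketch below renders one period's comments with the wordcloud library and feeds the image to a toy two-layer CNN. The actual WC-CNN architecture, image size, and rendering settings are not specified in the abstract, so everything below is an assumed stand-in.

import torch
import torch.nn as nn
from wordcloud import WordCloud

def period_to_image(comments, size=128):
    # Render one time period's comments as a word-cloud image tensor.
    arr = WordCloud(width=size, height=size).generate(" ".join(comments)).to_array()
    return torch.from_numpy(arr).permute(2, 0, 1).float() / 255.0  # (3, H, W)

class TinyWCCNN(nn.Module):
    # Toy CNN classifying a word-cloud image as a normal/abnormal period.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
        )
        self.head = nn.Linear(32 * 8 * 8, 2)   # 128x128 input -> 8x8 maps

    def forward(self, x):                      # x: (batch, 3, 128, 128)
        return self.head(self.features(x).flatten(1))

At a monthly resolution, each month's comments would become one image/label pair for training such a classifier.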


Subject(s)
Drug-Related Side Effects and Adverse Reactions , Social Media , Humans , Pharmacovigilance , Neural Networks, Computer , Drug-Related Side Effects and Adverse Reactions/epidemiology , Semantics
4.
Comput Methods Programs Biomed ; 211: 106433, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34614452

ABSTRACT

BACKGROUND AND OBJECTIVE: Major Depressive Disorder is a highly prevalent and disabling mental health condition. Numerous studies have explored multimodal fusion systems that combine visual, audio, and textual features via deep learning architectures for clinical depression recognition, yet no comparative analysis of multimodal depression analysis has been proposed in the literature. METHODS: This paper presents an up-to-date literature overview of multimodal depression recognition and an extensive comparative analysis of deep learning architectures for the task. First, Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks based on audio features are studied. Then, early-level and model-level fusion of deep audio features with visual and textual features through LSTM and CNN architectures is investigated. RESULTS: The proposed architectures are evaluated with a hold-out strategy on the DAIC-WOZ dataset (80% training, 10% validation, 10% test split) for both binary and severity-level depression recognition. The experiments demonstrated that: (1) LSTM-based audio features perform slightly better than CNN-based ones, with an accuracy of 66.25% versus 65.60% for binary depression classes; (2) model-level fusion of deep audio and visual features using an LSTM network performed best, with an accuracy of 77.16%, a precision of 53% for the depressed class, and a precision of 83% for the non-depressed class. The same network obtained a normalized Root Mean Square Error (RMSE) of 0.15 for depression severity level prediction. Using a Leave-One-Subject-Out strategy, it achieved an accuracy of 95.38% for binary depression detection and a normalized RMSE of 0.1476 for depression severity level prediction. Our best-performing architecture outperforms all state-of-the-art approaches on the DAIC-WOZ dataset. CONCLUSIONS: The obtained results show that the proposed LSTM-based architectures surpass the CNN-based ones, as they learn temporal representations of the multimodal features. Furthermore, model-level fusion of audio and visual features using an LSTM network leads to the best performance. Our best-performing architecture successfully detects depression from a speech segment of less than 8 seconds, with an average prediction computation time of less than 6 ms, making it suitable for real-world clinical applications.
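A minimal sketch of model-level audio-visual fusion with LSTMs, in the spirit of the best-performing architecture described above; the feature dimensions, hidden size, and single-layer design are assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class AudioVisualFusion(nn.Module):
    # One LSTM per modality; the final hidden states are concatenated
    # (model-level fusion) before a linear classification head.
    def __init__(self, audio_dim=40, visual_dim=68, hidden=64):
        super().__init__()
        self.audio_lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.visual_lstm = nn.LSTM(visual_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)   # depressed / non-depressed

    def forward(self, audio_seq, visual_seq):  # (batch, time, features) each
        _, (h_a, _) = self.audio_lstm(audio_seq)
        _, (h_v, _) = self.visual_lstm(visual_seq)
        fused = torch.cat([h_a[-1], h_v[-1]], dim=-1)
        return self.head(fused)

The design choice model-level fusion refers to is visible here: each modality is summarized independently before the representations are merged, in contrast to early-level fusion, where raw features would be concatenated before any sequence modeling.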


Subject(s)
Depressive Disorder, Major , Depression/diagnosis , Depressive Disorder, Major/diagnosis , Humans , Neural Networks, Computer