ABSTRACT
Disease detection from smartphone data represents an open research challenge in mobile health (m-health) systems. COVID-19 and its respiratory symptoms are an important case study in this area, and their early detection is a potentially effective instrument to counteract the pandemic. The efficacy of this solution mainly depends on the performance of the AI algorithms applied to the collected data and on the possibility of implementing them directly on the users' mobile devices. Considering these issues, and the limited amount of available data, in this paper we present an experimental evaluation of 3 different deep learning models, also compared with hand-crafted features, and of the two main transfer learning approaches in the considered scenario: feature extraction and fine-tuning. Specifically, we considered VGGish, YAMNet, and L3-Net (including 12 different configurations), evaluated through user-independent experiments on 4 different datasets (13,447 samples in total). Results clearly show the advantages of L3-Net in all the experimental settings: it outperforms the other solutions by 12.3% in terms of Precision-Recall AUC as a feature extractor, and by 10% when the model is fine-tuned. Moreover, we note that fine-tuning only the fully-connected layers of the pre-trained models generally leads to worse performance, with an average drop of 6.6% with respect to feature extraction. Finally, we evaluate the memory footprints of the different models for their possible deployment on commercial mobile devices.
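The abstract above reports Precision-Recall AUC as its comparison metric. As a minimal illustrative sketch (pure Python, not the authors' evaluation code), PR-AUC can be computed by sweeping the decision threshold over sorted classifier scores and integrating precision over recall:

```python
def precision_recall_auc(scores, labels):
    """Area under the precision-recall curve for a binary classifier.

    scores: per-sample confidence (higher = more likely positive)
    labels: ground-truth binary labels (1 = positive class)
    """
    # Sweep the decision threshold by visiting samples in decreasing score order.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    points = [(0.0, 1.0)]  # (recall, precision); curve starts at recall 0
    for i in order:
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        points.append((tp / total_pos, tp / (tp + fp)))
    # Step-wise integration of precision over recall (as in average precision).
    auc = 0.0
    for (r0, _), (r1, p1) in zip(points, points[1:]):
        auc += (r1 - r0) * p1
    return auc
```

A perfect ranking (all positives scored above all negatives) yields a PR-AUC of 1.0, while a degraded ranking lowers the value; this is the quantity in which L3-Net's reported 12.3% advantage is expressed.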
ABSTRACT
Wearable sensing devices can provide high-resolution data useful to characterise and identify complex human behaviours. Sensing human social interactions through wearable devices represents one of the emerging fields in mobile social sensing, considering their impact on different user categories and social contexts. However, it is important to limit the collection and use of sensitive information characterising individual users and their social interactions in order to maintain user compliance. For this reason, we decided to focus mainly on physical proximity and, specifically, on the analysis of the BLE wireless signals commonly used by commercial mobile devices. In this work, we present the SocializeME framework, designed to collect proximity information and to detect social interactions through heterogeneous personal mobile devices. We also present the results of an experimental data collection campaign conducted with real users, highlighting technical limitations and performance in terms of RSS quality, packet loss, and channel symmetry, and how they are influenced by different configurations of the user's body and positions of the personal device. Specifically, we obtained a dataset with more than 820,000 collected Bluetooth signals (BLE beacons), with a total monitoring time of over 11 hours. The collected dataset reproduces 4 different configurations by mixing two user postures (standing and sitting) with different positions of the receiving device (in hand, in the front pocket, and in the back pocket). The large number of experiments in these configurations covers the common ways of holding a mobile device and the layouts of a dyad involved in a social interaction. We also present the results obtained by the SME-D algorithm, designed to automatically detect social interactions based on the collected wireless signals, which achieved an overall accuracy of 81.56% and an F-score of 84.7%.
The collected and labelled dataset is also released to the mobile social sensing community, so that new algorithms can be evaluated and compared.
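The abstract does not specify how SME-D works internally. As a purely illustrative sketch (an assumption, not the actual SME-D algorithm), proximity-based interaction detection from BLE signals can be approximated by smoothing the RSSI stream and extracting sufficiently long intervals above a closeness threshold:

```python
from collections import deque

def detect_interactions(samples, rssi_threshold=-70, window=5, min_len=3):
    """Detect candidate social-interaction intervals from BLE RSSI samples.

    samples: time-ordered list of (timestamp, rssi) tuples.
    A moving average over `window` samples smooths RSSI fluctuations;
    runs of at least `min_len` consecutive samples whose smoothed RSSI
    is above `rssi_threshold` (devices physically close) become intervals.
    All parameter values here are illustrative defaults, not tuned ones.
    """
    recent = deque(maxlen=window)
    intervals, start, count, last = [], None, 0, None
    for ts, rssi in samples:
        recent.append(rssi)
        smoothed = sum(recent) / len(recent)
        if smoothed >= rssi_threshold:
            if start is None:
                start, count = ts, 0
            count += 1
            last = ts
        else:
            if start is not None and count >= min_len:
                intervals.append((start, last))
            start = None
    if start is not None and count >= min_len:
        intervals.append((start, last))
    return intervals
```

Against such a detector, the ground-truth interaction intervals in the dataset allow computing the accuracy and F-score figures reported above.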
ABSTRACT
Nowadays, Artificial Intelligence (AI) has become a fundamental component of healthcare applications, both clinical and remote, but the best performing AI systems are often too complex to be self-explaining. Explainable AI (XAI) techniques aim to unveil the reasoning behind a system's predictions and decisions, and they become even more critical when dealing with sensitive and personal health data. It is worth noting that XAI has not gathered the same attention across different research areas and data types, especially in healthcare. In particular, many clinical and remote health applications are based on tabular and time series data, respectively, for which XAI is not commonly analysed, while computer vision and Natural Language Processing (NLP) remain the reference applications. To provide an overview of the XAI methods most suitable for tabular and time series data in the healthcare domain, this paper reviews the literature of the last 5 years, illustrating the type of explanations generated and the efforts made to evaluate their relevance and quality. Specifically, we identify clinical validation, consistency assessment, objective and standardised quality evaluation, and human-centered quality assessment as key features to ensure effective explanations for end users. Finally, we highlight the main research challenges in the field as well as the limitations of existing XAI methods.
ABSTRACT
This paper describes a data collection campaign and the resulting dataset, derived from smartphone sensors, characterizing the daily life activities of 3 volunteers over a period of two weeks. The dataset is released as a collection of CSV files containing more than 45K data samples, where each sample is composed of 1,332 features related to a heterogeneous set of physical and virtual sensors, including motion sensors, running applications, devices in proximity, and weather conditions. Moreover, each data sample is associated with a ground truth label that describes the user's activity and the situation in which she was involved during the sensing experiment (e.g., working, at a restaurant, or doing sport activities). To avoid introducing any bias during the data collection, we performed the sensing experiment in the wild, that is, by using the volunteers' own devices and without imposing any constraint on the users' behavior. For this reason, the collected dataset represents a useful source of real data to both define and evaluate a broad set of novel context-aware solutions (both algorithms and protocols) that aim to adapt their behavior to changes in the user's situation in a mobile environment.
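Since the dataset is released as CSV files pairing sensor features with a ground-truth label, a minimal loading sketch can illustrate the expected structure. The column names below (`accel_x`, `label`, etc.) are hypothetical placeholders, not the dataset's actual schema:

```python
import csv
import io

def load_samples(csv_text, label_column="label"):
    """Parse CSV rows into (features, label) pairs.

    Assumes one header row; every column except `label_column` is
    treated as a numeric sensor feature. Column names are hypothetical.
    """
    samples = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        label = row.pop(label_column)
        # Remaining columns are numeric sensor features.
        features = {name: float(value) for name, value in row.items()}
        samples.append((features, label))
    return samples
```

In practice each real row would carry 1,332 feature columns; the same parsing logic applies regardless of the feature count.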
ABSTRACT
This paper describes a data collection campaign and a dataset of BLE beacons for detecting and analysing human social interactions. The dataset was collected by involving 15 volunteers who interacted in indoor environments for a total of 11 hours of activity. The dataset is released as a collection of CSV files reporting, for each beacon, a timestamp, the RSSI (Received Signal Strength Indicator), and unique identifiers of the emitting and receiving devices. Volunteers wore a wristband equipped with BLE tags emitting beacons at a fixed rate and carried a mobile device running an application able to collect and store the beacons. We organized 6 interaction sessions, designed to reproduce the three common stages of an interaction (Non-Interaction, Approaching, and Interaction). Moreover, we reproduced interactions by varying the volunteer's posture as well as the position of the receiving device. The dataset is released with a ground truth annotation reporting the exact time intervals during which volunteers actually interacted. The combination of such factors provides a rich dataset useful to experiment with algorithms for detecting interactions and for analyzing the dynamics of interactions in a real-world setting. We present the dataset and its evaluation in detail in "Sensing Social Interactions through BLE Beacons and Commercial Mobile Devices", in which we focus on two orthogonal analyses: the quality of the dataset and the RSSI symmetry of the channel during the interaction stage between pairs of users.
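One of the analyses mentioned above concerns the RSSI symmetry of the channel between a pair of users. As an illustrative sketch (an assumed metric, not necessarily the one used in the paper), symmetry can be quantified by comparing the mean RSSI observed in each direction of the dyad:

```python
def rssi_asymmetry(a_to_b, b_to_a):
    """Quantify channel symmetry between a dyad during an interaction.

    a_to_b / b_to_a: RSSI values (in dBm) received by each side of the
    pair over the same interval. Returns the absolute difference of the
    directional mean RSSI; a value near 0 dB indicates a symmetric channel.
    """
    mean_ab = sum(a_to_b) / len(a_to_b)
    mean_ba = sum(b_to_a) / len(b_to_a)
    return abs(mean_ab - mean_ba)
```

Applied per interaction stage, such a metric highlights how device position (hand, front pocket, back pocket) and posture affect the symmetry of the measured signal.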