Results 1 - 11 of 11
1.
Methods Protoc ; 7(2)2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38525781

ABSTRACT

Gaining practical experience is indispensable for medical students. When students were denied access to hospitals during the COVID-19 pandemic in Romania, there was an urgent need for a solution that would let them develop the skills they would normally acquire in hospitals without being physically present in one. This was the motivation for developing a Virtual Case Presentation Platform. The platform lets medical students virtually reproduce, in clinically valid scenarios, the diagnostic process, treatment recommendation, and the patient interactions that usually take place in hospitals, using natural language through speech and text. On the platform, students also receive feedback from professors about their performance. Before development began, it was necessary to identify and understand all the aspects the platform should cover, so that the targeted experience could be reproduced in full. The proposed platform covers the aspects identified for the diagnostic process and treatment recommendation, and it enables medical students to develop essential skills for their future careers as doctors.

2.
Sensors (Basel) ; 23(20)2023 Oct 15.
Article in English | MEDLINE | ID: mdl-37896566

ABSTRACT

Autonomous driving is a complex task that requires high-level hierarchical reasoning. Various solutions based on hand-crafted rules, multi-modal systems, or end-to-end learning have been proposed over time, but they are not yet ready to deliver the accuracy and safety necessary for real-world urban autonomous driving. These methods require expensive hardware for data collection or environmental perception and are sensitive to distribution shifts, making large-scale adoption impractical. We present an approach that uses only monocular camera inputs to generate valuable data without any supervision. Our main contributions are a mechanism that produces steering annotations from unlabeled data, together with a separate pipeline that generates path labels in a completely self-supervised manner. Our method is thus a natural step towards leveraging the large amounts of available online data, which provide the complexity and diversity required to learn a robust autonomous driving policy.

3.
Healthcare (Basel) ; 11(19)2023 Sep 29.
Article in English | MEDLINE | ID: mdl-37830693

ABSTRACT

(1) Objective: We explore the predictive power of a novel stream of patient data, combining wearable devices and patient-reported outcomes (PROs), using an AI-first approach to classify the health status of Parkinson's disease (PD), multiple sclerosis (MS) and stroke patients (collectively named PMSS). (2) Background: Recent studies acknowledge the burden of neurological disorders on patients and on the healthcare systems managing them. To address this, effort is being invested in the digital transformation of health provisioning for PMSS patients. (3) Methods: We introduce the data collection journey within the ALAMEDA project, which continuously collects PRO data for a year through mobile applications and supplements it with data from minimally intrusive wearable devices (accelerometer bracelet, IMU sensor belt, ground-force-measuring insoles, and sleep mattress) worn for 1-2 weeks at each milestone. We present the data collection schedule and its feasibility, as well as the mapping of medical predictor variables to wearable device capabilities and mobile application functionality. (4) Results: A novel combination of wearable devices and smartphone applications required for the desired analysis of motor, sleep, emotional and quality-of-life outcomes is introduced. AI-first analysis methods are presented that aim to uncover the prediction capability of diverse longitudinal and cross-sectional setups (in terms of standard medical test targets). The mobile application development and usage schedule helps retain patient engagement and compliance with the study protocol.

4.
Healthcare (Basel) ; 11(12)2023 Jun 19.
Article in English | MEDLINE | ID: mdl-37372920

ABSTRACT

Stroke is one of the leading causes of disability and death worldwide, a severe medical condition for which new solutions for prevention, monitoring, and adequate treatment are needed. This paper proposes a shared decision-making (SDM) framework for developing innovative and effective artificial-intelligence-based solutions for the rehabilitation of stroke patients, by empowering patients to make decisions about the use of the devices and applications developed in the European project ALAMEDA. To develop a predictive tool for improving disability in stroke patients, key aspects of the stroke patient data collection journey, the monitored health parameters, and the specific variables covering motor, physical, emotional, cognitive, and sleep status are presented. The proposed SDM model involved the training and consultation of patients, medical staff, carers, and representatives grouped under the name of the Local Community Group (LCG). Consultation with the LCG members, 11 representatives including physicians, nurses, patients, and caregivers, led to the definition of a methodological framework for investigating the key aspects of the patient data collection journey for the stroke pilot, and of a specific questionnaire to collect stroke patients' requirements and preferences. The analysis of the questionnaire data yielded a set of general and specific guidelines specifying the principles by which patients decide to use wearable sensing devices and specific applications. The preferences and recommendations collected from the LCG members have already been implemented in the current stage of ALAMEDA system design and development.

5.
Sensors (Basel) ; 22(19)2022 Sep 20.
Article in English | MEDLINE | ID: mdl-36236213

ABSTRACT

Human action recognition has a wide range of applications, including Ambient Intelligence systems and user assistance. Starting from the recognized actions performed by the user, better human-computer interaction can be achieved, and social robots can provide improved assistance in real-time scenarios. In this context, the performance of the prediction system is a key aspect. The purpose of this paper is to introduce a neural network approach, based on various types of convolutional layers, that achieves good action recognition performance at a high inference speed. The experimental results show that our solution, based on a combination of graph convolutional networks (GCNs) and temporal convolutional networks (TCNs), is a suitable approach that reaches this goal. In addition to the neural network model, we design a pipeline that contains two stages for obtaining relevant geometric features, along with data augmentation and data preprocessing, which also contribute to the increased performance.
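For readers unfamiliar with the GCN + TCN pattern mentioned in this abstract, the PyTorch sketch below illustrates the general idea: a spatial graph convolution mixes features across connected skeleton joints, and a temporal convolution then mixes features along the frame axis. It is a minimal illustration only; the class name, channel sizes, kernel size, and the identity adjacency matrix are assumptions, not the authors' actual architecture.

```python
# Hypothetical GCN + TCN block for skeleton-based action recognition.
# All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GcnTcnBlock(nn.Module):
    def __init__(self, in_channels, out_channels, adjacency):
        super().__init__()
        # Fixed, pre-normalized joint adjacency matrix (V x V).
        self.register_buffer("A", adjacency)
        # Spatial graph convolution: mix features across connected joints.
        self.gcn = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        # Temporal convolution: mix features along the time axis per joint.
        self.tcn = nn.Conv2d(out_channels, out_channels,
                             kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        x = torch.einsum("nctv,vw->nctw", x, self.A)  # aggregate neighbors
        x = self.relu(self.gcn(x))
        x = self.relu(self.tcn(x))
        return x

# Toy usage: 17 joints, 3D coordinates, 64 frames.
V = 17
A = torch.eye(V)  # identity stands in for a real skeleton adjacency
block = GcnTcnBlock(3, 64, A)
out = block(torch.randn(2, 3, 64, V))
print(out.shape)  # torch.Size([2, 64, 64, 17])
```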


Subject(s)
Neural Networks, Computer; Recognition, Psychology; Humans; Skeleton
6.
Sensors (Basel) ; 22(15)2022 Aug 05.
Article in English | MEDLINE | ID: mdl-35957414

ABSTRACT

The importance of autonomously operating devices is rising with the increasing number of applications that run on robotic platforms or self-driving cars. The context of social robotics assumes that robotic platforms operate autonomously in environments where people perform their daily activities. The ability to re-identify the same people across a sequence of images is a critical component for meaningful human-robot interactions. Given the quick reactions a self-driving car requires for safety, accurate real-time tracking and trajectory prediction of people are mandatory. In this paper, we introduce a real-time people re-identification system based on a trajectory prediction method. We tackled the problem of trajectory prediction by introducing a system that combines semantic information from the environment with the social influence of the other participants in the scene in order to predict the motion of each individual. We evaluated the system on two case studies: social robotics and autonomous driving. In the context of social robotics, we integrated the proposed re-identification system as a module into the AMIRO framework, which is designed for social robotic applications and assistive care scenarios. We performed multiple experiments to evaluate the proposed method, considering both the trajectory prediction component and the person re-identification system. We assessed the behaviour of the method on existing datasets and on data acquired in real time, obtaining a quantitative evaluation of the system and a qualitative analysis. We report an improvement of over 5% in the MOTA metric when comparing our re-identification system with the existing module, in both evaluation scenarios, social robotics and autonomous driving.
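As a rough illustration of the kind of trajectory predictor described in this abstract, combining a per-agent motion encoding with environmental semantics and the social influence of other agents, the sketch below uses a GRU encoder, a mean-pooled social summary, and a linear prediction head. All module names, dimensions, and the fusion scheme are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical trajectory predictor fusing motion, semantic, and social cues.
import torch
import torch.nn as nn

class SocialSemanticPredictor(nn.Module):
    def __init__(self, hist_len=8, pred_len=12, sem_dim=16, hidden=64):
        super().__init__()
        self.motion_enc = nn.GRU(input_size=2, hidden_size=hidden,
                                 batch_first=True)
        self.head = nn.Linear(hidden + hidden + sem_dim, pred_len * 2)
        self.pred_len = pred_len

    def forward(self, past, semantic):
        # past: (num_agents, hist_len, 2) past (x, y) positions
        # semantic: (num_agents, sem_dim) features from the environment map
        _, h = self.motion_enc(past)           # h: (1, num_agents, hidden)
        h = h.squeeze(0)                       # (num_agents, hidden)
        # Social influence: mean of all agents' motion encodings.
        social = h.mean(dim=0, keepdim=True).expand_as(h)
        fused = torch.cat([h, social, semantic], dim=-1)
        out = self.head(fused)                 # (num_agents, pred_len * 2)
        return out.view(-1, self.pred_len, 2)  # future (x, y) positions

model = SocialSemanticPredictor()
future = model(torch.randn(5, 8, 2), torch.randn(5, 16))
print(future.shape)  # torch.Size([5, 12, 2])
```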


Subject(s)
Robotics; Humans; Motion; Robotics/methods
7.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 7638-7656, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34648435

ABSTRACT

We propose a dual system for unsupervised object segmentation in video, which brings together two modules with complementary properties: a space-time graph that discovers objects in videos and a deep network that learns powerful object features. The system uses an iterative knowledge exchange policy. A novel spectral space-time clustering process on the graph produces unsupervised segmentation masks that are passed to the network as pseudo-labels. The network learns to segment in single frames what the graph discovers in video, and passes back to the graph strong image-level features that improve its node-level features in the next iteration. Knowledge is exchanged for several cycles until convergence. The graph has one node per video pixel, yet object discovery is fast: a novel power iteration algorithm computes the main space-time cluster as the principal eigenvector of a special Feature-Motion matrix, without ever forming the matrix explicitly. A thorough experimental analysis validates our theoretical claims and proves the effectiveness of the cyclical knowledge exchange. We also perform experiments in the supervised scenario, incorporating features pretrained with human supervision. We reach state-of-the-art level in both unsupervised and supervised scenarios on four challenging datasets: DAVIS, SegTrack, YouTube-Objects, and DAVSOD. We will make our code publicly available.
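The key computational trick, obtaining a principal eigenvector by power iteration without ever materializing the matrix, can be sketched in a few lines of numpy. The snippet below illustrates only the generic technique, using a matrix-vector product callback; the actual Feature-Motion operator is the paper's own construction and is not reproduced here.

```python
# Generic power iteration with an implicit matrix-vector product.
import numpy as np

def power_iteration(matvec, dim, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = matvec(v)              # apply the operator implicitly
        v /= np.linalg.norm(v)     # re-normalize each step
    return v

# Toy check against an explicit symmetric matrix.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
v = power_iteration(lambda x: M @ x, dim=2)
print(v, M @ v / v)  # the ratio approximates the largest eigenvalue
```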

8.
Sensors (Basel) ; 21(6)2021 Mar 15.
Article in English | MEDLINE | ID: mdl-33803929

ABSTRACT

Action recognition plays an important role in various applications such as video monitoring, automatic video indexing, crowd analysis, human-machine interaction, smart homes, and personal assistive robotics. In this paper, we propose improvements to several methods for human action recognition from videos that work with data represented as skeleton poses. These methods are based on the most widely used techniques for this problem: Graph Convolutional Networks (GCNs), Temporal Convolutional Networks (TCNs) and Recurrent Neural Networks (RNNs). The paper first explores and compares different ways to extract the most relevant spatial and temporal characteristics from a sequence of frames describing an action. Based on this comparative analysis, we show how a TCN-type unit can be extended to work on the characteristics extracted from the spatial domain as well. To validate our approach, we test it on a benchmark commonly used for human action recognition and show that our solution obtains results comparable to the state of the art, with a significant increase in inference speed.
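A minimal sketch of the kind of TCN unit referenced in this abstract, applied to per-frame feature vectors (for example, features already extracted from the spatial domain), is given below. The kernel size, dilation, residual structure, and channel sizes are illustrative assumptions rather than the paper's exact design.

```python
# Hypothetical residual temporal-convolution (TCN) unit over frame features.
import torch
import torch.nn as nn

class TemporalUnit(nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=2):
        super().__init__()
        pad = (kernel_size - 1) * dilation // 2
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=pad, dilation=dilation)
        self.norm = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, frames); the residual connection keeps the
        # original signal alongside the temporally convolved one.
        return self.relu(x + self.norm(self.conv(x)))

unit = TemporalUnit(channels=128)
out = unit(torch.randn(4, 128, 300))  # 300-frame sequences
print(out.shape)  # torch.Size([4, 128, 300])
```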

9.
Sensors (Basel) ; 20(24)2020 Dec 18.
Article in English | MEDLINE | ID: mdl-33352943

ABSTRACT

Recent studies in social robotics show that it can provide economic efficiency and growth in domains such as retail, entertainment, and active and assisted living (AAL). Recent work also highlights that users expect affordable social robotics platforms that provide focused, specific assistance in a robust manner. In this paper, we present the AMIRO social robotics framework, designed in a modular and robust way for assistive care scenarios. The framework includes robotic services for navigation, person detection and recognition, multilingual natural language interaction and dialogue management, as well as activity recognition and general behavior composition. We present the platform-independent implementation of AMIRO, based on the Robot Operating System (ROS). We focus on quantitative evaluations of each functionality module, discussing their performance in different settings and possible improvements. We showcase the deployment of the AMIRO framework on a popular social robotics platform, the Pepper robot, and present the experience of developing a complex user interaction scenario that employs all available functionality modules within AMIRO.
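To make the modular, ROS-based design concrete, the hypothetical rospy sketch below shows how one functionality module could be wired as an independent node that subscribes to a perception topic and publishes a dialogue request. The topic names and message contents are invented for illustration and do not reflect the actual AMIRO interfaces.

```python
# Hypothetical ROS node for a single AMIRO-style functionality module.
import rospy
from std_msgs.msg import String

def main():
    rospy.init_node("person_greeting_module")
    pub = rospy.Publisher("/amiro/dialogue/say", String, queue_size=10)

    def on_person(msg):
        # msg.data is assumed to carry a recognized person's name.
        pub.publish(String(data="Hello, %s!" % msg.data))

    rospy.Subscriber("/amiro/perception/person_name", String, on_person)
    rospy.spin()  # keep the module running as an independent ROS node

if __name__ == "__main__":
    main()
```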

10.
Sensors (Basel) ; 19(2)2019 Jan 21.
Article in English | MEDLINE | ID: mdl-30669628

ABSTRACT

Robust action recognition methods are a cornerstone of Ambient Assisted Living (AAL) systems that employ optical devices. Using 3D skeleton joints extracted from depth images taken with time-of-flight (ToF) cameras has been a popular solution for these tasks. Though seemingly scarce in information compared to its RGB or depth-image counterparts, the skeletal representation has proven effective for action recognition. This paper explores different interpretations of both the spatial and the temporal dimensions of a sequence of frames describing an action. We show that rather intuitive approaches, often borrowed from other computer vision tasks, can improve accuracy. We report results based on these modifications and propose an architecture that uses temporal convolutions, with results comparable to the state of the art.


Subject(s)
Bone and Bones/anatomy & histology; Imaging, Three-Dimensional; Joints/anatomy & histology; Movement; Pattern Recognition, Automated; Algorithms; Databases as Topic; Humans; Neural Networks, Computer; Time Factors
11.
Sensors (Basel) ; 14(6): 11110-34, 2014 Jun 23.
Article in English | MEDLINE | ID: mdl-24960085

ABSTRACT

In the field of ambient assisted living, the best results are achieved by systems that are less intrusive and more intelligent, that can easily integrate both formal and informal caregivers, and that can adapt to changes in the situation of the elderly or disabled person. This paper presents a graph-based representation of context information and a simple, intuitive method for situation recognition. Both the input and the results are easy to visualize, understand, and use. Experiments have been performed on several AAL-specific scenarios.
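As an illustration of the general idea of a graph-based context model with situation recognition, the sketch below represents the current context as a labeled directed graph and recognizes a situation when a small pattern graph matches a subgraph of it, here via networkx subgraph isomorphism. The labels, the example situation, and the matching strategy are assumptions for illustration, not the paper's actual representation or algorithm.

```python
# Hypothetical context graph and pattern-based situation recognition.
import networkx as nx
from networkx.algorithms import isomorphism

# Current context: entities as nodes, relations as labeled edges.
context = nx.DiGraph()
context.add_node("mary", type="person")
context.add_node("kitchen", type="room")
context.add_node("stove", type="appliance", state="on")
context.add_edge("mary", "kitchen", rel="located_in")
context.add_edge("stove", "kitchen", rel="located_in")

# Situation pattern: a person in the same room as an appliance that is on.
pattern = nx.DiGraph()
pattern.add_node("p", type="person")
pattern.add_node("r", type="room")
pattern.add_node("a", type="appliance", state="on")
pattern.add_edge("p", "r", rel="located_in")
pattern.add_edge("a", "r", rel="located_in")

matcher = isomorphism.DiGraphMatcher(
    context, pattern,
    node_match=isomorphism.categorical_node_match(
        ["type", "state"], [None, None]),
    edge_match=isomorphism.categorical_edge_match("rel", None))
print("situation recognized:", matcher.subgraph_is_isomorphic())
```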


Subject(s)
Assisted Living Facilities/methods; Geriatric Assessment/methods; Models, Theoretical; Monitoring, Ambulatory/methods; Pattern Recognition, Automated/methods; Aged; Aged, 80 and over; Equipment Design; Equipment Failure Analysis; Female; Humans; Male; Monitoring, Ambulatory/instrumentation