1.
Sensors (Basel) ; 22(12)2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35746121

ABSTRACT

COVID-19 occurs due to infection through respiratory droplets containing the SARS-CoV-2 virus, which are released when someone sneezes, coughs, or talks. The gold-standard exam to detect the virus is Real-Time Polymerase Chain Reaction (RT-PCR); however, this test is expensive, may require up to 3 days after infection to give a reliable result, and, under high demand, can overwhelm laboratories and significantly delay results. Biomedical data (oxygen saturation level (SpO2), body temperature, heart rate, and cough) are acquired from individuals and used to help infer COVID-19 infection with machine learning algorithms. The goal of this study is to introduce the Integrated Portable Medical Assistant (IPMA), a multimodal piece of equipment that collects biomedical data, such as oxygen saturation level, body temperature, heart rate, and cough sound, and helps infer a COVID-19 diagnosis through machine learning algorithms. The IPMA can store the biomedical data for continuing studies and can also be used to infer other respiratory diseases. Quadratic kernel-free non-linear Support Vector Machine (QSVM) and Decision Tree (DT) classifiers were applied to three datasets containing cough, speech, body temperature, heart rate, and SpO2 data, obtaining, for COVID-19 infection inference, an Accuracy rate (ACC) of up to approximately 88.0% with an Area Under the Curve (AUC) of 0.85, and an ACC of up to 99% with an AUC of 0.94. When applied to the data acquired with the IPMA, these algorithms achieved 100% accuracy. Regarding ease of use, 36 volunteers rated the IPMA as highly usable according to two evaluation metrics, the System Usability Scale (SUS) and the Post Study System Usability Questionnaire (PSSUQ), with scores of 85.5 and 1.41, respectively. In light of the worldwide need for smart equipment to help fight the COVID-19 pandemic, this new equipment may help screen for COVID-19 using data collected from biomedical signals and cough sounds together with machine learning algorithms.
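
To make the classification step concrete, the following is a minimal sketch of the kind of pipeline described: a Decision Tree and a degree-2 polynomial-kernel SVM, the latter used here only as a stand-in for the kernel-free QSVM formulation (which scikit-learn does not provide). All feature values and labels below are synthetic placeholders, not the paper's data.

```python
# Sketch: COVID-19 inference from tabular biomedical features with a
# Decision Tree and a degree-2 polynomial SVM (stand-in for QSVM).
# Features and labels are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.normal(96, 2, n),      # SpO2 (%)
    rng.normal(36.8, 0.6, n),  # body temperature (deg C)
    rng.normal(80, 12, n),     # heart rate (BPM)
])
y = rng.integers(0, 2, n)      # 0 = negative, 1 = positive (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("DT", DecisionTreeClassifier(max_depth=4, random_state=0)),
                  ("SVM(poly-2)", SVC(kernel="poly", degree=2, probability=True))]:
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    print(name,
          "ACC=%.3f" % accuracy_score(y_te, clf.predict(X_te)),
          "AUC=%.3f" % roc_auc_score(y_te, proba))
```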


Subjects
COVID-19, Algorithms, COVID-19/diagnosis, Cough/diagnosis, Humans, Machine Learning, Pandemics, SARS-CoV-2
2.
Physiol Meas ; 43(7)2022 07 25.
Article in English | MEDLINE | ID: mdl-35728793

ABSTRACT

Objective. This study proposes a U-net-shaped Deep Neural Network (DNN) model to extract remote photoplethysmography (rPPG) signals from skin color signals in order to estimate Pulse Rate (PR). Approach. Three input window sizes are used in the DNN: 256 samples (5.12 s), 512 samples (10.24 s), and 1024 samples (20.48 s). A data augmentation algorithm based on interpolation is also used to artificially increase the number of training samples. Main results. The proposed model outperformed a prior-knowledge rPPG method for input windows of 256 and 512 samples, while the data augmentation procedure increased performance only for the 1024-sample window. The trained model achieved a Mean Absolute Error (MAE) of 3.97 Beats per Minute (BPM) and a Root Mean Squared Error (RMSE) of 6.47 BPM for the 256-sample window, and an MAE of 3.00 BPM and an RMSE of 5.45 BPM for the 512-sample window. In contrast, the prior-knowledge rPPG method obtained an MAE of 8.04 BPM and an RMSE of 16.63 BPM for the 256-sample window, and an MAE of 3.49 BPM and an RMSE of 7.92 BPM for the 512-sample window. For the longest window (1024 samples), the concordance between the PRs predicted by the DNNs and the true PRs was higher when the data augmentation procedure was applied. Significance. These results demonstrate the considerable potential of this technique for PR estimation, showing that the proposed DNN can generate reliable rPPG signals even with short window lengths (5.12 s and 10.24 s), and hence needs less data for faster rPPG measurement and PR estimation.
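
As an illustration of the interpolation-based augmentation idea, the sketch below resamples an rPPG window in time, which shifts its apparent pulse rate by a known factor. The 50 Hz sampling rate follows from the stated window durations (512 samples = 10.24 s), but the exact augmentation procedure in the paper may differ.

```python
# Sketch of interpolation-based augmentation for rPPG windows:
# stretching a window in time lowers its apparent pulse rate by a
# known factor, yielding a new (signal, label) training pair.
import numpy as np

def augment_window(signal, pr_bpm, factor, window=512):
    """Resample `signal` by `factor` and rescale its pulse-rate label."""
    n = len(signal)
    t_old = np.linspace(0.0, 1.0, n)
    t_new = np.linspace(0.0, 1.0, int(round(n * factor)))
    stretched = np.interp(t_new, t_old, signal)
    out = stretched[:window]       # keep a fixed-size network input
    return out, pr_bpm / factor    # time-stretch divides the rate

fs = 50.0                          # Hz: 512 samples = 10.24 s
t = np.arange(1024) / fs
sig = np.sin(2 * np.pi * (72 / 60.0) * t)  # synthetic 72 BPM pulse wave
win, new_pr = augment_window(sig, 72.0, factor=1.2)
print(len(win), round(new_pr, 1))  # 512 samples, ~60.0 BPM
```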


Subjects
Deep Learning, Photoplethysmography, Algorithms, Heart Rate, Neural Networks (Computer), Photoplethysmography/methods, Signal Processing, Computer-Assisted
3.
Sensors (Basel) ; 21(24)2021 Dec 16.
Article in English | MEDLINE | ID: mdl-34960514

ABSTRACT

This work introduces a new socially assistive robot termed MARIA T21 ("Mobile Autonomous Robot for Interaction with Autistics", with the suffix T21 standing for "Trisomy 21", the designation for individuals with Down syndrome). The robot is used in psychomotor therapies for children with Down syndrome (helping to improve their proprioception, postural balance, and gait) as well as in psychosocial and cognitive therapies for children with autism spectrum disorder (ASD). As a novelty, the robot uses an embedded mini video projector to project Serious Games onto the floor or tables, making already-established therapies more fun for these children and thus creating a motivating and facilitating effect for both children and therapists. The Serious Games were developed in Python with the Pygame library, following theoretical bases of behavioral psychology for these children, and are integrated into the robot through the Robot Operating System (ROS). Encouraging results from the child-robot interaction are shown, based on outcomes obtained with the Goal Attainment Scale. The Serious Games were considered suitable according to both the "Guidelines for Game Design of Serious Games for Children" and the "Evaluation of the Psychological Bases" used during the games' development. Thus, this pilot study seeks to demonstrate that using a robot as a therapeutic tool together with the concept of Serious Games is an innovative and promising way to help health professionals conduct therapies with children with ASD and Down syndrome. Due to health restrictions imposed by the COVID-19 pandemic, the sample was limited to eight children (one child with typical development and one with Trisomy 21, both female, and six children with ASD, one girl and five boys), from 4 to 9 years of age. For the non-typically developing children, the inclusion criteria were a conclusive diagnosis and completion of at least one year of therapy. The protocol was carried out in an infant psychotherapy room with three video cameras, supervised by a group of researchers and a therapist. The experiments were divided into four stages. In the first stage, the robot was introduced and approached the child to establish eye contact and to assess proxemics and child-robot interaction. In the second stage, the robot projected Serious Games on the floor and issued verbal commands, to evaluate the child's willingness to perform the proposed tasks. In the third stage, the games were played for a set time, with the robot sending positive-reinforcement messages to encourage the child to complete the game. Finally, in the fourth stage, the robot ended the games and said goodbye to the child, using messages intended to build a closer relationship with the child.
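
For flavor, below is a minimal Pygame loop of the kind such projected Serious Games build on: a target appears, the player hits it, and a positive-reinforcement message is shown. The game logic, names, and assets are hypothetical, not the paper's actual games.

```python
# Minimal Pygame sketch of a projected "touch the target" game with
# positive reinforcement; everything here is illustrative.
import random
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
pygame.display.set_caption("Touch the circle")
font = pygame.font.SysFont(None, 48)
clock = pygame.time.Clock()

target = pygame.Rect(random.randint(0, 720), random.randint(0, 520), 80, 80)
score, running = 0, True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEBUTTONDOWN and target.collidepoint(event.pos):
            score += 1  # positive reinforcement on success
            target.topleft = (random.randint(0, 720), random.randint(0, 520))
    screen.fill((20, 20, 60))
    pygame.draw.ellipse(screen, (250, 200, 40), target)
    screen.blit(font.render(f"Well done! x{score}", True, (255, 255, 255)), (20, 20))
    pygame.display.flip()
    clock.tick(30)
pygame.quit()
```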


Subjects
Autism Spectrum Disorder, COVID-19, Down Syndrome, Robotics, Autism Spectrum Disorder/therapy, Down Syndrome/therapy, Female, Humans, Male, Pandemics, Pilot Projects, SARS-CoV-2
4.
Sensors (Basel) ; 19(13)2019 Jun 26.
Article in English | MEDLINE | ID: mdl-31248004

ABSTRACT

Child-Robot Interaction (CRI) has become increasingly addressed in research and applications. This work proposes a system for emotion recognition in children that records facial images with both visual (RGB: red, green, and blue) and Infrared Thermal Imaging (IRTI) cameras. The Viola-Jones algorithm is applied to the color images to detect facial regions of interest (ROIs), which are transferred to the thermal camera plane by multiplying by a homography matrix obtained through the calibration of the camera system. As a novelty, we propose computing the error probability of each ROI located on the thermal images, using a reference frame manually marked by a trained expert, in order to select the ROI best placed according to the expert criteria. This selected ROI is then used to relocate the other ROIs, increasing their concordance with the manual reference annotations. Afterwards, feature extraction, dimensionality reduction through Principal Component Analysis (PCA), and pattern classification by Linear Discriminant Analysis (LDA) are applied to infer emotions. The results show that our approach to ROI location tracks facial landmarks with significantly lower errors than the traditional Viola-Jones algorithm. These ROIs proved relevant for the recognition of five emotions, specifically disgust, fear, happiness, sadness, and surprise, with our PCA- and LDA-based recognition system achieving mean Accuracy (ACC) and Kappa values of 85.75% and 81.84%, respectively. As a second stage, the proposed recognition system was trained on a dataset of thermal images collected from 28 typically developing children, in order to infer one of the five basic emotions during a child-robot interaction. The results show that our system can be integrated into a social robot to infer child emotions during child-robot interaction.
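
The ROI-transfer step can be sketched with OpenCV as follows: a Viola-Jones cascade detects the face in the RGB frame, and the ROI corners are mapped onto the thermal plane with a precomputed homography. The identity homography below is a placeholder; in practice H would come from the RGB-thermal calibration described in the abstract.

```python
# Sketch: Viola-Jones face detection on RGB, then homography transfer
# of the ROI corners to the thermal image plane. H is a placeholder.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

H = np.eye(3, dtype=np.float64)  # placeholder; use calibrated homography

def rgb_roi_to_thermal(rgb_frame, H):
    """Detect face ROIs in an RGB frame and map them to thermal coords."""
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    rois = []
    for (x, y, w, h) in faces:
        corners = np.float32([[x, y], [x + w, y],
                              [x + w, y + h], [x, y + h]])
        mapped = cv2.perspectiveTransform(corners.reshape(-1, 1, 2), H)
        rois.append(mapped.reshape(-1, 2))
    return rois
```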


Subjects
Emotions/physiology, Face/diagnostic imaging, Facial Expression, Image Processing, Computer-Assisted, Algorithms, Child, Discriminant Analysis, Fear/physiology, Female, Humans, Male, Pattern Recognition, Visual/physiology, Robotics
5.
PLoS One ; 14(3): e0212928, 2019.
Article in English | MEDLINE | ID: mdl-30893343

ABSTRACT

Physiological signals may be used as objective markers to identify emotions, which play relevant roles in social and daily life. To measure these signals, contact-free techniques such as Infrared Thermal Imaging (IRTI) are indispensable for individuals with sensory sensitivity. The goal of this study is to propose an experimental design to analyze five emotions (disgust, fear, happiness, sadness, and surprise) from facial thermal images of typically developing (TD) children aged 7-11 years, using the emissivity variation recorded by IRTI. The emotion analysis considered emotional dimensions (valence and arousal), both sides of the face, and emotion classification accuracy. The results evidence the efficiency of the experimental design, with findings such as a correlation between valence and the thermal decrement in the nose; disgust and happiness as potent triggers of facial emissivity variations; and significant emissivity variations in the nose, cheeks, and periorbital regions associated with different emotions. Moreover, facial thermal asymmetry was revealed, with a distinct thermal tendency in the cheeks, and classification accuracy reached a mean value greater than 85%. Based on these results, emissivity variation proved an efficient marker for analyzing emotions in facial thermal images, and IRTI was confirmed to be an outstanding technique for studying emotions. This study contributes a robust dataset for analyzing the emotions of 7-11-year-old TD children, an age range for which there is a gap in the literature.
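
A minimal sketch of the kind of marker described, assuming rectangular ROIs over calibrated thermal frames: the mean ROI value expressed as a decrement from a neutral baseline, then correlated with valence ratings across trials. All arrays and ROI bounds below are synthetic placeholders, not the study's data.

```python
# Sketch: mean ROI thermal decrement vs. a neutral baseline, correlated
# with valence across trials. Frames, ratings, and ROI are synthetic.
import numpy as np

def roi_decrement(frame, baseline, roi):
    """Mean ROI value change vs. baseline (negative = cooling)."""
    y0, y1, x0, x1 = roi
    return frame[y0:y1, x0:x1].mean() - baseline[y0:y1, x0:x1].mean()

rng = np.random.default_rng(1)
baseline = rng.normal(34.0, 0.2, (240, 320))   # synthetic deg-C map
trials = [baseline + rng.normal(-0.1 * v, 0.05, baseline.shape)
          for v in (1, 2, 3, 4, 5)]            # cooling grows with valence
valence = np.array([1, 2, 3, 4, 5])
nose_roi = (100, 140, 140, 180)                # hypothetical ROI bounds
decs = np.array([roi_decrement(f, baseline, nose_roi) for f in trials])
print(np.corrcoef(valence, decs)[0, 1])        # strong negative correlation
```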


Subjects
Behavior Observation Techniques/methods, Child Behavior/physiology, Models, Psychological, Thermography/methods, Behavior Observation Techniques/instrumentation, Body Temperature/physiology, Child, Datasets as Topic, Emotions/physiology, Face/diagnostic imaging, Face/physiology, Feasibility Studies, Female, Humans, Infrared Rays, Male, Reaction Time, Thermography/instrumentation
6.
Sensors (Basel) ; 16(7)2016 Jul 19.
Article in English | MEDLINE | ID: mdl-27447634

ABSTRACT

This paper presents the development of a smart walker that uses a formation controller for its displacements. Encoders, a laser range finder, and ultrasonic sensors are used in the walker. The control actions are based on the location of the user (human), who is the actual formation leader. There are neither sensors attached to the user's body nor force sensors attached to the walker's arm supports; instead, the control algorithm projects the laser measurements into the user's reference frame and then calculates the walker's linear and angular velocities to keep the formation (distance and angle) relative to the user. An algorithm was developed to detect the user's legs, whose distances from the laser sensor provide the information needed by the controller. The controller was theoretically analyzed for stability, simulated, and validated with real users, showing accurate performance in all experiments. In addition, safety rules check both the user's and the device's conditions to guarantee that the user runs no risks when using the smart walker. This device is intended to help people with lower-limb mobility impairments.
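
A generic leader-follower formation law of the kind described can be sketched as follows. The gains, saturation limits, and sign conventions are illustrative only (the actual geometry, with the user behind the walker, may flip signs); this is not the paper's controller.

```python
# Sketch of a leader-follower formation law: the user (leader) is seen
# at distance d and bearing theta in the walker's frame, and velocity
# commands drive (d, theta) toward the desired (d_ref, theta_ref).
import math

def formation_control(d, theta, d_ref=0.6, theta_ref=0.0,
                      k_v=0.8, k_w=1.5, v_max=0.5, w_max=1.0):
    """Return (v, w): linear [m/s] and angular [rad/s] commands."""
    v = k_v * (d - d_ref) * math.cos(theta)  # close the distance error
    w = k_w * (theta - theta_ref)            # turn toward the user
    v = max(-v_max, min(v_max, v))           # saturate for safety
    w = max(-w_max, min(w_max, w))
    return v, w

# Example: user detected 0.9 m away, 0.2 rad to the left
print(formation_control(0.9, 0.2))
```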


Subjects
Robotics/methods, Walking/physiology, Algorithms, Humans, Robotics/instrumentation
7.
Res. Biomed. Eng. (Online) ; 32(2): 161-175, Apr.-June 2016. tab, graph
Article in English | LILACS | ID: biblio-829473

ABSTRACT

Introduction: Autism Spectrum Disorder (ASD) is a set of developmental disorders that implies poor social skills and a lack of interest in activities and in interaction with people. Treatments rely on teaching social skills, and robotics may aid such therapies. This work is a pilot study that aims to show the development and use of a ludic mobile robot for stimulating social skills in children with ASD. Methods: A mobile robot with a special costume and a monitor displaying multimedia content was designed to interact with children with ASD. A mediator controls the robot's movements in a room prepared for interactive sessions. Sessions are recorded to assess the following social skills: eye gazing, touching the robot, and imitating the mediator. The interaction is evaluated using the Goal Attainment Scale and a Likert scale. Ten children were evaluated (50% with ASD), with the inclusion criteria being age 7-8 years, no use of medication, and no tendency toward aggression or stereotyped movements. Results: The ASD group touched the robot about twice as often, on average, as the control group (CG). They also looked away and imitated the mediator in ways quite similar to the CG, and showed additional social skills (verbal and non-verbal communication). These results are considered an advance in terms of improving social skills in children with ASD. Conclusions: Our studies indicate that the robot stimulated social skills in 4/5 of the children with ASD, which suggests that its concept is useful for improving socialization and quality of life.
