Results 1 - 20 of 20
1.
Sensors (Basel) ; 24(6)2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38544014

ABSTRACT

This study investigates the characteristics of a novel origami-based, elastomeric actuator and a soft gripper, which are controlled by hand gestures recognized through machine learning algorithms. The lightweight paper-elastomer structure employed in this research exhibits distinct actuation features in four key areas: (1) It requires approximately 20% less pressure for the same bending amplitude compared to pneumatic network actuators (Pneu-Net) of equivalent weight, and even less pressure compared to other actuators with non-linear bending behavior; (2) The control of the device is examined by validating the relationship between pressure and bending angle, as well as between interaction force and pressure at a fixed bending angle; (3) A soft robotic gripper comprising three actuators is designed. Enveloping and pinch grasping experiments are conducted on various shapes, demonstrating the gripper's potential to handle a wide range of objects for numerous applications; and (4) A gesture recognition algorithm is developed to control the gripper using electromyogram (EMG) signals from the user's muscles.
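The abstract does not detail the gesture-recognition pipeline; the following is a minimal sketch of one plausible approach, assuming windowed root-mean-square features per EMG channel and an off-the-shelf support-vector classifier (all names, window sizes, and gesture classes are illustrative, not taken from the paper).

```python
# Hypothetical sketch: classify hand gestures from multi-channel EMG windows.
# Feature choice (RMS per channel) and classifier (SVM) are assumptions,
# not details taken from the abstract.
import numpy as np
from sklearn.svm import SVC

def rms_features(window: np.ndarray) -> np.ndarray:
    """Root-mean-square of each EMG channel; window shape = (samples, channels)."""
    return np.sqrt(np.mean(window ** 2, axis=0))

# Toy data: 100 labeled windows, 200 samples x 8 channels each.
rng = np.random.default_rng(0)
windows = rng.standard_normal((100, 200, 8))
labels = rng.integers(0, 3, size=100)          # e.g., open / pinch / envelope

X = np.stack([rms_features(w) for w in windows])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))                      # predicted gesture classes
```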


Subjects
Algorithms, Elastomers, Electromyography, Gestures, Machine Learning
2.
Bioengineering (Basel) ; 10(10)2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37892939

ABSTRACT

Generative models, such as Variational Autoencoders (VAEs), are increasingly employed for atypical pattern detection in brain imaging. During training, these models learn to capture the underlying patterns within "normal" brain images and generate new samples from those patterns. Neurodivergent states can be observed by measuring the dissimilarity between the generated/reconstructed images and the input images. This paper leverages VAEs to conduct Functional Connectivity (FC) analysis from functional Magnetic Resonance Imaging (fMRI) scans of individuals with Autism Spectrum Disorder (ASD), aiming to uncover atypical interconnectivity between brain regions. In the first part of our study, we compare multiple VAE architectures (Conditional VAE, Recurrent VAE, and a hybrid VAE with parallel CNN and RNN branches) to establish the effectiveness of VAEs for FC analysis. Given the nature of the disorder, ASD exhibits a higher prevalence among males than females. Therefore, in the second part of this paper, we investigate whether introducing phenotypic data could improve the performance of VAEs and, consequently, FC analysis. We compare our results with the findings from previous studies in the literature. The results showed that the CNN-based VAE architecture is more effective for this application than the other models.
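As a concrete illustration of the general approach (not the authors' implementation), here is a minimal convolutional VAE that scores functional-connectivity matrices by reconstruction error; the input size, layer widths, and latent dimension are assumptions.

```python
# Hypothetical sketch of a convolutional VAE scoring functional-connectivity
# matrices by reconstruction error; sizes and architecture are illustrative only.
import torch
import torch.nn as nn

class ConvVAE(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.enc = nn.Sequential(                      # input: 1x64x64 FC matrix
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten())
        self.mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.logvar = nn.Linear(32 * 16 * 16, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

vae = ConvVAE()
fc = torch.randn(4, 1, 64, 64)                     # toy batch of FC matrices
recon, mu, logvar = vae(fc)
score = ((recon - fc) ** 2).mean(dim=(1, 2, 3))    # higher = more atypical connectivity
```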

4.
Front Robot AI ; 9: 880691, 2022.
Article in English | MEDLINE | ID: mdl-36203792

ABSTRACT

This work describes the design of real-time dance-based interaction with a humanoid robot, where the robot seeks to promote physical activity in children by taking on multiple roles as a dance partner. It acts as a leader by initiating dances but can also act as a follower by mimicking a child's dance movements. Dances in the leader role are produced by a sequence-to-sequence (S2S) Long Short-Term Memory (LSTM) network trained on children's music videos taken from YouTube. A music orchestration platform, in turn, generates background music in the follower mode as the robot mimics the child's poses. In doing so, we also incorporated the largely unexplored paradigm of learning-by-teaching by including multiple robot roles that allow the child to both learn from and teach the robot. Our work is among the first to implement a largely autonomous, real-time full-body dance interaction with a bipedal humanoid robot that also explores the impact of the robot roles on child engagement. Importantly, we also incorporated into our design formal constructs taken from autism therapy, such as the least-to-most prompting hierarchy, reinforcements for positive behaviors, and a time delay to make behavioral observations. We implemented a multimodal child engagement model that encompasses both affective engagement (displayed through eye gaze focus and facial expressions) and task engagement (measured by the level of physical activity) to determine child engagement states. We then conducted a virtual exploratory user study to evaluate the impact of mixed robot roles on user engagement and found no statistically significant difference in the children's engagement between single-role and multiple-role interactions. While the children were observed to respond positively to both robot behaviors, they preferred the music-driven leader role over the movement-driven follower role, a result that can partly be attributed to the virtual nature of the study. Our findings support the utility of such a platform in practicing physical activity but indicate that further research is necessary to fully explore the impact of each robot role.
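For readers unfamiliar with sequence-to-sequence pose generation, the sketch below shows the basic encoder-decoder LSTM pattern the abstract refers to, applied to toy joint-angle frames; the joint count, window lengths, and training details are assumptions rather than values from the paper.

```python
# Hypothetical sketch of a sequence-to-sequence LSTM that predicts future
# joint-angle frames from past frames; sizes and training details are assumed.
import torch
import torch.nn as nn

class PoseS2S(nn.Module):
    def __init__(self, n_joints: int = 25, hidden: int = 128):
        super().__init__()
        self.encoder = nn.LSTM(n_joints, hidden, batch_first=True)
        self.decoder = nn.LSTM(n_joints, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_joints)

    def forward(self, past, future_len: int):
        _, state = self.encoder(past)              # summarize the observed motion
        frame = past[:, -1:, :]                    # seed decoder with the last pose
        frames = []
        for _ in range(future_len):
            dec, state = self.decoder(frame, state)
            frame = self.out(dec)                  # next predicted pose frame
            frames.append(frame)
        return torch.cat(frames, dim=1)

model = PoseS2S()
past_poses = torch.randn(2, 30, 25)                # batch of 30-frame pose windows
print(model(past_poses, future_len=15).shape)      # torch.Size([2, 15, 25])
```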

5.
Sensors (Basel) ; 22(20)2022 Oct 11.
Article in English | MEDLINE | ID: mdl-36298054

ABSTRACT

The extended Kalman filter (EKF) is one of the most widely used Bayesian estimation methods in optimal control. Recent works on mobile robot control and transportation systems have applied various EKF methods, especially for localization. However, it is difficult to obtain adequate and reliable process-noise and measurement-noise models due to complex, dynamic surrounding environments and sensor uncertainty. Generally, default noise values for the sensors are provided by the manufacturer, but these values can change frequently depending on the environment. Thus, this paper focuses on designing a highly accurate, trainable EKF-based localization framework using inertial measurement units (IMUs) for an autonomous ground vehicle (AGV) with dead reckoning, with the goal of fusing it with laser imaging, detection, and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) estimates to enhance performance. Convolutional neural networks (CNNs), backpropagation, and gradient descent are used to optimize the parameters in our framework. Furthermore, we develop a unique cost function for training the models to improve EKF accuracy. The proposed approach is general and applicable to diverse IMU-aided robot localization models.
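To make the idea of a trainable Kalman filter concrete, here is a deliberately simplified sketch: a 1-D constant-velocity (linear) Kalman filter whose process-noise scale is fit by gradient descent against ground-truth positions. The paper's framework uses an EKF with CNN-learned parameters and a custom cost function; everything below is an illustrative stand-in.

```python
# Hypothetical sketch: a 1-D constant-velocity Kalman filter whose process-noise
# scale is learned by gradient descent against ground-truth positions.
import torch

F = torch.tensor([[1.0, 1.0], [0.0, 1.0]])     # state transition (pos, vel), dt = 1
H = torch.tensor([[1.0, 0.0]])                 # we only measure position
R = torch.tensor([[0.5]])                      # measurement-noise covariance (fixed)
log_q = torch.zeros(1, requires_grad=True)     # learnable process-noise scale
opt = torch.optim.SGD([log_q], lr=0.01)

def run_filter(meas):
    x, P = torch.zeros(2, 1), torch.eye(2)
    est = []
    for z in meas:
        Q = torch.exp(log_q) * torch.eye(2)
        x, P = F @ x, F @ P @ F.T + Q                      # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ torch.inverse(S)                     # Kalman gain
        x = x + K @ (z.view(1, 1) - H @ x)                 # update
        P = (torch.eye(2) - K @ H) @ P
        est.append(x[0, 0])
    return torch.stack(est)

truth = torch.arange(20.0)                      # ground-truth positions
meas = truth + 0.7 * torch.randn(20)            # noisy position measurements
for _ in range(50):                             # fit the noise scale
    loss = ((run_filter(meas) - truth) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```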


Subjects
Robotics, Bayes Theorem, Robotics/methods, Algorithms, Lasers, Attention
6.
Comput Med Imaging Graph ; 99: 102090, 2022 07.
Article in English | MEDLINE | ID: mdl-35709628

ABSTRACT

Accurate nerve identification is critical during surgical procedures to prevent damage to nerve tissues. Nerve injury can cause long-term adverse effects for patients, as well as financial burden. Birefringence imaging is a noninvasive technique, derived from polarized images, that has successfully identified nerves and can assist during intraoperative procedures. Furthermore, birefringence images can be processed in under 20 ms with a GPGPU implementation, making the modality a viable option for real-time processing. In this study, we first comprehensively investigate the use of birefringence images combined with deep learning, which can automatically detect nerves with gains upwards of 14% in F2 score over color image-based (RGB) counterparts. Additionally, we develop a deep learning framework using the U-Net architecture with a Transformer-based fusion module at the bottleneck that leverages both the birefringence and RGB modalities. The dual-modality framework achieves an F2 score of 76.12, a gain of 19.6% over single-modality networks using only RGB images. By extracting the feature maps of each modality independently and using each modality's information for cross-modal interactions, we aim to provide a solution that further increases the effectiveness of imaging systems for noninvasive intraoperative nerve identification.
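The exact fusion module is not specified in the abstract beyond being Transformer-based at the U-Net bottleneck; the sketch below shows one plausible cross-attention fusion between RGB and birefringence feature tokens, with all dimensions assumed.

```python
# Hypothetical sketch of cross-attention fusion at a U-Net bottleneck: features
# from an RGB encoder and a birefringence encoder query each other.
# Dimensions and attention configuration are assumptions, not from the paper.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.rgb_to_bir = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.bir_to_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb_feat, bir_feat):
        # Each modality attends to the other; results are summed.
        a, _ = self.rgb_to_bir(query=rgb_feat, key=bir_feat, value=bir_feat)
        b, _ = self.bir_to_rgb(query=bir_feat, key=rgb_feat, value=rgb_feat)
        return a + b

fusion = CrossModalFusion()
rgb_tokens = torch.randn(1, 196, 256)         # flattened 14x14 bottleneck feature map
bir_tokens = torch.randn(1, 196, 256)
print(fusion(rgb_tokens, bir_tokens).shape)   # torch.Size([1, 196, 256])
```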


Subjects
Deep Learning, Nerve Tissue, Humans, Computer-Assisted Image Processing/methods
8.
Sensors (Basel) ; 21(14)2021 Jul 19.
Article in English | MEDLINE | ID: mdl-34300651

ABSTRACT

Decades of scientific research have been devoted to developing and evaluating methods for automated emotion recognition. With rapidly advancing technology, a wide range of emerging applications require recognition of the user's emotional state. This paper investigates a robust approach to multimodal emotion recognition during a conversation. Three separate models for the audio, video, and text modalities are structured and fine-tuned on the MELD dataset. A transformer-based cross-modality fusion with the EmbraceNet architecture is employed to estimate the emotion. The proposed multimodal network architecture achieves up to 65% accuracy, which significantly surpasses any of the unimodal models. We provide multiple evaluation techniques applied to our work to show that our model is robust and can even outperform state-of-the-art models on MELD.
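EmbraceNet's core idea is that each modality is projected ("docked") to a common width and every output feature slot is then drawn from exactly one modality; the sketch below illustrates that mechanism with assumed feature sizes and without the paper's transformer front-ends.

```python
# Hypothetical sketch of EmbraceNet-style fusion: each modality is docked to a
# common size, then every feature slot is filled from one randomly chosen modality.
import torch
import torch.nn as nn

class Embrace(nn.Module):
    def __init__(self, in_dims=(128, 256, 300), out_dim: int = 256):
        super().__init__()
        self.dock = nn.ModuleList([nn.Linear(d, out_dim) for d in in_dims])
        self.out_dim = out_dim

    def forward(self, feats):
        docked = torch.stack([d(f) for d, f in zip(self.dock, feats)], dim=1)
        # For each of the out_dim slots, pick one modality at random.
        probs = torch.full((docked.size(0), len(self.dock)), 1 / len(self.dock))
        choice = torch.multinomial(probs, self.out_dim, replacement=True)
        mask = torch.zeros_like(docked).scatter_(1, choice.unsqueeze(1), 1.0)
        return (docked * mask).sum(dim=1)

audio, video, text = torch.randn(4, 128), torch.randn(4, 256), torch.randn(4, 300)
fused = Embrace()([audio, video, text])
print(fused.shape)                               # torch.Size([4, 256])
```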


Subjects
Emotions, Recognition (Psychology), Communication, Physical Therapy Modalities
9.
Lasers Surg Med ; 53(10): 1427-1434, 2021 12.
Article in English | MEDLINE | ID: mdl-34036583

ABSTRACT

BACKGROUND AND OBJECTIVES: Meticulous dissection and identification of nerves during head and neck surgery are crucial for preventing nerve damage. At present, nerve identification relies heavily on the surgeon's knowledge of anatomy, optionally combined with intraoperative neuromonitoring. Recently, optical techniques such as Mueller polarimetric imaging (MPI) have shown potential to improve nerve identification. STUDY DESIGN/MATERIALS AND METHODS: With institutional approval, seven 25-35 kg Yorkshire pigs underwent cervical incision in the central neck. Intraoperative images were obtained using our in-house MPI system. Birefringence maps from the MPI system were processed to quantify intensity values between 0 and 255 for different tissue types; an active contour model was applied to further improve nerve visualization on the corresponding color images. RESULTS: Among the seven pigs, the vagus nerves and recurrent laryngeal nerves were successfully differentiated with a mean intensity of 130.954 ± 20.611, which was significantly different (P < 0.05) from those of arteries (78.512 ± 27.78) and other surrounding tissues (82.583 ± 35.547). There were no imaging-related complications during the procedure. CONCLUSIONS: MPI is a potentially complementary intraoperative tool for identifying nerves among adjacent tissues.
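As a rough illustration of the described processing (not the study's actual pipeline), the sketch below compares mean birefringence intensities between regions of interest and refines a candidate nerve outline with scikit-image's active contour; the image, ROI coordinates, and parameters are all placeholders.

```python
# Hypothetical sketch: compare mean birefringence intensity in nerve vs. artery
# regions of interest, then refine a candidate nerve outline with an active contour.
import numpy as np
from scipy import stats
from skimage.filters import gaussian
from skimage.segmentation import active_contour

birefringence = np.random.randint(0, 256, (512, 512)).astype(float)  # stand-in map

nerve_roi = birefringence[100:150, 100:150]     # placeholder ROI locations
artery_roi = birefringence[300:350, 300:350]
t, p = stats.ttest_ind(nerve_roi.ravel(), artery_roi.ravel())
print(f"mean nerve {nerve_roi.mean():.1f} vs artery {artery_roi.mean():.1f}, p={p:.3g}")

# Initial circle around a candidate nerve; the snake settles on nearby boundaries.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([125 + 40 * np.sin(theta), 125 + 40 * np.cos(theta)])
snake = active_contour(gaussian(birefringence, 3), init, alpha=0.015, beta=10)
```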


Subjects
Recurrent Laryngeal Nerve, Thyroidectomy, Animals, Feasibility Studies, Neck/diagnostic imaging, Neck/surgery, Swine
10.
Int J Hum Comput Interact ; 37(3): 249-266, 2021.
Article in English | MEDLINE | ID: mdl-33767571

ABSTRACT

Using robots in therapy for children on the autism spectrum is a promising avenue for child-robot interaction, one that has garnered significant interest from the research community. After preliminary interviews with stakeholders and evaluation of music selections, twelve typically developing (TD) children and three children with Autism Spectrum Disorder (ASD) participated in an experiment in which they played the dance freeze game to four songs in partnership with either a NAO robot or a human partner. Overall, there were significant differences between TD children and children with ASD (e.g., mimicry, dance quality, and game play). Results for TD children were mixed, but they tended to show greater engagement with the researcher. However, objective results for children with ASD showed greater attention and engagement while dancing with the robot. There was little difference in game performance between partners or songs for either group, although upbeat music did encourage greater movement than calm music. Using a robot in a musical dance game for children with ASD appears to offer the same advantages and potential reported in previous research. Implications and future research are discussed alongside the results.

11.
J Pediatr Nurs ; 58: 65-75, 2021.
Article in English | MEDLINE | ID: mdl-33360676

ABSTRACT

PROBLEM: Advances in technology have made robotics acceptable in healthcare and medical environments. The aim of this literature review was to examine how the pediatric population can benefit from robotic therapy and assistance currently available or being developed in diverse settings. ELIGIBILITY CRITERIA: English-language full-text publications focusing on pediatric robotic therapy studies for infants and children under the age of 17, indexed in PubMed and CINAHL and published from 2008 to 2018. SAMPLE: A total of 272 articles were identified, 69 full-text articles were retrieved and assessed for eligibility, and 21 studies were ultimately included in the review. RESULTS: All 21 studies reviewed showed that children benefited from robotic therapies: they were 1) responsive to the therapies and 2) favored the robot's presence, since the robotic systems increased their attention and ability to participate in tasks. Due to small sample sizes, results were statistically inconclusive. CONCLUSIONS: We identified positive findings in which pediatric robots played vital roles in assisting and enhancing current pediatric and NICU treatments. Overall, our findings suggest that more clinical trials are essential, but the use of robots may contribute to future advancement in pediatric and neonatal healthcare. IMPLICATIONS: This review and analysis can be used to inform healthcare environments where there is room for applying robotic assistance, although most studies require further testing with larger sample sizes to validate their results. This points to the need for further research on robotics in pediatric and neonatal healthcare.


Subjects
Robotics, Child, Forecasting, Humans, Infant, Newborn Infant
12.
Appl Sci (Basel) ; 10(3)2020 Feb.
Article in English | MEDLINE | ID: mdl-35582331

ABSTRACT

This paper presents a method for extracting novel spectral features based on a sinusoidal model. The method focuses on characterizing the spectral shapes of audio signals using spectral peaks in frequency sub-bands. The extracted features are evaluated for predicting the levels of the emotional dimensions arousal and valence. Principal component regression, partial least squares regression, and deep convolutional neural network (CNN) models are used as prediction models for the levels of the emotional dimensions. The experimental results indicate that the proposed features carry additional spectral information that common baseline features may not capture. Since the quality of audio signals, especially timbre, plays a major role in the perception of emotional valence in music, including the presented features is expected to reduce the prediction error rate.
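A minimal sketch of the feature idea, under assumed band edges and frame sizes: take the spectral peak (magnitude and frequency) in each frequency sub-band of a windowed frame and regress arousal/valence with partial least squares. The toy data and parameter choices below are illustrative, not the paper's.

```python
# Hypothetical sketch: per-sub-band spectral-peak features from audio frames,
# fed to partial least squares regression for arousal/valence prediction.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def peak_features(frame, sr=22050, n_bands=8):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    edges = np.linspace(0, sr / 2, n_bands + 1)
    feats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = spec[mask]
        feats.extend([band.max(), freqs[mask][band.argmax()]])
    return np.array(feats)        # peak magnitude and peak frequency per band

rng = np.random.default_rng(1)
frames = rng.standard_normal((200, 2048))             # toy audio frames
X = np.stack([peak_features(f) for f in frames])
y = rng.uniform(-1, 1, size=(200, 2))                 # toy arousal/valence labels
model = PLSRegression(n_components=4).fit(X, y)
print(model.predict(X[:3]))
```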

13.
Front Robot AI ; 7: 43, 2020.
Article in English | MEDLINE | ID: mdl-33501211

ABSTRACT

Social engagement is a key indicator of an individual's socio-emotional and cognitive states. For a child with Autism Spectrum Disorder (ASD), this serves as an important factor in assessing the quality of the interactions and interventions. So far, qualitative measures of social engagement have been used extensively in research and in practice, but a reliable, objective, and quantitative measure is yet to be widely accepted and utilized. In this paper, we present our work on the development of a framework for the automated measurement of social engagement in children with ASD that can be utilized in real-world settings for the long-term clinical monitoring of a child's social behaviors as well as for the evaluation of the intervention methods being used. We present a computational modeling approach to derive the social engagement metric based on a user study with children between the ages of 4 and 12 years. The study was conducted within a child-robot interaction setting that targets sensory processing skills in children. We collected video, audio and motion-tracking data from the subjects and used them to generate personalized models of social engagement by training a multi-channel and multi-layer convolutional neural network. We then evaluated the performance of this network by comparing it with traditional classifiers and assessed its limitations, followed by discussions on the next steps toward finding a comprehensive and accurate metric for social engagement in ASD.
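The abstract describes a multi-channel, multi-layer CNN over video, audio, and motion-tracking streams; the sketch below shows one way such parallel branches could be merged into an engagement score, with all input shapes and layer sizes assumed.

```python
# Hypothetical sketch of a multi-channel network: separate convolutional branches
# for video, audio, and motion-tracking streams, merged into an engagement score.
# All shapes and layer sizes are illustrative, not from the paper.
import torch
import torch.nn as nn

def branch(in_ch):
    return nn.Sequential(nn.Conv1d(in_ch, 16, 5, padding=2), nn.ReLU(),
                         nn.AdaptiveAvgPool1d(1), nn.Flatten())

class EngagementNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.video, self.audio, self.motion = branch(512), branch(40), branch(18)
        self.head = nn.Sequential(nn.Linear(48, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, v, a, m):
        fused = torch.cat([self.video(v), self.audio(a), self.motion(m)], dim=1)
        return torch.sigmoid(self.head(fused))     # engagement in [0, 1]

net = EngagementNet()
v = torch.randn(2, 512, 90)    # per-frame video embeddings over a 90-frame window
a = torch.randn(2, 40, 90)     # e.g., MFCC-like audio features
m = torch.randn(2, 18, 90)     # tracked joint coordinates
print(net(v, a, m).shape)      # torch.Size([2, 1])
```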

16.
Article in English | MEDLINE | ID: mdl-33829148

ABSTRACT

The diagnosis of Autism Spectrum Disorder (ASD) in children is commonly accompanied by a diagnosis of sensory processing disorders. Abnormalities are usually reported in multiple sensory processing domains, showing a higher prevalence of unusual responses, particularly to tactile, auditory and visual stimuli. This paper discusses a novel robot-based framework designed to target sensory difficulties faced by children with ASD in a controlled setting. The setup consists of a number of sensory stations, together with two different robotic agents that navigate the stations and interact with the stimuli. These stimuli are designed to resemble real world scenarios that form a common part of one's everyday experiences. Given the strong interest of children with ASD in technology in general and robots in particular, we attempt to utilize our robotic platform to demonstrate socially acceptable responses to the stimuli in an interactive, pedagogical setting that encourages the child's social, motor and vocal skills, while providing a diverse sensory experience. A preliminary user study was conducted to evaluate the efficacy of the proposed framework, with a total of 18 participants (5 with ASD and 13 typically developing) between the ages of 4 and 12 years. We derive a measure of social engagement, based on which we evaluate the effectiveness of the robots and sensory stations in order to identify key design features that can improve social engagement in children.

17.
IEEE Robot Autom Mag ; 26(2): 40-48, 2019 Jun.
Article in English | MEDLINE | ID: mdl-34887653

ABSTRACT

Based on years of research establishing the utility of socially assistive robots (SARs) for autism spectrum disorder (ASD) intervention, such robots have now become popular tools, widely used in special education schools, autism care centers, and clinical settings. Most previous studies have explored the roles of SARs as instructors, learning aides, and social-skills trainers, focusing on the learning, language, and social impairments associated with ASD. This article addresses aspects of empathy and emotion regulation (ER) impairments, which are important underlying factors for many atypicalities manifested in ASD. We discuss the design of our robot's emotional capabilities, its emotion-based action library, and the algorithm it uses to regulate a user's emotions. In addition, we describe a user study that evaluates the ER capabilities of an emotionally expressive empathetic agent as well as its capability to prime higher social engagement in a user.

18.
Appl Sci (Basel) ; 8(2)2018 Feb.
Article in English | MEDLINE | ID: mdl-35582004

ABSTRACT

Imitation is a powerful component of communication between people, and it has important implications for improving the quality of interaction in the field of human-robot interaction (HRI). This paper discusses a novel framework designed to improve human-robot interaction through robotic imitation of a participant's gestures. In our experiment, a humanoid robotic agent socializes with and plays games with a participant. For the experimental group, the robot additionally imitates one of the participant's novel gestures during a play session. We hypothesize that the robot's use of imitation will increase the participant's openness towards engaging with the robot. Experimental results from a user study of 12 subjects show that, post-imitation, experimental subjects displayed a more positive emotional state, had higher instances of mood contagion towards the robot, and interpreted the robot as having a higher level of autonomy than their control group counterparts did. These results point to increased participant interest in engagement, fueled by personalized imitation during interaction.

19.
Proc Hum Factors Ergon Soc Annu Meet ; 61(1): 808-812, 2017 Sep.
Article in English | MEDLINE | ID: mdl-34880592

ABSTRACT

Experimenters need robots that are easier to control for experimental purposes. In this paper, we conducted interviews to elicit interaction requirements for human-robot interaction scenarios. User input was then incorporated into an Android application for remotely controlling an Aldebaran Nao robot in Wizard-of-Oz experiments and demos. The app was used in a usability study comparing it with an existing Nao remote control app. Results were positive, highlighting the app's ease of use and organization. Future work includes a more complete usability trial evaluating the unique functionality of the app, as well as a case study of the app in a real Wizard-of-Oz experiment.

20.
IEEE Trans Haptics ; 8(3): 327-38, 2015.
Article in English | MEDLINE | ID: mdl-26219098

ABSTRACT

This paper presents a haptic telepresence system that enables visually impaired users to explore locations rich in visual content, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data as a tangible haptic experience has not been sufficiently explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence: mobile navigation and object exploration in a remote environment. Individuals with and without visual impairments took part in experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments, offering an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.
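As a simplified illustration of the ingredients involved (not the system described), the sketch below back-projects a depth frame into a point cloud and computes a basic penalty-based force for a haptic proxy; the camera intrinsics, stiffness, and proxy radius are assumptions.

```python
# Hypothetical sketch: turn an RGB-D depth frame into a point cloud and compute a
# simple penalty-based force for a haptic proxy, the basic ingredients of real-time
# haptic rendering of a remote scene. Intrinsics and stiffness are illustrative.
import numpy as np

fx = fy = 525.0; cx, cy = 319.5, 239.5          # assumed camera intrinsics
depth = np.full((480, 640), 1.5)                # toy depth frame in meters

def depth_to_points(depth):
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def haptic_force(proxy, points, stiffness=200.0, radius=0.02):
    """Spring force pushing the haptic proxy away from nearby surface points."""
    d = np.linalg.norm(points - proxy, axis=1)
    near = points[d < radius]
    if near.size == 0:
        return np.zeros(3)
    direction = proxy - near.mean(axis=0)
    penetration = radius - np.linalg.norm(direction)
    return stiffness * penetration * direction / (np.linalg.norm(direction) + 1e-9)

cloud = depth_to_points(depth)
print(haptic_force(np.array([0.0, 0.0, 1.49]), cloud))
```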


Subjects
Data Display, Museums, Robotics/methods, Assistive Technology, Sensory Aids, Touch/physiology, Vision Disorders/rehabilitation, Algorithms, Computer Graphics, Computer Simulation, Female, Humans, Male, Multimedia, Task Performance and Analysis, User-Computer Interface