ABSTRACT
Teleoperated medical technologies are a fundamental part of the healthcare system. From telemedicine to remote surgery, they allow remote diagnosis and treatment. However, the absence of an interface able to effectively reproduce the sense of touch and the interaction with the patient prevents the implementation of teleoperated systems for primary care examinations such as palpation. In this paper, we propose the first reported soft robotic bilateral physical twin for remote palpation. By creating an entirely soft interface that is used both to control the robot and to receive feedback, the proposed device allows the user to achieve remote palpation by simply palpating the soft physical twin. This is achieved through a compact design featuring nine pneumatic chambers and exploiting multi-silicone casting to minimize cross-noise and enable teleoperation. A comparative study was run against a traditional setup, and both the control and the feedback of the physical twin are carefully analyzed. Although distributed tactile feedback does not match the performance of the visual map, the combination of soft control and visual feedback achieves 5.1% higher accuracy. Moreover, the bilateral soft physical twin always results in a less invasive procedure, with 41% less mechanical work exchanged with the remote phantom.
Subjects
Robotics, Silicones, Humans, Equipment Design, Feedback, Palpation, Robotics/methods, Touch, User-Computer Interface
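To make the bilateral coupling described in the abstract above concrete, below is a minimal Python sketch of one control cycle mirroring pressures between the nine chambers of the local twin and the remote robot. Everything here (function names, gains, the one-gain-per-chamber coupling) is a hypothetical illustration, not the authors' implementation.

import numpy as np

N_CHAMBERS = 9  # one pneumatic chamber per palpation site (assumed layout)

def bilateral_step(p_twin, p_robot, k_cmd=1.0, k_fb=1.0):
    """One cycle of a hypothetical bilateral pneumatic coupling.

    p_twin  -- pressures measured in the local twin's chambers (kPa)
    p_robot -- pressures measured in the remote robot's chambers (kPa)
    The robot tracks the user's palpation on the twin (command channel),
    while the twin renders the reaction pressures sensed at the remote
    side (feedback channel).
    """
    assert len(p_twin) == len(p_robot) == N_CHAMBERS
    robot_setpoint = k_cmd * p_twin   # forward (command) channel
    twin_setpoint = k_fb * p_robot    # backward (feedback) channel
    return robot_setpoint, twin_setpoint

# One cycle with made-up pressure readings
p_twin = np.array([2.0, 0.0, 0.0, 5.0, 1.0, 0.0, 0.0, 0.0, 0.0])
p_robot = np.array([1.5, 0.0, 0.0, 6.0, 1.2, 0.0, 0.0, 0.0, 0.0])
print(bilateral_step(p_twin, p_robot))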
ABSTRACT
Abdominal palpation is one of the basic yet important physical examination methods used by physicians. Visual, auditory, and haptic feedback from the patient are known to be the main sources of information used in diagnosis. However, learning to interpret this feedback and make an accurate diagnosis requires several years of training. Many abdominal palpation training simulators have been proposed to date, but very few attempts to integrate vocal pain expressions into physical abdominal palpation simulators have been reported. Here, we present a vocal pain expression augmentation for a robopatient. The proposed robopatient is capable of providing real-time facial and vocal pain expressions based on the force and position of the palpation exerted on its abdominal phantom. A pilot study is conducted to test the proposed system, and we show the potential of integrating vocal pain expressions into the robopatient. The platform has also been tested by two clinical experts with prior experience in abdominal palpation. Their evaluations of its functionality and suggestions for improvement are presented. We highlight the advantages of the proposed robopatient with real-time vocal and facial pain expressions as a controllable simulator platform for abdominal palpation training studies. Finally, we discuss the limitations of the proposed approach and suggest several future directions for improvement.
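As a sketch of the kind of force-and-position-to-pain mapping the abstract describes, the following Python function discretizes palpation force into pain levels near a tender spot. The thresholds, radius, and level names are invented for illustration and are not taken from the paper.

import math

def pain_level(force_n, position, tender_spot, radius=0.03,
               thresholds=(2.0, 5.0, 8.0)):
    """Map palpation force (N) and probe position (m) to a pain level.

    Hypothetical rule: pain is expressed only near a tender spot, with
    three force thresholds separating none/mild/moderate/severe; the
    returned level would drive both facial and vocal expressions.
    """
    if math.dist(position, tender_spot) > radius or force_n < thresholds[0]:
        return "none"
    if force_n < thresholds[1]:
        return "mild"      # e.g. slight grimace, soft groan
    if force_n < thresholds[2]:
        return "moderate"  # stronger expression, louder vocalization
    return "severe"

print(pain_level(6.0, (0.10, 0.05), (0.11, 0.05)))  # -> 'moderate'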
ABSTRACT
Electronic skins (e-skins) aim to replicate the capabilities of human skin by integrating electronic components and advanced materials into a flexible, thin, and stretchable substrate. Electrical impedance tomography (EIT) has recently been adopted for e-skins thanks to its robustness and simplicity of fabrication compared to previous methods. However, the most common EIT configurations suffer from low sensitivity in areas far from the electrodes. Here we combine two piezoresistive materials with different conductivities and charge carriers, creating anisotropy in the sensitive part of the e-skin. The bottom layer consists of an ionically conducting hydrogel, while the top layer is a self-healing composite that conducts electrons through a percolating carbon black network. By changing the pattern of the top layer, the resulting distribution of currents in the e-skin can be tuned to locally adapt the sensitivity. This approach can be used to biomimetically adjust the sensitivity of different regions of the skin. We demonstrate a 500% increase in sensitivity and a 40% reduction in localization error compared to the homogeneous case, eliminating the low-sensitivity regions. This principle enables the integration of the various sensing capabilities of our skins into complex 3D geometries. In addition, both layers of the developed e-skin have self-healing capabilities, showing no statistically significant difference in localization performance before damage and after healing. The self-healing bilayer e-skin could recover full sensing capabilities after healing from severe damage.
Subjects
Robotic Surgical Procedures, Humans, Electric Impedance, Electric Conductivity, Electronics, Tomography
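For background, touch localization with EIT is often computed by a one-step regularized linear (difference-imaging) reconstruction; the Python sketch below shows that generic step only. The sensitivity matrix J, the 16-electrode/208-measurement protocol, and the 8x8 pixel grid are placeholder assumptions, not the pipeline of the work above.

import numpy as np

def eit_difference_image(J, v_ref, v_meas, lam=1e-3):
    """One-step Tikhonov-regularized linearized EIT reconstruction.

    J      -- (n_measurements, n_pixels) sensitivity (Jacobian) matrix,
              assumed precomputed from a forward model of the e-skin
    v_ref  -- boundary voltages in the reference (untouched) state
    v_meas -- boundary voltages during touch
    Returns the conductivity-change image, one value per pixel.
    """
    dv = v_meas - v_ref
    H = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(H, J.T @ dv)

# Toy data, just to show the shapes involved
rng = np.random.default_rng(0)
J = rng.normal(size=(208, 64))     # e.g. 16-electrode protocol, 8x8 grid
v_ref = rng.normal(size=208)
v_meas = v_ref + J @ rng.normal(scale=0.1, size=64)
dsigma = eit_difference_image(J, v_ref, v_meas)
touch_pixel = int(np.argmax(np.abs(dsigma)))  # crude localization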
ABSTRACT
The mechanical properties of a sensor strongly affect its tactile sensing capabilities. By exploiting tactile filters, i.e., mechanical structures placed between the sensing unit and the environment, it is possible to tune the interaction dynamics with the surrounding environment. But how can we design a good tactile filter? The role of a filter's geometry and stiffness in the quality of tactile data has been the subject of several studies, implementing both static and adaptable filters. State-of-the-art work on online stiffness adaptation highlights the crucial role of the filter's mechanical behavior in the structure of the recorded tactile data. However, the relationship between the characteristics of the filter and those of the environment is still largely unknown. Here we show the effect of the environment's mechanical properties on the structure of the acquired tactile data and on the performance of a classification task, testing a wide range of static tactile filters. Moreover, we fabricated the filters from four materials commonly used in soft robotics, to bridge the gap between tactile sensing and robotic applications. We collected data from the interaction with a standard set of twelve objects of different materials, shapes, and textures, and we analyzed the effect of the filter's material on the structure of these data and on the performance of nine common machine learning classifiers, considering both the overall test set and the three subsets, each comprising all objects of the same material. We show that the performance of the four tested filters changes drastically with the material of the test objects, and that the filter that matches the mechanical properties of the environment always outperforms the others.
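A minimal sketch of the kind of classifier comparison described above, using scikit-learn on placeholder features; the actual feature extraction, the full set of nine classifiers, and the paper's train/test splits are not reproduced here.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(240, 32))     # tactile feature vectors (placeholder)
y = rng.integers(0, 12, size=240)  # labels for the twelve objects

classifiers = {
    "logreg": LogisticRegression(max_iter=1000),
    "knn": KNeighborsClassifier(),
    "svm": SVC(),
    "forest": RandomForestClassifier(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")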
ABSTRACT
Real-time visual feedback on the consequences of one's actions is useful for future safety-critical human-robot interaction applications such as remote physical examination of patients. Among the many formats for presenting visual feedback, using the face as a feedback channel for mediating human-robot interaction in remote examination remains understudied. Here we describe a face-mediated human-robot interaction approach for remote palpation. It builds upon a robodoctor-robopatient platform in which the user palpates the robopatient to remotely control the robodoctor and diagnose a patient. A tactile sensor array mounted on the end effector of the robodoctor measures the haptic response of the patient under diagnosis and transfers it to the robopatient, which renders pain facial expressions in response to the palpation forces. We compare this approach against a direct presentation of the tactile sensor data as a visual tactile map. As feedback, the former has the advantage of recruiting advanced human capabilities for decoding expressions on a human face, whereas the latter has the advantage of presenting details such as the intensity and spatial distribution of the palpation. In a user study, we compare the two approaches in a teleoperated palpation task whose goal is to find a hard nodule embedded in the remote abdominal phantom. We show that the face-mediated approach leads to statistically significant improvements in localizing the hard nodule without compromising the nodule position estimation time. We highlight the inherent power of facial expressions as communicative signals to enhance the utility and effectiveness of human-robot interaction in remote medical examinations.
Subjects
Robotics, Feedback, Sensory Feedback, Humans, Palpation, Touch/physiology
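The trade-off between the two feedback formats can be illustrated with a short Python sketch: the facial channel collapses the tactile array into a single expression-intensity scalar, deliberately discarding the spatial detail that the visual tactile map preserves. The gain and array size are hypothetical, not values from the study.

import numpy as np

def to_face_intensity(tactile_map, gain=0.1):
    """Collapse a tactile array into one expression intensity in [0, 1].

    Hypothetical mapping: the peak palpation force drives the strength
    of the rendered pain expression; spatial information is lost, which
    is exactly the trade-off against the visual tactile map.
    """
    return float(np.clip(gain * tactile_map.max(), 0.0, 1.0))

tactile_map = np.zeros((8, 8))         # placeholder sensor array (N)
tactile_map[3, 4] = 6.5                # force peak over the hard nodule
print(to_face_intensity(tactile_map))  # -> 0.65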
ABSTRACT
Communication delay represents a fundamental challenge in telerobotics: on the one hand, it compromises the stability of teleoperated robots; on the other, it decreases the user's awareness of the designated task. In the scientific literature, this problem has been addressed both with statistical models and with neural networks (NNs) performing sensor prediction while keeping the user in full control of the robot's motion. We propose shared control as a tool to compensate for and mitigate the effects of communication delay. Shared control has been proven to enhance precision and speed in reaching and manipulation tasks, especially in the medical and surgical fields. We analyze the effects of added delay and propose a unilateral teleoperated leader-follower architecture that implements both a predictive system and shared control, in a 1-dimensional reaching and recognition task with haptic sensing. We propose four control modalities of increasing autonomy: non-predictive human control (HC), predictive human control (PHC), (shared) predictive human-robot control (PHRC), and predictive robot control (PRC). When analyzing how the added delay affects the subjects' performance, the results show that HC is very sensitive to the delay: users are unable to stop at the desired position, and the trajectories exhibit wide oscillations. The degree of autonomy introduced proves effective in decreasing the total time required to accomplish the task. Furthermore, we provide an in-depth analysis of the interaction forces with the environment and of the executed trajectories. Overall, the shared control modality, PHRC, represents a good trade-off, with peak performance in accuracy and task time, a good reaching speed, and moderate contact with the object of interest.
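The predictive modalities (PHC, PHRC, PRC) can be pictured as points on a single authority-blending rule, combined with a forward predictor that compensates the known delay. The Python sketch below uses a first-order extrapolation as a stand-in for the statistical/NN predictors mentioned above; the blending weight, gains, and task numbers are illustrative only.

def predict_state(x_delayed, v_last, delay):
    """First-order forward prediction across a known delay (s).

    A stand-in for the statistical/NN sensor predictors: extrapolate
    the last received position with the last received velocity.
    """
    return x_delayed + v_last * delay

def shared_control(u_human, u_robot, alpha=0.5):
    """Blend human and autonomous commands (simplified PHRC).

    alpha=1 recovers pure (predictive) human control (PHC),
    alpha=0 pure predictive robot control (PRC).
    """
    return alpha * u_human + (1.0 - alpha) * u_robot

# 1-D reaching example: feedback arrives 0.3 s late
x_hat = predict_state(x_delayed=0.40, v_last=0.10, delay=0.30)
u_h = 1.0 * (1.0 - x_hat)   # hypothetical human command toward target at 1.0
u_r = 0.8 * (1.0 - x_hat)   # hypothetical autonomous command, softer gain
u = shared_control(u_h, u_r, alpha=0.6)
print(round(u, 3))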