Results 1 - 20 of 189
1.
J Robot Surg; 18(1): 47, 2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38244130

ABSTRACT

This study collected validity evidence for the assessment of surgical competence through the classification of general surgical gestures for a simulated robot-assisted radical prostatectomy (RARP). We used 165 video recordings of novice and experienced RARP surgeons performing three parts of the RARP procedure on the RobotiX Mentor. We annotated the surgical tasks with different surgical gestures: dissection, hemostatic control, application of clips, needle handling, and suturing. The gestures were analyzed using idle time (periods with minimal instrument movement) and active time (whenever a surgical gesture was annotated). The distribution of surgical gestures was described using one-dimensional heat maps ('snail tracks'). All surgeons had a similar percentage of idle time, but novices had longer phases of idle time (mean time: 21 vs. 15 s, p < 0.001). Novices used a higher total number of surgical gestures (number of phases: 45 vs. 35, p < 0.001), and each phase was longer than those of the experienced surgeons (mean time: 10 vs. 8 s, p < 0.001). Novices and experienced surgeons also showed different gesture patterns, as seen in the different distribution of phases. General surgical gestures can thus be used to assess surgical competence in simulated RARP and can be displayed as a visual tool to show how performance improves. The established pass/fail level may be used to ensure the competence of residents before proceeding to supervised real-life surgery. The next step is to investigate whether the developed tool can optimize automated feedback during simulator training.
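As a rough illustration of the idle/active-time analysis described above (a minimal sketch, not the authors' code): gestures are annotated as (start, end) intervals, idle time is the gaps between them, and per-phase durations are compared between groups. All names and numbers here are hypothetical.

```python
# Sketch: compute mean active and idle phase durations from gesture annotations.
from statistics import mean

def phase_stats(gestures, task_start, task_end):
    """gestures: sorted, non-overlapping (start, end) tuples in seconds."""
    active = [end - start for start, end in gestures]
    # Idle phases are the gaps before, between, and after annotated gestures.
    edges = [task_start] + [t for g in gestures for t in g] + [task_end]
    idle = [edges[i + 1] - edges[i] for i in range(0, len(edges), 2)]
    idle = [d for d in idle if d > 0]
    return mean(active), mean(idle)

# Hypothetical example: three annotated gestures in a 60 s task segment.
active_mean, idle_mean = phase_stats([(2, 12), (20, 28), (40, 55)], 0, 60)
print(f"mean active phase: {active_mean:.1f} s, mean idle phase: {idle_mean:.1f} s")
```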


Subjects
Robotic Surgical Procedures, Male, Humans, Robotic Surgical Procedures/methods, Gestures, Clinical Competence, Prostate, Prostatectomy/methods
2.
Int J Surg; 110(3): 1441-1449, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38079605

ABSTRACT

BACKGROUND: Variation in surgical skill leads to differences in patient outcomes, and identifying poorly skilled surgeons and providing constructive feedback contributes to surgical quality improvement. The aim of this study was to develop an algorithm for evaluating surgical skills in laparoscopic cholecystectomy based on the features of elementary functional surgical gestures (Surgestures). MATERIALS AND METHODS: Seventy-five laparoscopic cholecystectomy videos were collected from 33 surgeons in five hospitals. The phases of mobilization of the hepatocystic triangle and dissection of the gallbladder from the liver bed in each video were annotated with 14 Surgestures. The videos were grouped into competent and incompetent based on the quantiles of the modified global operative assessment of laparoscopic skills (mGOALS). Surgeon-related information, clinical data, and intraoperative events were analyzed. Sixty-three Surgesture features were extracted to develop the surgical skill classification algorithm. The area under the receiver operating characteristic curve of the classification and the top features were evaluated. RESULTS: Correlation analysis revealed that most perioperative factors had no significant correlation with mGOALS scores. The incompetent group had a higher probability of cholecystic vascular injury than the competent group (30.8% vs. 6.1%, P = 0.004). The competent group demonstrated fewer inefficient Surgestures, lower shift frequency, and a larger dissection-exposure ratio of Surgestures during the procedure. The area under the receiver operating characteristic curve of the classification algorithm reached 0.866. Different Surgesture features contributed variably to overall performance and to specific skill items. CONCLUSION: The computer algorithm accurately classified surgeons of different skill levels using objective Surgesture features, adding insight into the design of automatic laparoscopic surgical skill assessment tools with technical feedback.
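A hedged sketch of the evaluation setup described above: per-video Surgesture features feed a binary competent/incompetent classifier scored by ROC AUC. The model choice (random forest) and the synthetic data are illustrative assumptions, not the paper's method.

```python
# Sketch: 75 videos x 63 Surgesture features -> binary skill label, AUC-scored.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(75, 63))      # stand-in for the 63 Surgesture features
y = rng.integers(0, 2, size=75)    # 1 = competent (by mGOALS quantile)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")  # the paper reports 0.866 on real data
```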


Subjects
Laparoscopic Cholecystectomy, Laparoscopy, Humans, Gestures, Laparoscopy/methods, Laparoscopic Cholecystectomy/methods, Dissection, Algorithms, Clinical Competence
3.
Brain; 147(1): 297-310, 2024 Jan 04.
Article in English | MEDLINE | ID: mdl-37715997

ABSTRACT

Although human praxis abilities are unique among primates, comparative observations suggest that these cognitive motor skills could have emerged from the exploitation and adaptation of phylogenetically older building blocks, namely the parieto-frontal networks subserving prehension and manipulation. Within this framework, investigating to what extent praxis and prehension-manipulation overlap and diverge within parieto-frontal circuits could help in understanding how human cognition shapes hand actions. This issue has never been investigated by combining lesion mapping and direct electrophysiological approaches in neurosurgical patients. To this purpose, 79 right-handed patients with left-brain tumours, candidates for awake neurosurgery, were selected based on inclusion criteria. First, lesion mapping was performed in the early postoperative phase to localize the regions associated with impairment of praxis (imitation of meaningless and meaningful intransitive gestures) and visuo-guided prehension (reaching-to-grasping) abilities. Then, lesion results were anatomically matched with intraoperatively identified cortical and white matter regions whose direct electrical stimulation impaired the Hand Manipulation Task. The lesion mapping analysis showed that prehension and praxis impairments occurring in the early postoperative phase were associated with specific parietal sectors. Dorso-mesial parietal resections, including the superior parietal lobe and precuneus, affected prehension performance, while resections involving rostral intraparietal and inferior parietal areas affected praxis abilities (covariate clusters, 5000 permutations, cluster-level family-wise error correction, P < 0.05). The dorsal bank of the rostral intraparietal sulcus was associated with both prehension and praxis (overlap of non-covariate clusters). Within the praxis results, resections involving inferior parietal areas affected mainly the imitation of meaningful gestures, while resections involving intraparietal areas affected both meaningless and meaningful gesture imitation. In parallel, intraoperative electrical stimulation of the rostral intraparietal sulcus and the adjacent inferior parietal lobe, with their surrounding white matter, during the hand manipulation task evoked different motor impairments: the arrest and clumsy patterns, respectively. Integrating the lesion mapping and intraoperative stimulation results, it emerges that imitation of praxis gestures depends first on the integrity of parietal areas within the dorso-ventral stream. Among these areas, the rostral intraparietal and inferior parietal areas play distinct roles in praxis and in the sensorimotor processes controlling manipulation. Due to its visuo-motor 'attitude', the rostral intraparietal sulcus, the putative human homologue of the monkey anterior intraparietal area, might enable the visuo-motor conversion of the observed gesture (direct pathway). Moreover, its functional interaction with the adjacent, phylogenetically more recent, inferior parietal areas might integrate semantic-conceptual knowledge (indirect pathway) into the sensorimotor workflow, contributing to the cognitive upgrade of hand actions.


Subjects
Cerebral Cortex, Psychomotor Performance, Humans, Psychomotor Performance/physiology, Phylogeny, Parietal Lobe, Cognition, Brain Mapping, Magnetic Resonance Imaging, Gestures
4.
Article in English | MEDLINE | ID: mdl-38083228

ABSTRACT

Wearable motion-sensing solutions can automatically detect and track individual smoking puffs and/or episodes to aid users in their journey toward smoking cessation. However, existing solutions are either obtrusive to use, perform with low accuracy, or have questionable ability to run fully on a low-power device such as a smartwatch, all of which limits their widespread adoption. We propose 'CigTrak', a novel pipeline for accurate smoking puff and episode detection using the 6-DoF motion sensor on a smartwatch. A multi-stage method for puff detection is devised, comprising a novel kinematic analysis of puffing motion that enables temporal localization of a puff. A Convolutional Neural Network (CNN)-backed model takes this candidate puff as an input instance, re-sampling it to the required input size for the final decision. Clusters of detected puffs are further used to detect episodes. Data from 13 subjects were used for evaluating puff detection, and from 9 subjects for evaluating episode detection. CigTrak achieved high subject-independent performance for puff detection (F1-score 0.94) and free-living episode detection (F1-score 0.89), surpassing state-of-the-art performance. CigTrak was also implemented fully online on two different smartwatches to test real-time puff detection. Clinical Relevance: Cigarette smoking affects a person's physical and mental well-being and is the leading cause of preventable disease, adversely affecting the cardiac and respiratory systems. With many adults wanting to quit smoking [1], a reliable way of auto-journaling smoking activities can greatly aid cessation efforts through self-help and reduce the burden on the healthcare industry. CigTrak, with its high accuracy in detecting smoking puffs and episodes and its capability of running fully on a smartwatch, can be readily used for this purpose.
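A minimal sketch of the two-stage idea described above (assumptions, not CigTrak's code): a kinematically detected candidate puff window of variable length is resampled to a fixed size and passed to a small 1D CNN for the final puff/non-puff decision. The input length, network layout, and PyTorch framework are all illustrative choices.

```python
# Sketch: resample a variable-length 6-axis window and classify it with a 1D CNN.
import numpy as np
import torch
import torch.nn as nn

FIXED_LEN = 128  # assumed network input length

def resample(window: np.ndarray, n: int = FIXED_LEN) -> np.ndarray:
    """Linearly resample a (T, 6) accel+gyro window to (n, 6)."""
    t_old = np.linspace(0, 1, len(window))
    t_new = np.linspace(0, 1, n)
    return np.stack([np.interp(t_new, t_old, window[:, c]) for c in range(6)], axis=1)

class PuffCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(6, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 2),  # puff vs. non-puff
        )

    def forward(self, x):  # x: (batch, 6, FIXED_LEN)
        return self.net(x)

candidate = np.random.randn(90, 6)  # variable-length candidate from kinematics
x = torch.tensor(resample(candidate).T[None], dtype=torch.float32)
print(PuffCNN()(x).shape)           # torch.Size([1, 2])
```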


Subjects
Cigarette Smoking, Smoking Cessation, Adult, Humans, Gestures, Neural Networks (Computer)
6.
Sensors (Basel); 23(13), 2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37448082

ABSTRACT

Surgical Instrument Signaling (SIS) comprises specific hand gestures used for communication between the surgeon and the surgical instrumentator. With SIS, the surgeon executes signals representing particular instruments in order to avoid errors and communication failures. This work demonstrated the feasibility of an SIS gesture recognition system using surface electromyographic (sEMG) signals acquired from the Myo armband, aiming to build a processing routine that aids telesurgery or robotic surgery applications. Unlike other works that use up to 10 gestures to represent and classify SIS gestures, a database with 14 selected SIS gestures was recorded from 10 volunteers, with 30 repetitions per user. Segmentation, feature extraction, feature selection, and classification were performed, and several parameters were evaluated. These steps were performed with a wearable application in mind, for which the complexity of pattern recognition algorithms is crucial. The system was tested offline, and its contribution was verified for the full database and for each volunteer individually. An automatic segmentation algorithm was applied to identify muscle activation; 13 feature sets and 6 classifiers were then tested. Moreover, 2 ensemble techniques aided in separating the sEMG signals into the 14 SIS gestures. An accuracy of 76% was obtained for the Support Vector Machine classifier over the full database, and 88% when analyzing volunteers individually. The system was demonstrated to be suitable for SIS gesture recognition using sEMG signals in wearable applications.
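A sketch of the offline pipeline outlined above, with assumed details: windows of 8-channel Myo sEMG are reduced to simple time-domain features (mean absolute value and RMS per channel, a common sEMG choice, not necessarily the paper's feature set) and classified into the 14 SIS gestures with an SVM.

```python
# Sketch: time-domain sEMG features -> SVM over 14 gesture classes.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def features(window: np.ndarray) -> np.ndarray:
    """window: (samples, 8 channels) -> 16-dim feature vector (MAV + RMS)."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])

rng = np.random.default_rng(1)
X = np.array([features(rng.normal(size=(200, 8))) for _ in range(420)])
y = rng.integers(0, 14, size=420)  # 14 SIS gesture classes (synthetic labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.predict(X[:5]))
```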


Subjects
Gestures, Automated Pattern Recognition, Humans, Electromyography/methods, Automated Pattern Recognition/methods, Computer-Assisted Signal Processing, Algorithms, Surgical Instruments, Hand
7.
Sci Rep; 13(1): 7956, 2023 May 17.
Article in English | MEDLINE | ID: mdl-37198179

ABSTRACT

Hand gesture recognition (HGR) based on electromyography signals (EMGs) and inertial measurement unit signals (IMUs) has been investigated for human-machine applications in recent years. The information obtained from HGR systems has the potential to help control machines such as video games, vehicles, and even robots. The key idea of an HGR system is therefore to identify the moment at which a hand gesture was performed, as well as its class. Several state-of-the-art human-machine approaches use supervised machine learning (ML) techniques for the HGR system, but using reinforcement learning (RL) to build HGR systems for human-machine interfaces is still an open problem. This work presents an RL approach to classify EMG-IMU signals obtained using a Myo Armband sensor. We create an agent based on the Deep Q-learning algorithm (DQN) to learn a policy from online experiences to classify EMG-IMU signals. The proposed HGR system's accuracy reaches up to [Formula: see text] and [Formula: see text] for classification and recognition, respectively, with an average inference time of 20 ms per window observation, and we demonstrate that our method outperforms other approaches in the literature. We then test the HGR system by controlling two different robotic platforms. The first is a three-degree-of-freedom (DOF) tandem helicopter test bench, and the second is a virtual six-DOF UR5 robot. We employ the designed HGR system together with the inertial measurement unit (IMU) integrated into the Myo sensor to command and control the motion of both platforms. The movement of the helicopter test bench and the UR5 robot is governed by a PID controller scheme. Experimental results show the effectiveness of the proposed DQN-based HGR system in controlling both platforms with a fast and accurate response.
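A compact, hedged sketch of classification-as-RL in the spirit described above (not the authors' implementation): each EMG-IMU feature window is a state, each gesture class an action, and the reward is +1/-1 for a correct/incorrect label. With one-step episodes the Q-target reduces to the reward; a full DQN would add experience replay and a target network. Dimensions are assumed.

```python
# Sketch: one-step Q-learning over gesture classes with an epsilon-greedy policy.
import numpy as np
import torch
import torch.nn as nn

N_FEATURES, N_CLASSES = 40, 6  # assumed feature and class counts
q_net = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(), nn.Linear(64, N_CLASSES))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

rng = np.random.default_rng(2)
for step in range(1000):
    x = torch.tensor(rng.normal(size=(1, N_FEATURES)), dtype=torch.float32)
    label = int(rng.integers(N_CLASSES))  # stand-in for the true gesture class
    q = q_net(x)
    eps = max(0.05, 1.0 - step / 500)     # decaying epsilon-greedy exploration
    action = int(rng.integers(N_CLASSES)) if rng.random() < eps else int(q.argmax())
    reward = 1.0 if action == label else -1.0
    loss = (q[0, action] - reward) ** 2   # one-step episode: target = reward
    opt.zero_grad()
    loss.backward()
    opt.step()
```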


Subjects
Robotic Surgical Procedures, Robotics, Humans, Gestures, Algorithms, Upper Extremity, Electromyography/methods, Hand
8.
Int J Comput Assist Radiol Surg; 18(7): 1279-1285, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37253925

ABSTRACT

PURPOSE: This research aims to facilitate the use of state-of-the-art computer vision algorithms for the automated training of surgeons and the analysis of surgical footage. By estimating 2D hand poses, we model the movement of the practitioner's hands and their interaction with surgical instruments to study their potential benefit for surgical training. METHODS: We leverage models pre-trained on a publicly available hands dataset to create our own in-house dataset of 100 open surgery simulation videos with 2D hand poses. We also assess the ability of pose estimations to segment surgical videos into gestures and tool-usage segments, and compare them to kinematic sensors and I3D features. Furthermore, we introduce 6 novel surgical dexterity proxies stemming from domain experts' training advice, all of which our framework can automatically detect given raw video footage. RESULTS: State-of-the-art gesture segmentation accuracy of 88.35% on the open surgery simulation dataset is achieved by fusing 2D poses and I3D features from multiple angles. The introduced surgical skill proxies showed significant differences between novices and experts and produced actionable feedback for improvement. CONCLUSION: This research demonstrates the benefit of pose estimations for open surgery by analyzing their effectiveness in gesture segmentation and skill assessment. Gesture segmentation using pose estimations achieved results comparable to physical sensors while being remote and markerless. Surgical dexterity proxies that rely on pose estimation proved they can be used to work toward automated training feedback. We hope our findings encourage additional collaboration on novel skill proxies to make surgical training more efficient.
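A hedged sketch of the fusion idea above: per-frame 2D hand-pose features and I3D features are concatenated and fed to a per-frame classifier, yielding a frame-wise gesture segmentation. The feature dimensions, label count, and classifier are illustrative assumptions; the paper's segmentation model is likely more sophisticated.

```python
# Sketch: concatenate pose and I3D features, classify each frame into a segment label.
import numpy as np
from sklearn.linear_model import LogisticRegression

T = 500                                  # frames in one video
pose = np.random.randn(T, 42)            # e.g., 21 keypoints x (x, y) per hand
i3d = np.random.randn(T, 1024)           # per-frame I3D feature vector
X = np.concatenate([pose, i3d], axis=1)  # fused representation
y = np.random.randint(0, 5, size=T)      # 5 gesture/tool-usage labels (synthetic)

frame_clf = LogisticRegression(max_iter=1000).fit(X, y)
segmentation = frame_clf.predict(X)      # frame-wise labels; smoothed in practice
```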


Subjects
Algorithms, Hand, Humans, Feedback, Hand/surgery, Computer Simulation, Movement, Gestures
11.
Exp Brain Res; 241(3): 743-752, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36720746

ABSTRACT

Human actions are subject to various compatibility phenomena. For example, responding is faster to the side where a stimulus appears than to the opposite side, referred to as stimulus-response (S-R) compatibility. This holds even when the response is given to a different stimulus feature and location itself is irrelevant (Simon compatibility). In addition, responses typically produce perceivable effects in the environment. If they do so in a predictable way, responses are faster when they produce a (e.g., spatially) compatible effect on the same side rather than on the other side. That is, a left response is produced faster if it predictably results in a left effect than in a right effect. This is called response-effect (R-E) compatibility. Finally, compatibility can also exist between stimuli and effects, accordingly called stimulus-effect (S-E) compatibility. Such compatibility phenomena are also relevant for applied purposes, be it in laparoscopic surgery or aviation. The present study investigates Simon and R-E compatibility for touchless gesture interactions. In line with a recent study, no effect of R-E compatibility was observed, yet irrelevant stimulus location yielded a large Simon effect. With regard to compatibility phenomena, touchless gestures thus seem to behave differently from interactions via (other) tools such as levers.
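For readers unfamiliar with the measure, an illustrative sketch (not from the paper) of how a Simon effect is quantified: mean reaction time on incompatible trials minus mean reaction time on compatible trials. The trial data below are made up.

```python
# Sketch: Simon effect = mean RT (incompatible) - mean RT (compatible).
from statistics import mean

trials = [  # (stimulus side, response side, reaction time in ms) - made-up data
    ("left", "left", 412), ("right", "right", 405),
    ("left", "right", 478), ("right", "left", 469),
]
compatible = [rt for s, r, rt in trials if s == r]
incompatible = [rt for s, r, rt in trials if s != r]
simon_effect = mean(incompatible) - mean(compatible)
print(f"Simon effect: {simon_effect:.0f} ms")  # positive = compatibility benefit
```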


Subjects
Gestures, Psychomotor Performance, Humans, Psychomotor Performance/physiology, Reaction Time/physiology
14.
Int J Comput Assist Radiol Surg; 18(5): 909-919, 2023 May.
Article in English | MEDLINE | ID: mdl-36418763

ABSTRACT

BACKGROUND: Virtual reality (VR) technology is an ideal alternative for operation training and surgical teaching. However, virtual surgery is usually carried out using a mouse or data gloves, which affects the authenticity of the virtual operation. A virtual surgery system with gesture recognition and real-time image feedback was explored to achieve more authentic immersion. METHOD: A gesture recognition technology with an efficient, real-time algorithm and high fidelity was explored. Recognition of the hand contour, palm, and fingertips was first realized by hand data extraction. Then, a Support Vector Machine classifier was utilized to classify and recognize common gestures after feature extraction. The collision detection algorithm adopted an Axis-Aligned Bounding Box (AABB) binary tree to build hand and scalpel collision models. Moreover, the nominal radius theorem (NRT) and the separating axis theorem (SAT) were applied to speed up collision detection. Based on the maxillofacial virtual surgical system we proposed previously, the feasibility of integrating the above technologies into this prototype system was evaluated. RESULTS: Ten static signal gestures were designed to test the gesture recognition algorithms. The accuracy of gesture recognition was more than 80%, and over 90% for some gestures. The generation speed of the collision detection model met the software requirements using NRT and SAT. The response time of gesture recognition was less than 40 ms; that is, the hand gesture recognition system ran at more than 25 Hz. With hand gesture recognition integrated, typical virtual surgical procedures, including grabbing a scalpel, selecting a puncture site, performing a virtual puncture, and making an incision, were carried out with real-time image feedback. CONCLUSION: Based on the previous maxillofacial virtual surgical system consisting of VR, triangular-mesh collision detection, and a maxillofacial biomechanical model, integrating hand gesture recognition is a feasible way to improve the interactivity and immersion of virtual surgical operation training.
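A minimal sketch of the collision-detection primitive named above: two axis-aligned bounding boxes (AABBs) overlap iff their intervals overlap on every axis, which is the separating-axis idea restricted to the three world axes. The paper's AABB binary tree would invoke such a test at each node; the boxes below are hypothetical.

```python
# Sketch: AABB-vs-AABB intersection test, the leaf test of an AABB tree.
def aabb_overlap(min_a, max_a, min_b, max_b):
    """Each argument is an (x, y, z) tuple; True if the boxes intersect."""
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i] for i in range(3))

# Hypothetical hand and scalpel bounding boxes:
print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (2, 2, 2)))  # True
print(aabb_overlap((0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3)))        # False
```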


Subjects
Gestures, Oral Surgery, Animals, Mice, Algorithms, Software, User-Computer Interface, Hand
15.
Int J Comput Assist Radiol Surg; 18(8): 1429-1436, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36565368

ABSTRACT

PURPOSE: Past research has investigated and developed robotic ultrasound. In this context, interfaces that allow interaction with the robotic system are of paramount importance. Few researchers have addressed the development of non-tactile interaction approaches, although these could be beneficial for maintaining sterility during medical procedures. Interaction could be supported by multimodality, which has the potential to enable intuitive and natural interaction. To assess the feasibility of multimodal interaction for non-tactile control of a co-located robotic ultrasound system, a novel human-robot interaction concept was developed. METHODS: The medical use case of needle-based interventions under hybrid computed tomography and ultrasound imaging was analyzed by interviewing four radiologists. From the resulting workflow, interaction tasks involving human-robot interaction were derived. Based on this, the characteristics of a multimodal, touchless human-robot interface were elaborated, suitable interaction modalities were identified, and a corresponding interface was developed and subsequently evaluated in a user study with eight participants. RESULTS: The implemented interface combines voice commands for discrete control with hand-gesture control for navigation of the robotic US probe. The interaction concept was evaluated by the users with a quantitative usability questionnaire. Qualitative analysis of interview results revealed user satisfaction with the implemented interaction methods and identified potential improvements to the system. CONCLUSION: A multimodal, touchless interaction concept for a robotic US system, for the use case of needle-based procedures in interventional radiology, was developed, incorporating combined voice and hand-gesture control. Future steps will include integrating a solution for the missing haptic feedback and evaluating the concept's clinical suitability.
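An illustrative sketch (assumptions, not the study's software) of the modality split described above: voice commands trigger discrete robot actions, while tracked hand positions stream continuous probe navigation targets. The command set and coordinate mapping are hypothetical.

```python
# Sketch: route discrete voice commands and continuous gesture input separately.
from dataclasses import dataclass

@dataclass
class ProbeTarget:
    x: float
    y: float
    z: float

DISCRETE_COMMANDS = {"start scan", "stop scan", "confirm target"}  # assumed set

def handle_voice(command: str) -> str:
    """Discrete control channel: recognized commands trigger robot actions."""
    if command in DISCRETE_COMMANDS:
        return f"executing: {command}"
    return "unrecognized command"

def handle_gesture(hand_position) -> ProbeTarget:
    """Navigation channel: map tracked hand coordinates into the robot workspace
    (identity mapping here for illustration)."""
    return ProbeTarget(*hand_position)

print(handle_voice("start scan"))
print(handle_gesture((0.10, -0.05, 0.30)))
```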


Subjects
Robotic Surgical Procedures, Robotics, Humans, User-Computer Interface, Gestures, Ultrasonography
16.
Soft Robot; 10(3): 580-589, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36459109

ABSTRACT

Soft robotic hands are inherently safer and more compliant in robot-environment interaction than rigid manipulators, but their flexibility and versatility still need improvement. In this article, a gesture-adaptive soft-rigid robotic hand is proposed. The robotic hand has three pneumatic two-segment fingers. Each finger segment is driven independently, allowing flexible gesture adjustment to match different object shapes. The palm is constructed from a rigid skeleton driven by a soft pneumatic spring, providing firm support, a large workspace, and independent force control for the fingers. A geometric model of the robotic hand is established, based on which a grasping gesture optimization algorithm is adopted. The fingers achieve optimal contact with objects by maximizing the curving similarity with the object outlines. Experiments show that the soft-rigid robotic hand provides adaptive and reliable grasping of objects of different sizes, shapes, and materials with optimized gestures.
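A hedged sketch of the gesture-optimization idea above: choose per-segment finger bending that best matches the object's outline curvature, scored here by negative squared error over a coarse grid. All quantities are illustrative stand-ins for the paper's geometric model.

```python
# Sketch: grid-search per-segment bends to maximize curvature similarity.
import numpy as np

object_curvature = np.array([0.8, 0.6])  # target curvature per finger segment

def match_score(bend_angles):
    finger_curvature = np.asarray(bend_angles)  # assume bend ~ curvature here
    return -np.sum((finger_curvature - object_curvature) ** 2)

candidates = [(a, b) for a in np.linspace(0, 1, 21) for b in np.linspace(0, 1, 21)]
best = max(candidates, key=match_score)
print(f"best segment bends: {best}")  # approximately (0.8, 0.6)
```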


Subjects
Robotic Surgical Procedures, Robotics, Gestures, Hand, Fingers
18.
J Robot Surg; 17(2): 597-603, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36149590

ABSTRACT

Our group previously defined a dissection gesture classification system that deconstructs robotic tissue dissection into its most elemental yet meaningful movements. The purpose of this study was to expand upon this framework by adding an assessment of gesture efficacy (ineffective, effective, or erroneous) and to analyze dissection patterns between groups of surgeons of varying experience. We defined three possible gesture efficacies: ineffective (no meaningful effect on the tissue), effective (intended effect on the tissue), and erroneous (unintended disruption of the tissue). Novices (0 prior robotic cases), intermediates (1-99 cases), and experts (≥ 100 cases) completed a robotic dissection task in a dry-lab training environment. Video recordings were reviewed to classify each gesture and determine its efficacy, and dissection patterns between groups were then analyzed. 23 participants completed the task: 9 novices, 8 intermediates with a median caseload of 60 (IQR 41-80), and 6 experts with a median caseload of 525 (IQR 413-900). For gesture selection, increasing experience was associated with an increasing proportion of overall dissection gestures (p = 0.009) and a decreasing proportion of retraction gestures (p = 0.009). For gesture efficacy, novices performed the greatest proportion of ineffective gestures (9.8%, p < 0.001), intermediates committed the greatest proportion of erroneous gestures (26.8%, p < 0.001), and the three groups performed similar proportions of overall effective gestures, though experts performed the greatest proportion of effective retraction gestures (85.6%, p < 0.001). Between experience groups, we found significant differences in gesture selection and gesture efficacy. These relationships may provide insight into further improving surgical training.
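An illustrative sketch (not the study's analysis code) of how efficacy proportions across experience groups can be compared with a chi-squared test of independence. The counts below are made up; the paper reports proportions such as 9.8% ineffective for novices and 26.8% erroneous for intermediates.

```python
# Sketch: chi-squared test on a groups x efficacy contingency table.
from scipy.stats import chi2_contingency

# Rows: novice, intermediate, expert; columns: ineffective, effective, erroneous.
counts = [
    [49, 380, 71],   # made-up novice gesture counts
    [20, 310, 121],  # made-up intermediate gesture counts
    [10, 420, 40],   # made-up expert gesture counts
]
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
```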


Subjects
Robotic Surgical Procedures, Robotics, Humans, Robotic Surgical Procedures/methods, Gestures, Movement
19.
Adv Mater; 34(35): e2204355, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35817476

ABSTRACT

Noncontact interactive technology provides an intelligent solution for mitigating the public-health risks of cross-infection in the era of COVID-19. Using human radiation as a stimulus source is conducive to implementing low-power, robust noncontact human-machine interaction. However, the low intensity of the radiation emitted by humans places high demands on photodetection performance. Here, a SrTiO3-x/CuNi-heterostructure-based thermopile is constructed, combining high thermoelectric performance with near-unity long-wave infrared absorption, to realize self-powered detection of human radiation. The response of this thermopile to human radiation is orders of magnitude higher than those of low-dimensional-materials-based photothermoelectric detectors and even commercial thermopiles. Furthermore, a touchless input device based on the thermopile array is developed, which can recognize hand gestures, numbers, and letters in real time. This work offers a reliable strategy for integrating spontaneous human radiation into noncontact human-machine interaction systems.


Subjects
COVID-19, Gestures, Humans, Light
20.
Sci Rep; 12(1): 6950, 2022 Jun 09.
Article in English | MEDLINE | ID: mdl-35680934

ABSTRACT

The dog (Canis familiaris) was the first domesticated animal, and hundreds of breeds exist today. During domestication, dogs experienced strong selection for temperament, behaviour, and cognitive ability, but the genetic basis of these abilities is not well understood. We focused on ancient dog breeds to investigate breed-related differences in social cognitive abilities. In a problem-solving task, ancient breeds showed a lower tendency to look back at humans than other European breeds. In a two-way object choice task, they showed no differences in correct response rate or in the ability to read human communicative gestures. We examined gene polymorphisms in oxytocin, the oxytocin receptor, the melanocortin 2 receptor, and a Williams-Beuren syndrome-related gene (WBSCR17), as candidate genes of dog domestication. The single-nucleotide polymorphisms on the melanocortin 2 receptor were related to both tasks, while the other polymorphisms were associated with the unsolvable task. This indicates that glucocorticoid functions are involved in the cognitive skills acquired during dog domestication.


Subjects
Dogs, Domestication, Human-Animal Interaction, Animals, Domestic Animals, Animal Behavior/physiology, Communication, Dogs/genetics, Gestures, Humans, N-Acetylgalactosaminyltransferases/genetics, Oxytocin, Single Nucleotide Polymorphism, Melanocortin Type 2 Receptor/genetics, Oxytocin Receptors/genetics, Polypeptide N-Acetylgalactosaminyltransferase