Results 1 - 20 of 52
1.
Article in English | MEDLINE | ID: mdl-38598406

ABSTRACT

Autonomous Ultrasound Image Quality Assessment (US-IQA) is a promising tool to aid interpretation by practicing sonographers and to enable the future robotization of ultrasound procedures. However, autonomous US-IQA faces several challenges. Ultrasound images contain many spurious artifacts, such as noise due to handheld probe positioning, errors in the selection of probe parameters, and patient respiration during the procedure. Further, these images vary widely in appearance with the individual patient's physiology. We propose a deep Convolutional Neural Network (CNN), USQNet, which uses a Multi-scale and Local-to-Global Second-order Pooling (MS-L2GSoP) classifier to conduct a sonographer-like assessment of image quality. The classifier first extracts features at multiple scales to encode inter-patient anatomical variation, similar to a sonographer's understanding of anatomy. It then applies second-order pooling in the intermediate layers (local) and at the end of the network (global) to exploit the second-order statistical dependency of multi-scale structural and multi-region textural features. The L2GSoP captures higher-order relationships between spatial locations and provides the seed for correlating local patches, much as a sonographer prioritizes regions across the image. We experimentally validated USQNet on a new dataset of human urinary bladder ultrasound images, first against subjective assessments annotated by experienced radiologists, and then against state-of-the-art CNNs for US-IQA and ablated counterparts of our model. The results show that USQNet achieves an accuracy of 92.4% and outperforms the SOTA models by 3-14% while requiring comparable computation time.
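The second-order pooling at the heart of MS-L2GSoP replaces simple channel averaging with channel covariance statistics. A minimal NumPy sketch of global second-order (covariance) pooling, illustrating the idea rather than the USQNet implementation:

```python
import numpy as np

def second_order_pool(feat):
    """Global second-order (covariance) pooling of a CNN feature map.

    feat: array of shape (C, H, W). Returns the vectorized upper
    triangle of the C x C covariance matrix, i.e. the pairwise channel
    dependencies that first-order (average) pooling discards.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)               # C channels x N spatial samples
    x = x - x.mean(axis=1, keepdims=True)    # center each channel
    cov = x @ x.T / (h * w - 1)              # C x C covariance
    iu = np.triu_indices(c)                  # symmetric: keep upper triangle
    return cov[iu]

pooled = second_order_pool(np.random.rand(8, 16, 16))
print(pooled.shape)  # (36,) -- 8 * 9 / 2 unique covariance entries
```

In USQNet this kind of pooling is applied both at intermediate layers (local) and at the network output (global); stacking such descriptors is what lets the classifier model dependencies between spatial regions.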

2.
IEEE Trans Artif Intell ; 4(6): 1472-1483, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38090475

ABSTRACT

Zero-shot learning (ZSL) is a paradigm in transfer learning that aims to recognize unknown categories from a mere description of them. ZSL has been thoroughly studied for static object recognition; however, ZSL for dynamic events (ZSER) such as activities and gestures has hardly been investigated. In this context, this paper addresses ZSER by relying on semantic attributes of events to transfer learned knowledge from seen classes to unseen ones. First, we used the Amazon Mechanical Turk platform to create the first attribute-based gesture dataset, referred to as ZSGL, comprising the categories present in the MSRC and Italian gesture datasets. Overall, our ZSGL dataset consists of 26 categories, 65 discriminative attributes, and 16 attribute annotations and 400 examples per category. We used trainable recurrent networks and 3D CNNs to learn the spatio-temporal features. Next, we propose a simple yet effective end-to-end approach for ZSER, referred to as Joint Sequential Semantic Encoder (JSSE), to explore temporal patterns, to efficiently represent events in the latent space, and to simultaneously optimize for both the semantic and classification tasks. We evaluate our model on ZSGL and two action datasets (UCF and HMDB), and compare the performance of JSSE against several existing baselines under four experimental conditions: (1) within-category, (2) across-category, (3) closed-set, and (4) open-set. Results show that JSSE considerably outperforms the other approaches (p < 0.05) and performs favorably on both action datasets in all experimental conditions.
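The attribute-transfer step can be reduced to a toy sketch: map a model's predicted attribute vector to the most similar class description, so that classes with no training examples remain predictable. The data and attribute names below are illustrative, not part of the JSSE model:

```python
import numpy as np

def zero_shot_predict(pred_attrs, class_attrs):
    """Assign the class whose semantic attribute vector is most
    similar (cosine) to the predicted attributes -- unseen classes
    need no training examples, only a description."""
    a = class_attrs / np.linalg.norm(class_attrs, axis=1, keepdims=True)
    p = pred_attrs / np.linalg.norm(pred_attrs)
    return int(np.argmax(a @ p))

# Two hypothetical gesture classes described by 3 binary attributes
descriptions = np.array([[1.0, 0.0, 1.0],   # class 0: e.g. "two-handed, circular"
                         [0.0, 1.0, 0.0]])  # class 1: e.g. "one-handed"
print(zero_shot_predict(np.array([0.9, 0.1, 0.8]), descriptions))  # 0
```

JSSE learns the mapping from video to attribute space end-to-end; the nearest-description lookup shown here is the final, model-agnostic step.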

3.
Mil Med ; 188(Suppl 6): 412-419, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948233

ABSTRACT

INTRODUCTION: Remote military operations require rapid response times for effective relief and critical care. Yet the military theater operates under austere conditions, so communication links are unreliable and subject to physical and virtual attacks and degradation at unpredictable times. Immediate medical care at these austere locations requires semi-autonomous teleoperated systems, which enable the completion of medical procedures even over interrupted networks while isolating medics from the dangers of the battlefield. However, to achieve autonomy for complex surgical and critical care procedures, robots require extensive programming or massive libraries of surgical skill demonstrations to learn effective policies using machine learning algorithms. Although such datasets are achievable for simple tasks, providing a large number of demonstrations for surgical maneuvers is not practical. This article presents a method for learning from demonstration that combines knowledge from demonstrations with reinforcement learning (RL) to eliminate reward shaping. In addition to reducing the data required for training, the self-supervised nature of RL, in conjunction with expert knowledge-driven rewards, produces more generalizable policies that tolerate dynamic environment changes. A multimodal representation for interaction enables learning complex contact-rich surgical maneuvers. The effectiveness of the approach is shown using the cricothyroidotomy task, a standard critical care procedure for opening the airway. In addition, we provide a method for segmenting the teleoperator's demonstration into subtasks and classifying the subtasks using sequence modeling. MATERIALS AND METHODS: A database of demonstrations for the cricothyroidotomy task was collected, comprising six fundamental maneuvers referred to as surgemes. The dataset was collected by teleoperating a collaborative robotic platform, SuperBaxter, with modified surgical grippers. Two learning models were then developed for processing the dataset: one for automatic segmentation of the task demonstrations into a sequence of surgemes and a second for classifying each segment into labeled surgemes. Finally, a multimodal off-policy RL method with rewards learned from demonstrations was developed to learn surgeme execution from these demonstrations. RESULTS: The task segmentation model has an accuracy of 98.2%. The surgeme classification model using the proposed interaction features achieved a classification accuracy of 96.25% averaged across all surgemes, compared to 87.08% without these features and 85.4% using a support vector machine classifier. Finally, the robot execution achieved a task success rate of 93.5%, compared to baselines of behavioral cloning (78.3%) and twin-delayed deep deterministic policy gradient with shaped rewards (82.6%). CONCLUSIONS: Results indicate that the proposed interaction features improve the accuracy of segmentation and classification of surgical tasks. The proposed method for learning surgemes from demonstrations exceeds popular methods for skill learning. The effectiveness of the proposed approach demonstrates the potential for future remote telemedicine on battlefields.
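As a toy illustration of replacing hand-shaped rewards with demonstration-derived ones, the sketch below scores a state by its distance to the nearest demonstrated state. This is a simplified stand-in, not the paper's learned reward model:

```python
import numpy as np

def demo_reward(state, demo_states):
    """Reward derived from demonstrations: the closer the current
    state is to any demonstrated state, the higher the reward.
    This avoids hand-crafting a shaped reward for each maneuver."""
    dists = np.linalg.norm(demo_states - state, axis=1)
    return -float(dists.min())

demos = np.array([[0.0, 0.0], [1.0, 1.0]])        # toy 2-D demonstration states
print(demo_reward(np.array([0.0, 1.0]), demos))   # -1.0 (one unit from nearest demo)
```

An off-policy RL agent can then maximize such a reward while exploring, letting a handful of demonstrations substitute for extensive reward engineering.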


Subject(s)
Robotics, Computer-Assisted Surgery, Humans, Robotics/methods, Algorithms, Computer-Assisted Surgery/methods, Machine Learning
4.
Mil Med ; 188(Suppl 6): 208-214, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948255

ABSTRACT

INTRODUCTION: U.S. military healthcare providers increasingly perform prolonged casualty care because of operations in settings with prolonged evacuation times. Varied training and experience mean that this care may fall to providers unfamiliar with critical care. Telemedicine tools with audiovisual capabilities, artificial intelligence (AI), and augmented reality (AR) can enhance inexperienced personnel's competence and confidence when providing prolonged casualty care. Furthermore, offline functionality provides assistance options in communications-limited settings. The intent of the Trauma TeleHelper for Operational Medical Procedure Support and Offline Network (THOMPSON) is to develop (1) a voice-controlled mobile application with video references for procedural guidance, (2) audio narration of each video using procedure mentoring scripts, and (3) an AI-guided intervention system using AR overlay and voice commands to create immersive video modeling. These capabilities will be available offline and in downloadable format. MATERIALS AND METHODS: The Trauma THOMPSON platform is in development. Focus groups of subject matter experts will identify appropriate procedures and best practices. Procedural video recordings will be collected to develop reference materials for the Trauma THOMPSON mobile application and to train a machine learning algorithm for action recognition and anticipation. Finally, an efficacy evaluation of the application will be conducted in a simulated environment. RESULTS: Preliminary video collection has been initiated for tube thoracostomy, needle decompression, cricothyrotomy, intraosseous access, and tourniquet application. Initial results from the machine learning algorithm show action recognition and anticipation accuracies of 20.1% and 11.4%, respectively, on unscripted "in the wild" datasets, albeit limited ones. This system performs over 100 times better than random prediction.
CONCLUSIONS: Developing a platform to provide real-time, offline support will deliver the benefits of synchronous expert advice within communications-limited and remote environments. Trauma THOMPSON has the potential to fill an important gap for clinical decision support tools in these settings.


Subject(s)
Augmented Reality, Clinical Decision Support Systems, Humans, Artificial Intelligence, Communication, Algorithms
5.
Mil Med ; 188(Suppl 6): 480-487, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948270

ABSTRACT

INTRODUCTION: Increased complexity in robotic-assisted surgical system interfaces introduces problems with human-robot collaboration that result in excessive mental workload (MWL), adversely impacting a surgeon's task performance and increasing error probability. Real-time monitoring of the operator's MWL will aid in identifying when and how interventions can best be provided to moderate MWL. In this study, an MWL-based adaptive automation system is constructed and evaluated for its effectiveness during robotic-assisted surgery. MATERIALS AND METHODS: This study first recruited 10 participants to perform surgical tasks under different cognitive workload levels. Physiological signals were recorded and used to build a real-time system for cognitive workload monitoring. To evaluate the effectiveness of the proposed system, 15 participants were recruited to perform the surgical task with and without it. The participants' task performance and perceived workload were collected and compared. RESULTS: The proposed neural network model achieved an accuracy of 77.9% in cognitive workload classification. In addition, better task performance and lower perceived workload were observed when participants completed the experimental task with adaptive aiding from the proposed system. CONCLUSIONS: The proposed MWL monitoring system diminished participants' perceived workload and increased their task performance under high-stress conditions via interventions by a semi-autonomous suction tool. These preliminary comparative results show the potential of automated adaptive aiding systems to enhance surgical task performance via cognitive workload-triggered interventions in robotic-assisted surgery.
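A workload-triggered intervention loop of this kind can be sketched as a simple state machine. The threshold values and hysteresis below are illustrative assumptions, not figures from the study:

```python
class AdaptiveAutomation:
    """Engage assistance when predicted workload is high; use
    hysteresis (separate on/off thresholds) so aiding does not
    flicker when predictions hover near a single cutoff."""

    def __init__(self, on_thresh=0.7, off_thresh=0.5):
        self.on_thresh = on_thresh
        self.off_thresh = off_thresh
        self.assisting = False

    def update(self, p_high_workload):
        if not self.assisting and p_high_workload >= self.on_thresh:
            self.assisting = True      # e.g. activate semi-autonomous suction
        elif self.assisting and p_high_workload <= self.off_thresh:
            self.assisting = False     # hand control back to the surgeon
        return self.assisting

aa = AdaptiveAutomation()
print([aa.update(p) for p in (0.4, 0.8, 0.6, 0.3)])  # [False, True, True, False]
```

Note how the 0.6 reading keeps the assist engaged: once triggered, aiding persists until workload clearly drops, which is the design choice that prevents on/off chatter.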


Subject(s)
Robotic Surgical Procedures, Robotics, Humans, Task Performance and Analysis, Workload, Automation
6.
Mil Med ; 188(Suppl 6): 674-681, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948279

ABSTRACT

INTRODUCTION: Between 5% and 20% of all combat-related casualties are attributed to burn wounds. Early treatment can decrease burn mortality by about 36%, but this is contingent on accurate characterization of the burn. Precise burn injury classification is recognized as a crucial aspect of medical artificial intelligence (AI). We describe an autonomous AI system designed to analyze multiple characteristics of burns using modalities including ultrasound and RGB images. MATERIALS AND METHODS: A two-part dataset was created for training and validation of the AI: in vivo B-mode ultrasound scans collected from porcine subjects (10,085 frames), and RGB images manually collected from web sources (338 images). The framework leverages an explanation system to corroborate and integrate burn experts' knowledge, suggesting new features and ensuring the validity of the model. Using this framework, we found that B-mode ultrasound classifiers can be enhanced by supplying textural features; more specifically, statistical texture features extracted from ultrasound frames increase the accuracy of the burn depth classifier. RESULTS: The system, with all features selected using explainable AI, classifies burn depth with accuracy and average F1 above 80%. Additionally, the segmentation module achieves a mean global accuracy greater than 84% and a mean intersection-over-union score over 0.74. CONCLUSIONS: This work demonstrates the feasibility of accurate and automated burn characterization by AI and indicates that such systems can be improved with additional features when a human expert is combined with explainable AI. This is demonstrated on real data (human for segmentation and porcine for depth classification) and establishes the groundwork for further deep-learning work in burn analysis.
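Statistical texture features of the kind found to help the B-mode classifier can be computed directly from pixel intensities. The specific feature set below (first-order statistics) is a plausible assumption, not the paper's exact list:

```python
import numpy as np

def texture_features(frame, bins=32):
    """First-order statistical texture features of a grayscale frame:
    intensity mean, variance, skewness, and histogram entropy."""
    x = frame.ravel().astype(float)
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {
        "mean": x.mean(),
        "variance": x.var(),
        "skewness": ((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12),
        "entropy": entropy,
    }

feats = texture_features(np.random.rand(64, 64) * 255)
print(sorted(feats))  # ['entropy', 'mean', 'skewness', 'variance']
```

Such scalar features are simply concatenated with the CNN's learned representation; the explainable-AI loop described above is what justified including them.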


Subject(s)
Artificial Intelligence, Burns, Humans, Swine, Animals, Ultrasonography
7.
Can J Surg ; 66(6): E522-E534, 2023.
Article in English | MEDLINE | ID: mdl-37914210

ABSTRACT

People suffering from critical injuries/illness face marked challenges before transportation to definitive care. Solutions to diagnose and intervene in the prehospital setting are required to improve outcomes. Despite advances in artificial intelligence and robotics, near-term practical interventions for catastrophic injuries/illness will require humans to perform unfamiliar, uncomfortable and risky interventions. Development of posttraumatic stress disorder is already disproportionately high among first responders and correlates with uncertainty and doubts concerning decisions, actions and inactions. Technologies such as remote telementoring (RTM) may enable such interventions and will hopefully decrease potential stress for first responders. How thought processes may be remotely assisted using RTM and other technologies should be studied urgently. We need to understand if the use of cognitively offloading technologies such as RTM will alleviate, or at least not exacerbate, the psychological stresses currently disabling first responders.


Subject(s)
Artificial Intelligence, Emergency Medical Services, Humans, Cognition
8.
Annu Rev Biomed Eng ; 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37832939

ABSTRACT

Assistive technologies (AT) enable people with disabilities to perform activities of daily living more independently, have greater access to community and healthcare services, and be more productive performing educational and/or employment tasks. Integrating artificial intelligence (AI) with various agents, including electronics, robotics, and software, has revolutionized AT, resulting in groundbreaking technologies such as mind-controlled exoskeletons, bionic limbs, intelligent wheelchairs, and smart home assistants. This article provides a review of various AI techniques that have helped those with physical disabilities, including brain-computer interfaces, computer vision, natural language processing, and human-computer interaction. The current challenges and future directions for AI-powered advanced technologies are also addressed. Expected final online publication date for the Annual Review of Biomedical Engineering, Volume 26 is May 2024. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.

9.
Sensors (Basel) ; 23(9)2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37177557

ABSTRACT

Previous studies of robotic-assisted surgery (RAS) have examined cognitive workload by modulating surgical task difficulty, and many have relied on self-reported workload measurements. However, the contributors to cognitive workload and their effects are complex and may not be sufficiently captured by changes in task difficulty alone. This study aims to understand how multi-task requirements contribute to the prediction of cognitive load in RAS under different task difficulties. Multimodal physiological signals (EEG, eye-tracking, HRV) were collected as university students performed simulated RAS tasks spanning two levels of surgical task difficulty under three different multi-task requirement levels. EEG spectral analysis was sensitive enough to distinguish the degree of cognitive workload under both conditions (surgical task difficulty and multi-task requirement). Eye-tracking measurements also showed differences under both conditions, but significant differences in HRV were observed only under the multi-task requirement conditions. Multimodal neural network models achieved up to 79% accuracy for both surgical conditions.
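EEG spectral workload markers are typically band-power ratios. A minimal sketch computing a theta/alpha ratio, a common workload proxy; the specific bands and ratio here are a generic assumption, not the study's exact feature set:

```python
import numpy as np

def band_power(sig, fs, lo, hi):
    """Mean spectral power of `sig` within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def workload_index(eeg, fs=256):
    """Theta (4-8 Hz) over alpha (8-13 Hz) power: theta rises and
    alpha falls with increasing mental workload."""
    return band_power(eeg, fs, 4, 8) / band_power(eeg, fs, 8, 13)

fs = 256
t = np.arange(2 * fs) / fs
print(workload_index(np.sin(2 * np.pi * 6 * t), fs))  # theta-dominant -> ratio >> 1
```

In a multimodal model, such spectral indices would be one input stream alongside eye-tracking and HRV features.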


Subject(s)
Robotic Surgical Procedures, Humans, Task Performance and Analysis, Workload/psychology, Self Report, Neural Networks (Computer)
10.
Hum Factors ; 65(5): 737-758, 2023 08.
Article in English | MEDLINE | ID: mdl-33241945

ABSTRACT

OBJECTIVE: The goal of this systematic literature review is to investigate the relationship between indirect physiological measurements and direct measures of situation awareness (SA). BACKGROUND: Across different environments and tasks, SA is often assessed using techniques designed specifically to measure it directly, such as SAGAT, SPAM, and/or SART. However, research suggests that indirect physiological sensing methods may also be capable of predicting SA. Currently, it is unclear which physiological approaches are sensitive to changes in SA. METHOD: Seven databases were searched following the PRISMA reporting guidelines. Eligibility criteria included human-subject experiments that used at least one direct SA assessment technique as well as at least one physiological measurement. Information extracted from each article comprised the physiological metric(s), the direct SA measurement(s), the correlation between the two, and the experimental task(s). All studies underwent a quality assessment. RESULTS: Twenty-five articles were included in this review. Eye tracking was the most commonly used physiological measure, and correlations between conscious aspects of eye movement and direct SA scores were observed. Evidence for cardiovascular predictors of SA was mixed. EEG studies were too few to support strong conclusions but were consistently positive. CONCLUSION: Further investigation is needed to methodically collect more relevant data and comprehensively model the relationships between a wider range of physiological measurements and direct assessments of SA. APPLICATION: This review will guide researchers and practitioners in methods to indirectly assess SA with sensors and highlights opportunities for future research on wearables and SA.


Subject(s)
Awareness, Eye Movements, Humans, Awareness/physiology, Reproducibility of Results, Forecasting
11.
Hum Factors ; : 187208221129940, 2022 Nov 11.
Article in English | MEDLINE | ID: mdl-36367971

ABSTRACT

OBJECTIVE: This study developed and evaluated a mental workload-based adaptive automation (MWL-AA) system that monitors surgeons' cognitive load and assists them during cognitively demanding tasks in robotic-assisted surgery (RAS). BACKGROUND: The introduction of RAS can overwhelm operators. Precise, continuous assessment of human mental workload (MWL) states is needed to identify when interventions should be delivered to moderate operators' MWL. METHOD: The MWL-AA presented in this study was a semi-autonomous suction tool. The first experiment recruited ten participants to perform surgical tasks under different MWL levels. Their physiological responses were captured and used to develop a real-time multi-sensing model for MWL detection. The second experiment evaluated the effectiveness of the MWL-AA: nine brand-new surgical trainees performed the surgical task with and without it. Mixed-effects models were used to compare task performance and objectively and subjectively measured MWL. RESULTS: The proposed system predicted high-MWL hemorrhage conditions with an accuracy of 77.9%. In the MWL-AA evaluation, the surgeons' gaze behaviors and brain activities suggested lower perceived MWL with the MWL-AA than without. This was further supported by lower self-reported MWL and better task performance in the condition with the MWL-AA. CONCLUSION: An MWL-AA system can reduce surgeons' workload and improve performance in a high-stress hemorrhaging scenario. The findings highlight the potential of MWL-AA to enhance collaboration between autonomous systems and surgeons. Developing a robust and personalized MWL-AA is a first step toward additional use cases in future studies. APPLICATION: The proposed framework can be expanded and applied to more complex environments to improve human-robot collaboration.

12.
Isr Med Assoc J ; 24(9): 596-601, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36168179

ABSTRACT

BACKGROUND: Handheld ultrasound devices present an opportunity for prehospital sonographic assessment of trauma, even in the hands of the novice operators commonly found in military, maritime, or other austere environments. However, the reliability of such point-of-care ultrasound (POCUS) examinations by novices is rightly questioned. A common strategy being examined to mitigate this reliability gap is remote mentoring by an expert. OBJECTIVES: To assess the feasibility of utilizing POCUS in the hands of novice military or civilian emergency medical services (EMS) providers, with and without the use of telementoring, and to assess the mitigating or exacerbating effect telementoring may have on operator stress. METHODS: Thirty-seven inexperienced physicians and EMTs serving as first responders in military or civilian EMS were randomized to receive or not receive telementoring during three POCUS trials: live model, Simbionix trainer, and jugular phantom. Salivary cortisol was obtained before and after the trial. Heart rate variability monitoring was performed throughout the trial. RESULTS: There were no significant differences in clinical performance between the two groups. Iatrogenic complications of jugular venous catheterization were reduced by 26% in the telementored group (P < 0.001). Salivary cortisol levels dropped by 39% (P < 0.001) in the telementored group. Heart rate variability data also suggested mitigation of stress. CONCLUSIONS: Telementoring of POCUS tasks was not found to improve performance by novices, but findings suggest that it may mitigate caregiver stress.
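Heart rate variability stress monitoring of the kind used here commonly reduces to time-domain statistics over RR intervals. A sketch of RMSSD, one standard such metric; the study's exact HRV features are not specified, so this is illustrative:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR
    intervals (ms). Lower RMSSD generally reflects reduced vagal
    tone, i.e. higher physiological stress."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)
    return float(np.sqrt(np.mean(diffs ** 2)))

print(rmssd([800, 810, 800, 820]))  # successive differences: +10, -10, +20 ms
```

Tracking a metric like this over the course of a trial, alongside cortisol samples, is how stress mitigation can be inferred without self-report.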


Subject(s)
Emergency Medical Services, Point-of-Care Systems, Humans, Hydrocortisone, Reproducibility of Results, Ultrasonography
13.
Am J Surg ; 224(2): 769-774, 2022 08.
Article in English | MEDLINE | ID: mdl-35379484

ABSTRACT

INTRODUCTION: Exsanguination is the most preventable cause of death. Paradigms such as STOP THE BLEED recognize increased responsibility among the less experienced, with wound packing (WP) being a critical skill. As even trained providers may perform poorly, we compared video-modelling (VM), a form of behavioural modelling involving video demonstration prior to intervention, against remote telementoring (RTM), which involves real-time remote expert guidance. METHODS: Search-and-rescue technicians (SAR-Techs) trained in WP were asked to pack a wound on a standardized simulator, randomized to RTM, VM, or control. RESULTS: 24 SAR-Techs (median age 37, median 16.5 years of experience) participated. Controls were consistently faster than RTM (p = 0.005) and VM (p < 0.001), with no difference between RTM and VM. However, 50% (n = 4) of controls failed to pack properly, compared to 100% success in both VM and RTM, despite all SAR-Techs feeling the task was "easy." DISCUSSION: Performance of a life-saving technique was improved by either VM or RTM, suggesting that the two techniques are beneficial and complementary. Further work should extend to law enforcement and the lay public to examine logistical challenges.


Subject(s)
Telemedicine, Adult, Bandages, Hemorrhage/prevention & control, Humans, Pilot Projects, Telemedicine/methods
14.
Can J Surg ; 65(2): E242-E249, 2022.
Article in English | MEDLINE | ID: mdl-35365497

ABSTRACT

BACKGROUND: Early hemorrhage control after interpersonal violence is the most urgent requirement to preserve life and is now recognized as a responsibility of law enforcement. Although earlier entry of first responders is advocated, many shooting scenes remain unsafe for humans, necessitating first responses conducted by robots. Thus, robotic hemorrhage control warrants study as a care-under-fire treatment option. METHODS: Two bomb disposal robots (Wolverine and Dragon Runner) were retrofitted with hemostatic wound clamps. The robots' ability to apply a wound clamp to a simulated extremity exsanguination while controlled by 4 experienced operators was tested. The operators were randomly assigned to perform 10 trials using 1 robot each. A third surveillance robot (Stair Climber) provided further visualization for the operators. We assessed the success rate of the application of the wound clamp to the simulated wound, the time to application of the wound clamp and the amount of fluid loss. We also assessed the operators' efforts to apply the wound clamp after an initial attempt was unsuccessful or after the wound clamp was dropped. RESULTS: Remote robotic application of a wound clamp was demonstrated to be feasible, with complete cessation of simulated bleeding in 60% of applications. This finding was consistent across all operators and both robots. There was no difference in the success rates with the 2 robots (p = 1.00). However, there were differences in fluid loss (p = 0.004) and application time (p < 0.001), with the larger (Wolverine) robot being faster and losing less fluid. CONCLUSION: Law enforcement tactical robots were consistently able to provide partial to complete hemorrhage control in a simulated extremity exsanguination. Consideration should be given to using this approach in care-under-fire and care-behind-the-barricade scenarios as well as further developing the technology and doctrine for robotic hemorrhage control.


Subject(s)
Bombs (Explosive Devices), Hemostatics, Robotics, Constriction, Hemorrhage/etiology, Hemorrhage/prevention & control, Humans
15.
Sci Rep ; 12(1): 4504, 2022 03 16.
Article in English | MEDLINE | ID: mdl-35296714

ABSTRACT

Adoption of robotic-assisted surgery has steadily increased, as it improves the surgeon's dexterity and visualization. Despite these advantages, the success of a robotic procedure is highly dependent on the availability of a proficient surgical assistant who can collaborate with the surgeon. With the introduction of novel medical devices, surgeons have taken over some of the surgical assistant's tasks to increase their independence. This, however, has also exposed surgeons to higher cognitive demands that can lead to reduced performance. In this work, we propose a neurotechnology-based semi-autonomous assistant that relieves the main surgeon of the additional cognitive demands of a critical support task: blood suction. To create a more synergistic collaboration between the surgeon and the robotic assistant, a real-time cognitive workload assessment system based on EEG signals and eye-tracking was introduced. A computational experiment demonstrates that cognitive workload can be detected with 80% accuracy. We then show how surgical performance can be improved by using the neurotechnological autonomous assistant in a closed feedback loop to prevent states of high cognitive demand. Our findings highlight the potential of real-time cognitive workload assessment to improve collaboration between an autonomous algorithm and the surgeon.


Subject(s)
Robotic Surgical Procedures, Robotics, Surgeons, Humans, Robotic Surgical Procedures/methods, Suction, Workload
16.
Prehosp Disaster Med ; 37(1): 71-77, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35177133

ABSTRACT

BACKGROUND: New care paradigms are required to enable remote life-saving interventions (RLSIs) in extreme environments such as disaster settings. Informatics may assist through just-in-time expert remote telementoring (RTM) or video-modelling (VM). Currently, RTM relies on real-time communication that may not be reliable in some locations, especially if communications fail. Neither technique has been extensively developed, however, and both may need to be performed by inexperienced providers to save lives. A pilot comparison was thus conducted. METHODS: Procedure-naïve search-and-rescue technicians (SAR-Techs) performed a tube thoracostomy (TT) on a surgical simulator, randomly allocated to RTM or VM. The VM group watched a pre-prepared video illustrating TT immediately beforehand, while the RTM group was remotely guided by an expert in real time. Standard outcomes included success, safety, and tube security for the TT procedure. RESULTS: There were no differences in experience between the groups. Of the 13 SAR-Techs randomized to VM, 12 (92%) placed the TT successfully, safely, and securely, while 100% (11/11) of the TTs placed by the RTM group were successful, safe, and secure. Statistically, there was no difference (P = 1.000) between RTM and VM in safety, success, or tube security. However, with VM, one subject cut himself, one did not puncture the pleura, and one had barely adequate placement; there were no such issues in the mentored group. Total time was significantly faster with RTM (P = .02); however, if time-to-watch was discounted, VM was quicker (P < .001). CONCLUSIONS: Randomized evaluation revealed that both paradigms have strengths. If VM can be utilized during "travel time," it is quicker, but it does not facilitate troubleshooting. RTM, on the other hand, produced no errors in TT placement and enabled guidance and remediation by the mentor, presumably avoiding failure, increasing safety, and potentially providing psychological support. Ultimately, both techniques appear to have merit and may be complementary, justifying continued research into the human factors of performing RLSIs in the extreme environments likely to arise in natural and man-made disasters.


Subject(s)
Chest Tubes, Thoracostomy, Humans, Pilot Projects, Thoracostomy/methods
17.
Annu Rev Biomed Eng ; 23: 115-139, 2021 07 13.
Article in English | MEDLINE | ID: mdl-33770455

ABSTRACT

Telemedicine is perhaps the most rapidly growing area in health care. Approximately 15 million Americans receive medical assistance remotely every year. Yet rural communities face significant challenges in securing subspecialist care. In the United States, 25% of the population resides in rural areas, where less than 15% of physicians work. Current surgery residency programs do not adequately prepare surgeons for rural practice. Telementoring, wherein a remote expert guides a less experienced caregiver, has been proposed to address this challenge. Nonetheless, existing mentoring technologies are not widely available to rural communities, owing to a lack of infrastructure and of mentor availability. For this reason, some clinicians prefer simpler and more reliable technologies. This article presents past and current telementoring systems, with a focus on rural settings, and proposes a set of requirements for such systems. We conclude with a perspective on the future of telementoring systems and the integration of artificial intelligence within them.


Subject(s)
Mentoring, Surgeons, Telemedicine, Artificial Intelligence, Humans, Rural Population, United States
18.
Mil Med ; 186(Suppl 1): 288-294, 2021 01 25.
Article in English | MEDLINE | ID: mdl-33499518

ABSTRACT

INTRODUCTION: Short response time is critical for future military medical operations in austere or remote settings. Effective patient care at the point of injury can greatly benefit from the integration of semi-autonomous robotic systems. To achieve autonomy, robots would require massive libraries of maneuvers collected for the purpose of training machine learning algorithms. Although this is attainable in controlled settings, obtaining surgical data in austere settings can be difficult. Hence, in this article, we present the Dexterous Surgical Skill (DESK) database for knowledge transfer between robots. The peg transfer task was selected as it is one of the six main tasks of laparoscopic training. In addition, we provide a machine learning framework to evaluate novel transfer learning methodologies on this database. METHODS: A set of surgical gestures was collected for a peg transfer task, composed of seven atomic maneuvers referred to as surgemes. The DESK dataset comprises surgical robotic skills collected on four robotic platforms: Taurus II, simulated Taurus II, YuMi, and the da Vinci Research Kit. We then explored two learning scenarios: no-transfer and domain-transfer. In the no-transfer scenario, the training and testing data were obtained from the same domain; in the domain-transfer scenario, the training data were a blend of simulated and real robot data, tested on a real robot. RESULTS: Using simulation data to train the learning algorithms enhances performance on the real robot where limited or no real data are available. The transfer model achieved an accuracy of 81% for the YuMi robot when the ratio of real-to-simulated data was 22% to 78%. For the Taurus II and the da Vinci, the model achieved accuracies of 97.5% and 93%, respectively, trained only with simulation data.
CONCLUSIONS: The results indicate that simulation can be used to augment training data to enhance the performance of learned models in real scenarios. This shows potential for the future use of surgical data from the operating room in deployable surgical robots in remote areas.
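The domain-transfer setup described above can be sketched as a simple data-blending step. This is not the paper's pipeline; the 22%/78% split matches the ratio quoted for the YuMi experiment, while the function name, dataset sizes, and sample contents are illustrative:

```python
import random

def blend_training_set(sim_data, real_data, real_fraction=0.22,
                       n_total=100, seed=0):
    """Build a mixed training pool of simulated and real-robot samples.

    A fixed fraction of real samples (default 22%) is combined with
    simulated samples (the remaining 78%) and shuffled, mirroring the
    domain-transfer scenario in which a model trained on mostly
    simulated data is evaluated on a real robot.
    """
    rng = random.Random(seed)
    n_real = round(n_total * real_fraction)
    n_sim = n_total - n_real
    batch = rng.sample(real_data, n_real) + rng.sample(sim_data, n_sim)
    rng.shuffle(batch)
    return batch

# Illustrative datasets tagged by domain
sim = [("sim", i) for i in range(200)]
real = [("real", i) for i in range(50)]
train = blend_training_set(sim, real)
print(sum(1 for src, _ in train if src == "real"))  # 22
```

Keeping the blend ratio explicit makes it easy to sweep real-data fractions and measure how little real-robot data is needed to reach a target accuracy.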


Subject(s)
Robotics, Clinical Competence, Computer Simulation, Humans, Laparoscopy, Machine Learning
19.
Appl Ergon ; 90: 103251, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32961465

ABSTRACT

Training of surgeons is essential for the safe and effective use of robotic surgery, yet current tools for assessing learning progression are limited. The objective of this study was to measure changes in trainees' cognitive and behavioral states as they progressed through a robotic surgeon training curriculum at a medical institution. Seven surgical trainees in urology with no formal robotic training experience participated in the simulation curriculum. They repeatedly performed 12 robotic skills exercises of varying difficulty in separate sessions. EEG (electroencephalogram) activity and eye movements were measured throughout to calculate three metrics: engagement index (an indicator of task engagement), pupil diameter (an indicator of mental workload), and gaze entropy (an indicator of randomness in gaze pattern). Performance scores (completion of task goals) and mental workload ratings (NASA Task Load Index) were collected after each exercise, and changes in performance scores between training sessions were calculated. Analysis of variance, repeated-measures correlation, and machine learning classification were used to determine how cognitive and behavioral states were associated with performance increases or decreases between sessions. The changes in performance were correlated with changes in engagement index (r_rm = -.25, p < .001) and gaze entropy (r_rm = -.37, p < .001). Changes in cognitive and behavioral states predicted training outcomes with 72.5% accuracy. Findings suggest that cognitive and behavioral metrics correlate with changes in performance between sessions. These measures can complement current feedback tools used by medical educators and learners for skills assessment in robotic surgery training.
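The association between change scores reported above can be illustrated with a plain Pearson correlation; the study itself used repeated-measures correlation (rmcorr), which additionally removes between-subject variance, so this stdlib-only sketch on invented data shows only the simpler pooled version:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between paired change scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Invented session-to-session deltas: performance gains tend to coincide
# with drops in gaze entropy (more structured visual scanning).
d_perf = [5, -2, 3, 1, -4, 2, 0]
d_entropy = [-0.3, 0.2, -0.1, -0.2, 0.4, -0.2, 0.1]
print(round(pearson(d_perf, d_entropy), 2))
```

A negative coefficient here plays the role of the reported r_rm = -.37 for gaze entropy: as performance improves between sessions, gaze becomes less random.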


Subject(s)
Robotic Surgical Procedures, Robotics, Simulation Training, Surgeons, Clinical Competence, Curriculum, Humans, Workload
20.
IEEE Trans Hum Mach Syst ; 50(5): 434-443, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33005497

ABSTRACT

Choosing adequate gestures for touchless interfaces is a challenging task that directly impacts human-computer interaction. Such gestures are commonly determined by the designer through ad hoc, rule-based, or agreement-based methods. Previous approaches to assessing agreement group the gestures into equivalence classes and ignore the integral properties shared between them. In this work, we propose a generalized framework that inherently incorporates the gesture descriptors into the agreement analysis (GDA). In contrast to previous approaches, we represent gestures using binary description vectors and allow them to be partially similar. In this context, we introduce a new metric, the Soft Agreement Rate (SAR), to measure the level of agreement, and provide a mathematical justification for it. Further, we performed computational experiments to study the behavior of SAR and demonstrate that existing agreement metrics are special cases of our approach. Our method was evaluated through a guessability study conducted with a group of neurosurgeons, though the formulation can be applied to any other user-elicitation study. Results show that the level of agreement obtained by SAR is 2.64 times higher than that of previous metrics. Finally, we show that our approach complements existing agreement techniques by generating an artificial lexicon based on the most agreed properties.
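The core idea of crediting partial similarity can be sketched with a toy soft-agreement score. This is not the published SAR formula; it simply replaces the classical "identical or not" comparison with mean pairwise Jaccard similarity over binary descriptor vectors:

```python
from itertools import combinations

def soft_agreement(proposals):
    """Toy soft-agreement score over binary gesture descriptors.

    Each proposal is a tuple of 0/1 descriptor flags. Classical agreement
    counts only identical proposals; here every pair of proposals
    contributes its Jaccard similarity, so gestures that share some but
    not all properties add partial credit.
    """
    def jaccard(u, v):
        inter = sum(a & b for a, b in zip(u, v))
        union = sum(a | b for a, b in zip(u, v))
        return inter / union if union else 1.0

    pairs = list(combinations(proposals, 2))
    return sum(jaccard(u, v) for u, v in pairs) / len(pairs)

# Three participants propose gestures sharing some descriptor properties
g = [(1, 1, 0, 0), (1, 1, 1, 0), (1, 0, 0, 1)]
print(round(soft_agreement(g), 3))  # 0.417
```

Under a classical equivalence-class metric these three distinct proposals would contribute zero agreement; the soft score instead reflects their shared descriptors, which is the effect the abstract attributes to SAR.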
