Results 1 - 20 of 52
1.
Annu Rev Biomed Eng ; 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37832939

ABSTRACT

Assistive technologies (AT) enable people with disabilities to perform activities of daily living more independently, have greater access to community and healthcare services, and be more productive performing educational and/or employment tasks. Integrating artificial intelligence (AI) with various agents, including electronics, robotics, and software, has revolutionized AT, resulting in groundbreaking technologies such as mind-controlled exoskeletons, bionic limbs, intelligent wheelchairs, and smart home assistants. This article provides a review of various AI techniques that have helped those with physical disabilities, including brain-computer interfaces, computer vision, natural language processing, and human-computer interaction. The current challenges and future directions for AI-powered advanced technologies are also addressed. Expected final online publication date for the Annual Review of Biomedical Engineering, Volume 26 is May 2024. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.

2.
Sensors (Basel) ; 23(9)2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37177557

ABSTRACT

Previous studies in robotic-assisted surgery (RAS) have examined cognitive workload by modulating surgical task difficulty, and many of these studies have relied on self-reported workload measurements. However, the contributors to cognitive workload and their effects are complex and may not be sufficiently captured by changes in task difficulty alone. This study aims to understand how multi-task requirements contribute to the prediction of cognitive load in RAS under different task difficulties. Multimodal physiological signals (EEG, eye-tracking, HRV) were collected as university students performed simulated RAS tasks combining two levels of surgical task difficulty with three levels of multi-task requirement. EEG spectral analysis was sensitive enough to distinguish the degree of cognitive workload under both surgical conditions (surgical task difficulty/multi-task requirement). In addition, eye-tracking measurements showed differences under both conditions, but significant differences in HRV were observed only under the multi-task requirement conditions. Multimodal neural network models achieved up to 79% accuracy for both surgical conditions.
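A minimal sketch of the kind of multimodal workload classifier this abstract describes, assuming pre-extracted EEG band-power, eye-tracking, and HRV features; the feature names, dimensions, and classifier below are illustrative assumptions, not the study's pipeline.

```python
# Hypothetical sketch: fuse EEG, eye-tracking, and HRV features and
# classify low vs. high cognitive workload with a small neural network.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials = 120

# Placeholder features; in a real pipeline these would come from EEG
# spectral analysis, gaze metrics, and HRV computed per task segment.
eeg_bandpower = rng.normal(size=(n_trials, 32))   # e.g., theta/alpha power per channel
eye_metrics   = rng.normal(size=(n_trials, 6))    # e.g., pupil diameter, fixation stats
hrv_metrics   = rng.normal(size=(n_trials, 4))    # e.g., RMSSD, SDNN, LF/HF
X = np.hstack([eeg_bandpower, eye_metrics, hrv_metrics])
y = rng.integers(0, 2, size=n_trials)             # 0 = low workload, 1 = high workload

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```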


Subjects
Robotic Surgical Procedures, Humans, Task Performance and Analysis, Workload/psychology, Self Report, Neural Networks, Computer
3.
Can J Surg ; 66(6): E522-E534, 2023.
Article in English | MEDLINE | ID: mdl-37914210

ABSTRACT

People suffering from critical injuries/illness face marked challenges before transportation to definitive care. Solutions to diagnose and intervene in the prehospital setting are required to improve outcomes. Despite advances in artificial intelligence and robotics, near-term practical interventions for catastrophic injuries/illness will require humans to perform unfamiliar, uncomfortable and risky interventions. Development of posttraumatic stress disorder is already disproportionately high among first responders and correlates with uncertainty and doubts concerning decisions, actions and inactions. Technologies such as remote telementoring (RTM) may enable such interventions and will hopefully decrease potential stress for first responders. How thought processes may be remotely assisted using RTM and other technologies should be studied urgently. We need to understand if the use of cognitively offloading technologies such as RTM will alleviate, or at least not exacerbate, the psychological stresses currently disabling first responders.


Subjects
Artificial Intelligence, Emergency Medical Services, Humans, Cognition
4.
Hum Factors ; 65(5): 737-758, 2023 08.
Article in English | MEDLINE | ID: mdl-33241945

ABSTRACT

OBJECTIVE: The goal of this systematic literature review is to investigate the relationship between indirect physiological measurements and direct measures of situation awareness (SA). BACKGROUND: Across different environments and tasks, assessments of SA are often performed using techniques designed specifically to measure SA directly, such as SAGAT, SPAM, and/or SART. However, research suggests that indirect physiological sensing methods may also be capable of predicting SA. Currently, it is unclear which particular physiological approaches are sensitive to changes in SA. METHOD: Seven databases were searched following the PRISMA reporting guidelines. Eligibility criteria included human-subject experiments that used at least one direct SA assessment technique as well as at least one physiological measurement. Information extracted from each article was the physiological metric(s), the direct SA measurement(s), the correlation between these two metrics, and the experimental task(s). All studies underwent a quality assessment. RESULTS: Twenty-five articles were included in this review. Eye tracking techniques were the most commonly used physiological measures, and correlations between conscious aspects of eye movement measures and direct SA scores were observed. Evidence for cardiovascular predictors of SA was mixed. EEG studies were too few to support strong conclusions, but their findings were consistently positive. CONCLUSION: Further investigation is needed to methodically collect more relevant data and comprehensively model the relationships between a wider range of physiological measurements and direct assessments of SA. APPLICATION: This review will guide researchers and practitioners in methods to indirectly assess SA with sensors and highlight opportunities for future research on wearables and SA.


Subjects
Awareness, Eye Movements, Humans, Awareness/physiology, Reproducibility of Results, Forecasting
5.
Annu Rev Biomed Eng ; 23: 115-139, 2021 07 13.
Article in English | MEDLINE | ID: mdl-33770455

ABSTRACT

Telemedicine is perhaps the most rapidly growing area in health care. Approximately 15 million Americans receive medical assistance remotely every year. Yet rural communities face significant challenges in securing subspecialist care. In the United States, 25% of the population resides in rural areas, where less than 15% of physicians work. Current surgery residency programs do not adequately prepare surgeons for rural practice. Telementoring, wherein a remote expert guides a less experienced caregiver, has been proposed to address this challenge. Nonetheless, existing mentoring technologies are not widely available to rural communities, owing to a lack of infrastructure and mentor availability. For this reason, some clinicians prefer simpler and more reliable technologies. This article presents past and current telementoring systems, with a focus on rural settings, and proposes a set of requirements for such systems. We conclude with a perspective on the future of telementoring systems and the integration of artificial intelligence within those systems.


Subjects
Mentoring, Surgeons, Telemedicine, Artificial Intelligence, Humans, Rural Population, United States
6.
Can J Surg ; 65(2): E242-E249, 2022.
Article in English | MEDLINE | ID: mdl-35365497

ABSTRACT

BACKGROUND: Early hemorrhage control after interpersonal violence is the most urgent requirement to preserve life and is now recognized as a responsibility of law enforcement. Although earlier entry of first responders is advocated, many shooting scenes remain unsafe for humans, necessitating first responses conducted by robots. Thus, robotic hemorrhage control warrants study as a care-under-fire treatment option. METHODS: Two bomb disposal robots (Wolverine and Dragon Runner) were retrofitted with hemostatic wound clamps. The robots' ability to apply a wound clamp to a simulated extremity exsanguination while controlled by 4 experienced operators was tested. The operators were randomly assigned to perform 10 trials using 1 robot each. A third surveillance robot (Stair Climber) provided further visualization for the operators. We assessed the success rate of the application of the wound clamp to the simulated wound, the time to application of the wound clamp and the amount of fluid loss. We also assessed the operators' efforts to apply the wound clamp after an initial attempt was unsuccessful or after the wound clamp was dropped. RESULTS: Remote robotic application of a wound clamp was demonstrated to be feasible, with complete cessation of simulated bleeding in 60% of applications. This finding was consistent across all operators and both robots. There was no difference in the success rates with the 2 robots (p = 1.00). However, there were differences in fluid loss (p = 0.004) and application time (p < 0.001), with the larger (Wolverine) robot being faster and losing less fluid. CONCLUSION: Law enforcement tactical robots were consistently able to provide partial to complete hemorrhage control in a simulated extremity exsanguination. Consideration should be given to using this approach in care-under-fire and care-behind-the-barricade scenarios as well as further developing the technology and doctrine for robotic hemorrhage control.


Subjects
Bombs, Hemostatics, Robotics, Constriction, Hemorrhage/etiology, Hemorrhage/prevention & control, Humans
7.
Hum Factors ; : 187208221129940, 2022 Nov 11.
Article in English | MEDLINE | ID: mdl-36367971

ABSTRACT

OBJECTIVE: This study developed and evaluated a mental workload-based adaptive automation (MWL-AA) system that monitors surgeons' cognitive workload and assists them during cognitively demanding tasks in robotic-assisted surgery (RAS). BACKGROUND: The introduction of RAS can overwhelm operators. Precise, continuous assessment of human mental workload (MWL) states is needed to identify when interventions should be delivered to moderate operators' MWL. METHOD: The MWL-AA presented in this study was a semi-autonomous suction tool. The first experiment recruited ten participants to perform surgical tasks under different MWL levels. Physiological responses were captured and used to develop a real-time multi-sensing model for MWL detection. The second experiment evaluated the effectiveness of the MWL-AA, with nine novice surgical trainees performing the surgical task with and without the MWL-AA. Mixed-effects models were used to compare task performance and objectively and subjectively measured MWL. RESULTS: The proposed system predicted high-MWL hemorrhage conditions with an accuracy of 77.9%. In the MWL-AA evaluation, the surgeons' gaze behaviors and brain activities suggested lower perceived MWL with the MWL-AA than without it. This was further supported by lower self-reported MWL and better task performance in the task condition with the MWL-AA. CONCLUSION: An MWL-AA system can reduce surgeons' workload and improve performance in a high-stress hemorrhaging scenario. The findings highlight the potential of MWL-AA to enhance collaboration between the autonomous system and surgeons. Developing a robust and personalized MWL-AA is a first step toward additional use cases in future studies. APPLICATION: The proposed framework can be expanded and applied to more complex environments to improve human-robot collaboration.
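The adaptive-automation idea can be illustrated with a toy trigger loop: a sliding window of workload predictions is monitored, and the semi-autonomous aid is engaged when predicted workload stays high. This is a hypothetical sketch, not the authors' implementation; the classifier interface, window length, and trigger rule are illustrative assumptions.

```python
# Hypothetical sketch of an MWL-triggered adaptive-automation loop.
from collections import deque

def adaptive_automation(feature_stream, predict_high_mwl, window=5, threshold=0.6):
    """Engage assistance when the fraction of 'high MWL' predictions in the
    last `window` samples reaches `threshold`. `predict_high_mwl` is any
    trained classifier returning 1 for high workload, 0 otherwise."""
    recent = deque(maxlen=window)
    for features in feature_stream:
        recent.append(predict_high_mwl(features))
        assist_on = len(recent) == window and sum(recent) / window >= threshold
        yield assist_on

# Example with a dummy classifier that flags samples whose mean exceeds 0.
dummy = lambda f: int(sum(f) / len(f) > 0)
stream = [[0.2, -0.1], [0.5, 0.4], [0.6, 0.2], [0.7, 0.1], [0.3, 0.4], [-0.5, -0.2]]
print(list(adaptive_automation(stream, dummy, window=3, threshold=0.67)))
```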

8.
Isr Med Assoc J ; 24(9): 596-601, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36168179

ABSTRACT

BACKGROUND: Handheld ultrasound devices present an opportunity for prehospital sonographic assessment of trauma, even in the hands of novice operators commonly found in military, maritime, or other austere environments. However, the reliability of such point-of-care ultrasound (POCUS) examinations by novices is rightly questioned. A common strategy being examined to mitigate this reliability gap is remote mentoring by an expert. OBJECTIVES: To assess the feasibility of utilizing POCUS in the hands of novice military or civilian emergency medicine service (EMS) providers, with and without the use of telementoring. To assess the mitigating or exacerbating effect telementoring may have on operator stress. METHODS: Thirty-seven inexperienced physicians and EMTs serving as first responders in military or civilian EMS were randomized to receive or not receive telementoring during three POCUS trials: live model, Simbionix trainer, and jugular phantom. Salivary cortisol was obtained before and after the trial. Heart rate variability monitoring was performed throughout the trial. RESULTS: There were no significant differences in clinical performance between the two groups. Iatrogenic complications of jugular venous catheterization were reduced by 26% in the telementored group (P < 0.001). Salivary cortisol levels dropped by 39% (P < 0.001) in the telementored group. Heart rate variability data also suggested mitigation of stress. CONCLUSIONS: Telementoring of POCUS tasks was not found to improve performance by novices, but findings suggest that it may mitigate caregiver stress.


Subjects
Emergency Medical Services, Point-of-Care Systems, Humans, Hydrocortisone, Reproducibility of Results, Ultrasonography
9.
Exp Brain Res ; 238(3): 537-550, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31974755

ABSTRACT

Electroencephalography (EEG) activity in the mu frequency band (8-13 Hz) is suppressed during both gesture performance and observation. However, it is not clear if or how particular characteristics within the kinematic execution of gestures map onto dynamic changes in mu activity. Mapping the time course of gesture kinematics onto that of mu activity could help clarify which aspects of gestures capture attention and aid in the classification of communicative intent. In this work, we test whether the timing of inflection points within gesture kinematics predicts the occurrence of oscillatory mu activity during passive gesture observation. The timing of salient features of performed gestures in video stimuli was determined by isolating inflection points in the hands' motion trajectories. Participants passively viewed the gesture videos while continuous EEG data were collected. We used wavelet analysis to extract mu oscillations at 11 Hz at central and occipital electrodes. We used linear regression to test for associations between the timing of inflection points in motion trajectories and mu oscillations that generalized across gesture stimuli. Separately, we also tested whether inflection point occurrences evoked mu/alpha responses that generalized across participants. Across all gestures and inflection points, and pooled across participants, peaks in 11 Hz EEG waveforms were detected 465 and 535 ms after inflection points at occipital and central electrodes, respectively. A regression model showed that inflection points in the motion trajectories strongly predicted subsequent mu oscillations (p < 0.01); effects were weaker and non-significant for low (17 Hz) and high (21 Hz) beta activity. When data were segmented by inflection point occurrence rather than stimulus onset, with participants treated as a random effect, inflection points evoked mu and beta activity from 308 to 364 ms at central electrodes, and broad activity from 226 to 800 ms at occipital electrodes. The results suggest that inflection points in gesture trajectories elicit coordinated activity in the visual and motor cortices, with prominent activity in the mu/alpha frequency band and extending into the beta frequency band. The time course of activity indicates that visual processing drives subsequent activity in the motor cortex during gesture processing, with a lag of approximately 80 ms.
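A minimal sketch of the analysis this abstract outlines: extract 11 Hz (mu) power with a complex Morlet wavelet and regress the mu envelope on a lagged indicator of kinematic inflection points. The sampling rate, wavelet width, and lag below are illustrative assumptions, not the study's parameters.

```python
# Hypothetical sketch: 11 Hz mu power via complex Morlet wavelet, then a
# lagged linear regression against inflection-point occurrences.
import numpy as np

fs, freq, n_cycles = 250, 11.0, 7          # assumed sampling rate and wavelet width
t = np.arange(-1, 1, 1 / fs)
sigma = n_cycles / (2 * np.pi * freq)
morlet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))

rng = np.random.default_rng(0)
eeg = rng.normal(size=10 * fs)             # placeholder single-channel EEG
mu_power = np.abs(np.convolve(eeg, morlet, mode="same")) ** 2

# Binary regressor: 1 at samples where an inflection point occurred.
inflection = np.zeros_like(eeg)
inflection[rng.choice(len(eeg), size=20, replace=False)] = 1
lag = int(0.5 * fs)                        # assumed ~500 ms visual-to-motor lag
x = np.roll(inflection, lag)

# Ordinary least squares: mu_power ~ intercept + lagged inflection indicator.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, mu_power, rcond=None)
print("slope for lagged inflection regressor:", beta[1])
```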


Subjects
Attention/physiology, Brain Waves/physiology, Electrophysiological Phenomena/physiology, Gestures, Adolescent, Adult, Electroencephalography/methods, Female, Humans, Male, Mirror Neurons/physiology, Motor Cortex/physiology, Psychomotor Performance/physiology, Visual Perception/physiology, Young Adult
10.
Hum Factors ; 62(8): 1365-1386, 2020 12.
Article in English | MEDLINE | ID: mdl-31560573

ABSTRACT

OBJECTIVE: The aim of this study is to assess the relationship between eye-tracking measures and perceived workload in robotic surgical tasks. BACKGROUND: Robotic techniques provide improved dexterity, stereoscopic vision, and an ergonomic control system compared with laparoscopic surgery, but the complexity of the interfaces and operations may pose new challenges to surgeons and compromise patient safety. Few studies have objectively quantified workload and its impact on performance in robotic surgery. Although not yet implemented in robotic surgery, minimally intrusive and continuous eye-tracking metrics have been shown to be sensitive to changes in workload in other domains. METHODS: Eight surgical trainees participated in 15 robotic skills simulation sessions. In each session, participants performed up to 12 simulated exercises. Correlation and mixed-effects analyses were conducted to explore the relationships between eye-tracking metrics and perceived workload. Machine learning classifiers were used to determine the sensitivity of differentiating between low and high workload with eye-tracking features. RESULTS: Gaze entropy increased as perceived workload increased, with a correlation of 0.51. Pupil diameter and gaze entropy distinguished differences in workload between task difficulty levels, and both metrics increased as task difficulty increased. The classification model using eye-tracking features achieved an accuracy of 84.7% in predicting workload levels. CONCLUSION: Eye-tracking measures can detect perceived workload during robotic tasks. They can potentially be used to identify task contributors to high workload and provide measures for robotic surgery training. APPLICATION: Workload assessment can be used for real-time monitoring of workload in robotic surgical training and provide assessments for performance and learning.
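Gaze entropy of the kind mentioned here can be computed from the distribution of fixations over screen regions. Below is a minimal sketch assuming a simple grid of areas of interest (AOIs); the grid size and the stationary-entropy definition are illustrative choices, not the study's exact metric.

```python
# Hypothetical sketch: stationary gaze entropy over a grid of AOIs.
import numpy as np

def gaze_entropy(fix_x, fix_y, n_bins=4):
    """Shannon entropy (bits) of the fixation distribution over an
    n_bins x n_bins grid covering normalized screen coordinates [0, 1]."""
    counts, _, _ = np.histogram2d(fix_x, fix_y, bins=n_bins, range=[[0, 1], [0, 1]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
focused   = rng.normal(0.5, 0.05, size=(2, 200)).clip(0, 1)   # gaze clustered centrally
scattered = rng.uniform(0, 1, size=(2, 200))                  # gaze spread widely
print(gaze_entropy(*focused), gaze_entropy(*scattered))        # scattered gaze -> higher entropy
```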


Subjects
Robotic Surgical Procedures, Benchmarking, Clinical Competence, Eye-Tracking Technology, Humans, Workload
11.
Ann Surg ; 270(2): 384-389, 2019 08.
Article in English | MEDLINE | ID: mdl-29672404

ABSTRACT

OBJECTIVE: This study investigates the benefits of a surgical telementoring system based on an augmented reality head-mounted display (ARHMD) that overlays surgical instructions directly onto the surgeon's view of the operating field, without workspace obstruction. SUMMARY BACKGROUND DATA: In conventional telestrator-based telementoring, the surgeon views annotations of the surgical field by shifting focus to a nearby monitor, which substantially increases cognitive load. As an alternative, tablets have been used between the surgeon and the patient to display instructions; however, tablets impose additional obstructions on the surgeon's motions. METHODS: Twenty medical students performed anatomical marking (Task 1) and abdominal incision (Task 2) on a patient simulator, in one of two telementoring conditions: ARHMD and telestrator. The dependent variables were placement error, number of focus shifts, and completion time. Furthermore, workspace efficiency was quantified as the number and duration of potential surgeon-tablet collisions avoided by the ARHMD. RESULTS: The ARHMD condition yielded smaller placement errors (Task 1: 45%, P < 0.001; Task 2: 14%, P = 0.01), fewer focus shifts (Task 1: 93%, P < 0.001; Task 2: 88%, P = 0.0039), and longer completion times (Task 1: 31%, P < 0.001; Task 2: 24%, P = 0.013). Furthermore, the ARHMD avoided potential tablet collisions (4.8 collisions totaling 3.2 seconds in Task 1; 3.8 collisions totaling 1.3 seconds in Task 2). CONCLUSION: The ARHMD system promises to improve accuracy and to eliminate focus shifts in surgical telementoring. Because ARHMD participants were able to refine their execution of instructions, task completion time increased. Unlike a tablet system, the ARHMD does not require modifying natural motions to avoid collisions.


Subjects
Augmented Reality, Education, Medical/methods, General Surgery/education, Monitoring, Intraoperative/methods, Patient Simulation, Surgical Procedures, Operative/education, Telemedicine/methods, Adult, Female, Humans, Imaging, Three-Dimensional, Male, Surgeons/education, Young Adult
12.
Can J Surg ; 62(6): E13-E15, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31782650

ABSTRACT

Summary: Providing the earliest hemorrhage control is now recognized as a shared responsibility of all members of society, including both the lay public and professionals, consistent with the Stop the Bleed campaign. However, providing early hemorrhage control in a hostile environment, such as the scene of a mass shooting, is extremely challenging. In such settings, the first access to a bleeding victim may be robotic. An all-purpose bomb robot was thus retrofitted with a commercial, off-the-shelf wound clamp and successfully applied to an extremity exsanguination simulator as a demonstration of remote robotic hemorrhage control. As this method can potentially control extremity hemorrhage, further development of the techniques, equipment and, most importantly, the guidelines and rules of engagement should continue. We suggest that in order to minimize the loss of life during an active shooter incident, the armamentarium of prehospital medical resources may be extended to include law-enforcement robots.


Subjects
Emergency Medical Services, Hemorrhage/therapy, Hemostatic Techniques/instrumentation, Robotics, Humans
13.
Article in English | MEDLINE | ID: mdl-38598406

ABSTRACT

Autonomous Ultrasound Image Quality Assessment (US-IQA) is a promising tool to aid interpretation by practicing sonographers and to enable the future robotization of ultrasound procedures. However, autonomous US-IQA faces several challenges. Ultrasound images contain many spurious artifacts, such as noise due to handheld probe positioning, errors in the selection of probe parameters, and patient respiration during the procedure. Further, these images are highly variable in appearance with respect to the individual patient's physiology. We propose to use a deep Convolutional Neural Network (CNN), USQNet, which utilizes a Multi-scale and Local-to-Global Second-order Pooling (MS-L2GSoP) classifier to conduct a sonographer-like assessment of image quality. This classifier first extracts features at multiple scales to encode the inter-patient anatomical variations, similar to a sonographer's understanding of anatomy. Then, it uses second-order pooling in the intermediate layers (local) and at the end of the network (global) to exploit the second-order statistical dependency of multi-scale structural and multi-region textural features. The L2GSoP captures the higher-order relationships between different spatial locations and provides the seed for correlating local patches, much as a sonographer prioritizes regions across the image. We experimentally validated USQNet on a new dataset of human urinary bladder ultrasound images. The validation involved first a subjective assessment against experienced radiologists' annotations, and then a comparison with state-of-the-art CNN models for US-IQA and with ablated counterparts of USQNet. The results demonstrate that USQNet achieves a remarkable accuracy of 92.4% and outperforms the state-of-the-art models by 3-14% while requiring comparable computation time.
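The second-order pooling idea behind MS-L2GSoP can be sketched with a generic covariance (bilinear) pooling layer; this is not the USQNet implementation, and the centering, signed square-root, and L2 normalization steps are common conventions assumed here for illustration.

```python
# Hypothetical sketch of a second-order (covariance) pooling layer in PyTorch.
import torch
import torch.nn as nn

class SecondOrderPooling(nn.Module):
    """Pools a (B, C, H, W) feature map into per-image C x C second-order
    statistics, followed by signed square-root and L2 normalization."""
    def forward(self, x):
        b, c, h, w = x.shape
        feats = x.reshape(b, c, h * w)
        feats = feats - feats.mean(dim=2, keepdim=True)         # center spatially
        cov = torch.bmm(feats, feats.transpose(1, 2)) / (h * w)  # (B, C, C)
        cov = torch.sign(cov) * torch.sqrt(cov.abs() + 1e-8)     # signed sqrt
        return nn.functional.normalize(cov.flatten(1), dim=1)    # L2 normalize

pool = SecondOrderPooling()
out = pool(torch.randn(2, 64, 8, 8))
print(out.shape)   # torch.Size([2, 4096])
```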

14.
Surg Innov ; 20(4): 377-84, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23037804

ABSTRACT

BACKGROUND: The standard practice in the operating room (OR) is having a surgical technician deliver surgical instruments to the surgeon quickly and inexpensively, as required. This human "in the loop" system may result in mistakes (eg, missing information, ambiguity of instructions, and delays). OBJECTIVE: Errors can be reduced or eliminated by integrating information technology (IT) and cybernetics into the OR. Gesture and voice automatic acquisition, processing, and interpretation allow interaction with these new systems without disturbing the normal flow of surgery. METHODS: This article describes the development of a cyber-physical management system (CPS), including a robotic scrub nurse, to support surgeons by passing surgical instruments during surgery as required and recording counts of surgical instruments into a personal health record (PHR). The robot used responds to hand signals and voice messages detected through sophisticated computer vision and data mining techniques. RESULTS: The CPS was tested during a mock surgery in the OR. The in situ experiment showed that the robot recognized hand gestures reliably (with an accuracy of 97%), it can retrieve instruments as close as 25 mm, and the total delivery time was less than 3 s on average. CONCLUSIONS: This online health tool allows the exchange of clinical and surgical information to electronic medical record-based and PHR-based applications among different hospitals, regardless of the style viewer. The CPS has the potential to be adopted in the OR to handle surgical instruments and track them in a safe and accurate manner, releasing the human scrub tech from these tasks.


Subjects
Cybernetics/instrumentation, Operating Rooms, Robotics/instrumentation, Surgery, Computer-Assisted/instrumentation, Surgical Instruments, Cybernetics/methods, Equipment Design, Gestures, Humans, Operating Room Technicians, Pattern Recognition, Automated, Software, Surgery, Computer-Assisted/methods
15.
Medicina (B Aires) ; 73(6): 539-42, 2013.
Article in English | MEDLINE | ID: mdl-24356263

ABSTRACT

This paper discusses the challenges and innovations related to the use of telementoring systems in the operating room. Most of the systems presented leverage three types of interaction channels: audio, visual, and physical. The audio channel enables the mentor to verbally instruct the trainee and allows the trainee to ask questions. The visual channel is used to deliver annotations, alerts, and other messages graphically to the trainee during the surgery; these visual representations are often displayed through a telestrator. The physical channel has been used in laparoscopic procedures by partially controlling the laparoscope through force feedback. While in face-to-face instruction the mentor produces gestures to convey certain aspects of the surgical instruction, telementoring systems offer no equivalent form of physical interaction between mentor and trainee in open surgical procedures. Even though the trend is toward more minimally invasive surgery (MIS), trauma surgery remains necessary, and timely initial resuscitation and stabilization of the patient are crucial. This paper presents a preliminary study conducted at the Indiana University Medical School and Purdue University, in which initial lexicons of surgical instructive gestures (SIGs) were determined through systematic observation of mentor and trainee operating together. The paper concludes with potential ways to convey gestural information through surgical robots.


Subjects
Education, Distance/methods, Education, Medical, Continuing/methods, Operating Rooms, Robotics/methods, Surgery, Computer-Assisted/education, Telemedicine/methods, Audiovisual Aids, Gestures, Humans, Inventions, Man-Machine Systems, Mentors, Robotics/education, Teaching Materials, Telemedicine/instrumentation
16.
IEEE Trans Artif Intell ; 4(6): 1472-1483, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38090475

ABSTRACT

Zero-shot learning (ZSL) is a paradigm in transfer learning that aims to recognize unknown categories from a mere description of them. The problem of ZSL has been thoroughly studied in the domain of static object recognition; however, ZSL for dynamic events (ZSER), such as activities and gestures, has hardly been investigated. In this context, this paper addresses ZSER by relying on semantic attributes of events to transfer the learned knowledge from seen classes to unseen ones. First, we utilized the Amazon Mechanical Turk platform to create the first attribute-based gesture dataset, referred to as ZSGL, comprising the categories present in the MSRC and Italian gesture datasets. Overall, our ZSGL dataset consisted of 26 categories, 65 discriminative attributes, and, per category, 16 attribute annotations and 400 examples. We used trainable recurrent networks and 3D CNNs to learn the spatio-temporal features. Next, we propose a simple yet effective end-to-end approach for ZSER, referred to as the Joint Sequential Semantic Encoder (JSSE), to explore temporal patterns, to efficiently represent events in the latent space, and to simultaneously optimize for both the semantic and classification tasks. We evaluate our model on ZSGL and two action datasets (UCF and HMDB) and compare the performance of JSSE against several existing baselines under four experimental conditions: 1. within-category, 2. across-category, 3. closed-set, and 4. open-set. Results show that JSSE considerably outperforms (p < 0.05) other approaches and performs favorably on both action datasets in all experimental conditions.
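The attribute-transfer idea can be illustrated with a linear sketch: map features into the attribute space using seen classes, then label an unseen-class example by its nearest attribute signature. The ridge-regression mapping and cosine matching below are generic choices for illustration, not the JSSE architecture, and all dimensions are placeholders.

```python
# Hypothetical sketch of attribute-based zero-shot classification.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_seen, n_unseen, d_feat, d_attr = 20, 6, 128, 65

# Per-class attribute signatures (e.g., crowd-sourced annotations).
attr_seen   = rng.random((n_seen, d_attr))
attr_unseen = rng.random((n_unseen, d_attr))

# Training examples from seen classes: features paired with class attributes.
labels_seen = rng.integers(0, n_seen, size=400)
X_train = rng.normal(size=(400, d_feat))
Y_train = attr_seen[labels_seen]

# 1) Learn a feature -> attribute mapping on seen classes.
mapper = Ridge(alpha=1.0).fit(X_train, Y_train)

# 2) Classify an unseen-class example by nearest attribute signature (cosine).
def predict_unseen(x):
    a = mapper.predict(x[None])[0]
    sims = attr_unseen @ a / (np.linalg.norm(attr_unseen, axis=1) * np.linalg.norm(a) + 1e-12)
    return int(np.argmax(sims))

print(predict_unseen(rng.normal(size=d_feat)))
```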

17.
Mil Med ; 188(Suppl 6): 412-419, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948233

ABSTRACT

INTRODUCTION: Remote military operations require rapid response times for effective relief and critical care. Yet the military theater operates under austere conditions, so communication links are unreliable and subject to physical and virtual attacks and degradation at unpredictable times. Immediate medical care at these austere locations requires semi-autonomous teleoperated systems, which enable the completion of medical procedures even over interrupted networks while isolating the medics from the dangers of the battlefield. However, to achieve autonomy for complex surgical and critical care procedures, robots require extensive programming or massive libraries of surgical skill demonstrations to learn effective policies using machine learning algorithms. Although such datasets are achievable for simple tasks, providing a large number of demonstrations for surgical maneuvers is not practical. This article presents a method for learning from demonstration that combines knowledge from demonstrations to eliminate reward shaping in reinforcement learning (RL). In addition to reducing the data required for training, the self-supervised nature of RL, in conjunction with expert knowledge-driven rewards, produces more generalizable policies tolerant to dynamic environment changes. A multimodal representation for interaction enables learning of complex, contact-rich surgical maneuvers. The effectiveness of the approach is shown using the cricothyroidotomy task, a standard procedure in critical care to open the airway. In addition, we provide a method for segmenting the teleoperator's demonstration into subtasks and classifying the subtasks using sequence modeling. MATERIALS AND METHODS: A database of demonstrations for the cricothyroidotomy task was collected, comprising six fundamental maneuvers referred to as surgemes. The dataset was collected by teleoperating a collaborative robotic platform (SuperBaxter) with modified surgical grippers. Two learning models were then developed for processing the dataset: one for automatic segmentation of the task demonstrations into a sequence of surgemes and a second for classifying each segment into labeled surgemes. Finally, a multimodal off-policy RL approach with rewards learned from demonstrations was developed to learn surgeme execution from these demonstrations. RESULTS: The task segmentation model has an accuracy of 98.2%. The surgeme classification model using the proposed interaction features achieved a classification accuracy of 96.25% averaged across all surgemes, compared with 87.08% without these features and 85.4% using a support vector machine classifier. Finally, the robot execution achieved a task success rate of 93.5%, compared with baselines of behavioral cloning (78.3%) and a twin-delayed deep deterministic policy gradient with shaped rewards (82.6%). CONCLUSIONS: The results indicate that the proposed interaction features for the segmentation and classification of surgical tasks improve classification accuracy. The proposed method for learning surgemes from demonstrations exceeds popular methods for skill learning. The effectiveness of the proposed approach demonstrates the potential for future remote telemedicine on battlefields.
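A minimal sketch of the sequence-modeling component described here: classifying a demonstration segment into one of six surgemes from time-series interaction features. The recurrent architecture, feature dimensionality, and labels below are illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch: LSTM classifier mapping a segment of interaction
# features (e.g., kinematics + contact forces) to one of six surgemes.
import torch
import torch.nn as nn

class SurgemeClassifier(nn.Module):
    def __init__(self, n_features=16, hidden=64, n_surgemes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_surgemes)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])            # logits over surgeme labels

model = SurgemeClassifier()
segments = torch.randn(8, 120, 16)         # 8 segments, 120 time steps each
labels = torch.randint(0, 6, (8,))
loss = nn.CrossEntropyLoss()(model(segments), labels)
loss.backward()
print(float(loss))
```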


Subjects
Robotics, Surgery, Computer-Assisted, Humans, Robotics/methods, Algorithms, Surgery, Computer-Assisted/methods, Machine Learning
18.
Mil Med ; 188(Suppl 6): 480-487, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948270

ABSTRACT

INTRODUCTION: Increased complexity in robotic-assisted surgical system interfaces introduces problems with human-robot collaboration that result in excessive mental workload (MWL), adversely impacting a surgeon's task performance and increasing error probability. Real-time monitoring of the operator's MWL will aid in identifying when and how interventions can be best provided to moderate MWL. In this study, an MWL-based adaptive automation system is constructed and evaluated for its effectiveness during robotic-assisted surgery. MATERIALS AND METHODS: This study recruited 10 participants first to perform surgical tasks under different cognitive workload levels. Physiological signals were obtained and employed to build a real-time system for cognitive workload monitoring. To evaluate the effectiveness of the proposed system, 15 participants were recruited to perform the surgical task with and without the proposed system. The participants' task performance and perceived workload were collected and compared. RESULTS: The proposed neural network model achieved an accuracy of 77.9% in cognitive workload classification. In addition, better task performance and lower perceived workload were observed when participants completed the experimental task under the task condition supplemented with adaptive aiding using the proposed system. CONCLUSIONS: The proposed MWL monitoring system successfully diminished the perceived workload of participants and increased their task performance under high-stress conditions via interventions by a semi-autonomous suction tool. The preliminary results from the comparative study show the potential impact of automated adaptive aiding systems in enhancing surgical task performance via cognitive workload-triggered interventions in robotic-assisted surgery.
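The with/without-aid comparison described here is typically analyzed with a mixed-effects model using participant as a random effect. Below is a minimal sketch on placeholder data; the variable names, effect size, and model specification are illustrative assumptions, not the study's analysis.

```python
# Hypothetical sketch: mixed-effects model comparing task performance
# with vs. without the adaptive aid, with participant as a random effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
participants = np.repeat(np.arange(15), 2)               # each performs both conditions
condition = np.tile(["no_aid", "aid"], 15)
performance = rng.normal(70, 5, size=30) + (condition == "aid") * 5  # assumed +5 benefit

df = pd.DataFrame({"participant": participants, "condition": condition, "score": performance})
model = smf.mixedlm("score ~ condition", df, groups=df["participant"]).fit()
print(model.summary())
```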


Subjects
Robotic Surgical Procedures, Robotics, Humans, Task Performance and Analysis, Workload, Automation
19.
Mil Med ; 188(Suppl 6): 208-214, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948255

ABSTRACT

INTRODUCTION: U.S. Military healthcare providers increasingly perform prolonged casualty care because of operations in settings with prolonged evacuation times. Varied training and experience mean that this care may fall to providers unfamiliar with providing critical care. Telemedicine tools with audiovisual capabilities, artificial intelligence (AI), and augmented reality (AR) can enhance inexperienced personnel's competence and confidence when providing prolonged casualty care. Furthermore, implementing offline functionality provides assistance options in communications-limited settings. The intent of the Trauma TeleHelper for Operational Medical Procedure Support and Offline Network (THOMPSON) is to develop (1) a voice-controlled mobile application with video references for procedural guidance, (2) audio narration of each video using procedure mentoring scripts, and (3) an AI-guided intervention system using AR overlay and voice command to create immersive video modeling. These capabilities will be available offline and in downloadable format. MATERIALS AND METHODS: The Trauma THOMPSON platform is in development. Focus groups of subject matter experts will identify appropriate procedures and best practices. Procedural video recordings will be collected to develop reference materials for the Trauma THOMPSON mobile application and to train a machine learning algorithm on action recognition and anticipation. Finally, an efficacy evaluation of the application will be conducted in a simulated environment. RESULTS: Preliminary video collection has been initiated for tube thoracostomy, needle decompression, cricothyrotomy, intraosseous access, and tourniquet application. Initial results from the machine learning algorithm show action recognition and anticipation accuracies of 20.1% and 11.4%, respectively, in unscripted datasets "in the wild," notably on a limited dataset. This system performs over 100 times better than a random prediction. CONCLUSIONS: Developing a platform to provide real-time, offline support will deliver the benefits of synchronous expert advice within communications-limited and remote environments. Trauma THOMPSON has the potential to fill an important gap for clinical decision support tools in these settings.


Subjects
Augmented Reality, Decision Support Systems, Clinical, Humans, Artificial Intelligence, Communication, Algorithms
20.
Mil Med ; 188(Suppl 6): 674-681, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948279

ABSTRACT

INTRODUCTION: Between 5% and 20% of all combat-related casualties are attributed to burn wounds. A decrease in the mortality rate of burns by about 36% can be achieved with early treatment, but this is contingent upon accurate characterization of the burn. Precise burn injury classification is recognized as a crucial aspect of the medical artificial intelligence (AI) field. An autonomous AI system designed to analyze multiple characteristics of burns using modalities including ultrasound and RGB images is described. MATERIALS AND METHODS: A two-part dataset is created for the training and validation of the AI: in vivo B-mode ultrasound scans collected from porcine subjects (10,085 frames), and RGB images manually collected from web sources (338 images). The framework leverages an explanation system to corroborate and integrate burn experts' knowledge, suggesting new features and ensuring the validity of the model. Through the utilization of this framework, it is discovered that B-mode ultrasound classifiers can be enhanced by supplying textural features. More specifically, it is confirmed that statistical texture features extracted from ultrasound frames can increase the accuracy of the burn depth classifier. RESULTS: The system, with all included features selected using explainable AI, is capable of classifying burn depth with accuracy and average F1 above 80%. Additionally, the segmentation module has been found capable of segmenting with a mean global accuracy greater than 84% and a mean intersection-over-union score over 0.74. CONCLUSIONS: This work demonstrates the feasibility of accurate and automated burn characterization by AI and indicates that these systems can be improved with additional features when a human expert is combined with explainable AI. This is demonstrated on real data (human for segmentation and porcine for depth classification) and establishes the groundwork for further deep-learning thrusts in the area of burn analysis.
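Statistical texture features of the kind mentioned for augmenting the B-mode classifier are commonly derived from gray-level co-occurrence matrices (GLCM). A minimal sketch with scikit-image (0.19+ function names) follows; the distances, angles, and properties chosen are illustrative, not the study's exact feature set.

```python
# Hypothetical sketch: GLCM texture features from a B-mode ultrasound frame.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)   # placeholder B-mode frame

glcm = graycomatrix(frame, distances=[1, 3], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print({k: v.round(3) for k, v in features.items()})
```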


Subjects
Artificial Intelligence, Burns, Humans, Swine, Animals, Ultrasonography