Results 1 - 20 of 35
1.
Annu Rev Biomed Eng ; 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37832939

ABSTRACT

Assistive technologies (AT) enable people with disabilities to perform activities of daily living more independently, have greater access to community and healthcare services, and be more productive performing educational and/or employment tasks. Integrating artificial intelligence (AI) with various agents, including electronics, robotics, and software, has revolutionized AT, resulting in groundbreaking technologies such as mind-controlled exoskeletons, bionic limbs, intelligent wheelchairs, and smart home assistants. This article provides a review of various AI techniques that have helped those with physical disabilities, including brain-computer interfaces, computer vision, natural language processing, and human-computer interaction. The current challenges and future directions for AI-powered advanced technologies are also addressed. Expected final online publication date for the Annual Review of Biomedical Engineering, Volume 26 is May 2024. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.

2.
Sensors (Basel) ; 23(9)2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37177557

ABSTRACT

Previous studies in robotic-assisted surgery (RAS) have studied cognitive workload by modulating surgical task difficulty, and many of these studies have relied on self-reported workload measurements. However, the contributors to cognitive workload and their effects are complex and may not be sufficiently captured by changes in task difficulty alone. This study aims to understand how multi-task requirement contributes to the prediction of cognitive load in RAS under different task difficulties. Multimodal physiological signals (EEG, eye-tracking, HRV) were collected as university students performed simulated RAS tasks consisting of two types of surgical task difficulty under three different multi-task requirement levels. EEG spectral analysis was sensitive enough to distinguish the degree of cognitive workload under both surgical conditions (surgical task difficulty/multi-task requirement). In addition, eye-tracking measurements showed differences under both conditions, but significant differences in HRV were observed only in the multi-task requirement conditions. Multimodal neural network models achieved up to 79% accuracy for both surgical conditions.
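
The abstract above does not detail the feature pipeline; as a rough illustration of how EEG spectral features are commonly derived for this kind of workload classification, the sketch below computes band powers with Welch's method and trains a small classifier on synthetic epochs. The sampling rate, band limits, channel count, and classifier choice are assumptions, not the models used in the study.

```python
# Sketch: EEG band-power features for cognitive-workload classification.
# Illustrative only -- channel names, band limits, and the classifier are
# assumptions, not the pipeline used in the cited study.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg_epoch: np.ndarray) -> np.ndarray:
    """eeg_epoch: (n_channels, n_samples) -> flattened band-power features."""
    freqs, psd = welch(eeg_epoch, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))  # mean power per channel
    return np.concatenate(feats)

# Toy training loop on synthetic epochs (labels: 0 = low, 1 = high workload).
rng = np.random.default_rng(0)
X = np.stack([band_powers(rng.standard_normal((8, FS * 4))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```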


Subjects
Robotic Surgical Procedures , Humans , Task Performance and Analysis , Workload/psychology , Self Report , Neural Networks, Computer
3.
Hum Factors ; 65(5): 737-758, 2023 08.
Article in English | MEDLINE | ID: mdl-33241945

ABSTRACT

OBJECTIVE: The goal of this systematic literature review is to investigate the relationship between indirect physiological measurements and direct measures of situation awareness (SA). BACKGROUND: Across different environments and tasks, assessments of SA are often performed using techniques designed specifically to directly measure SA, such as SAGAT, SPAM, and/or SART. However, research suggests that indirect physiological sensing methods may also be capable of predicting SA. Currently, it is unclear which particular physiological approaches are sensitive to changes in SA. METHOD: Seven databases were searched using the PRISMA reporting guidelines. Eligibility criteria included human-subject experiments that used at least one direct SA assessment technique, as well as at least one physiological measurement. Information extracted from each article was the physiological metric(s), the direct SA measurement(s), the correlation between these two metrics, and the experimental task(s). All studies underwent a quality assessment. RESULTS: Twenty-five articles were included in this review. Eye tracking techniques were the most commonly used physiological measures, and correlations between conscious aspects of eye movement measures and direct SA scores were observed. Evidence for cardiovascular predictors of SA was mixed. EEG studies were too few to form strong conclusions, but were consistently positive. CONCLUSION: Further investigation is needed to methodically collect more relevant data and comprehensively model the relationships between a wider range of physiological measurements and direct assessments of SA. APPLICATION: This review will guide researchers and practitioners in methods to indirectly assess SA with sensors and highlight opportunities for future research on wearables and SA.


Subjects
Awareness , Eye Movements , Humans , Awareness/physiology , Reproducibility of Results , Forecasting
4.
Annu Rev Biomed Eng ; 23: 115-139, 2021 07 13.
Article in English | MEDLINE | ID: mdl-33770455

ABSTRACT

Telemedicine is perhaps the most rapidly growing area in health care. Approximately 15 million Americans receive medical assistance remotely every year. Yet rural communities face significant challenges in securing subspecialist care. In the United States, 25% of the population resides in rural areas, where less than 15% of physicians work. Current surgery residency programs do not adequately prepare surgeons for rural practice. Telementoring, wherein a remote expert guides a less experienced caregiver, has been proposed to address this challenge. Nonetheless, existing mentoring technologies are not widely available to rural communities, due to a lack of infrastructure and mentor availability. For this reason, some clinicians prefer simpler and more reliable technologies. This article presents past and current telementoring systems, with a focus on rural settings, and proposes a set of requirements for such systems. We conclude with a perspective on the future of telementoring systems and the integration of artificial intelligence within those systems.


Subjects
Mentoring , Surgeons , Telemedicine , Artificial Intelligence , Humans , Rural Population , United States
5.
Hum Factors ; : 187208221129940, 2022 Nov 11.
Article in English | MEDLINE | ID: mdl-36367971

ABSTRACT

OBJECTIVE: This study developed and evaluated a mental workload-based adaptive automation (MWL-AA) system that monitors surgeons' cognitive load and assists them during cognitively demanding tasks in robotic-assisted surgery (RAS). BACKGROUND: The introduction of RAS can overwhelm operators. Precise, continuous assessment of human mental workload (MWL) states is needed to identify when interventions should be delivered to moderate operators' MWL. METHOD: The MWL-AA presented in this study was a semi-autonomous suction tool. The first experiment recruited ten participants to perform surgical tasks under different MWL levels. Physiological responses were captured and used to develop a real-time multi-sensing model for MWL detection. The second experiment evaluated the effectiveness of the MWL-AA: nine brand-new surgical trainees performed the surgical task with and without the MWL-AA. Mixed effect models were used to compare task performance and objectively and subjectively measured MWL. RESULTS: The proposed system predicted high-MWL hemorrhage conditions with an accuracy of 77.9%. In the MWL-AA evaluation, the surgeons' gaze behaviors and brain activities suggested lower perceived MWL with the MWL-AA than without it. This was further supported by lower self-reported MWL and better task performance in the task condition with the MWL-AA. CONCLUSION: An MWL-AA system can reduce surgeons' workload and improve performance in a high-stress hemorrhaging scenario. The findings highlight the potential of utilizing MWL-AA to enhance the collaboration between autonomous systems and surgeons. Developing a robust and personalized MWL-AA is the first step toward additional use cases in future studies. APPLICATION: The proposed framework can be expanded and applied to more complex environments to improve human-robot collaboration.
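
The abstract reports mixed effect models comparing conditions with and without the MWL-AA. A minimal sketch of that style of analysis with statsmodels is shown below; the data frame, column names, and effect sizes are fabricated for illustration only.

```python
# Sketch: mixed-effects comparison of task performance with vs. without
# adaptive aiding. Data frame and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_trials = 9, 6
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "aiding": np.tile([0, 1], n_subj * n_trials // 2),  # 0 = no MWL-AA, 1 = MWL-AA
})
# Simulate better performance scores under aiding, with subject-level offsets.
df["performance"] = (
    70 + 5 * df["aiding"]
    + rng.normal(0, 3, size=len(df))
    + rng.normal(0, 2, size=n_subj)[df["subject"]]
)

# Random intercept per subject; fixed effect of the aiding condition.
model = smf.mixedlm("performance ~ aiding", df, groups=df["subject"]).fit()
print(model.summary())
```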

6.
Exp Brain Res ; 238(3): 537-550, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31974755

ABSTRACT

Electroencephalography (EEG) activity in the mu frequency band (8-13 Hz) is suppressed during both gesture performance and observation. However, it is not clear if or how particular characteristics within the kinematic execution of gestures map onto dynamic changes in mu activity. Mapping the time course of gesture kinematics onto that of mu activity could help understand which aspects of gestures capture attention and aid in the classification of communicative intent. In this work, we test whether the timing of inflection points within gesture kinematics predicts the occurrence of oscillatory mu activity during passive gesture observation. The timing for salient features of performed gestures in video stimuli was determined by isolating inflection points in the hands' motion trajectories. Participants passively viewed the gesture videos while continuous EEG data was collected. We used wavelet analysis to extract mu oscillations at 11 Hz at central and occipital electrodes. We used linear regression to test for associations between the timing of inflection points in motion trajectories and mu oscillations that generalized across gesture stimuli. Separately, we also tested whether inflection point occurrences evoked mu/alpha responses that generalized across participants. Across all gestures and inflection points, and pooled across participants, peaks in 11 Hz EEG waveforms were detected 465 and 535 ms after inflection points at occipital and central electrodes, respectively. A regression model showed that inflection points in the motion trajectories strongly predicted subsequent mu oscillations (p < 0.01); effects were weaker and non-significant for low (17 Hz) and high (21 Hz) beta activity. When segmented by inflection point occurrence rather than stimulus onset and testing participants as a random effect, inflection points evoked mu and beta activity from 308 to 364 ms at central electrodes, and broad activity from 226 to 800 ms at occipital electrodes. The results suggest that inflection points in gesture trajectories elicit coordinated activity in the visual and motor cortices, with prominent activity in the mu/alpha frequency band and extending into the beta frequency band. The time course of activity indicates that visual processing drives subsequent activity in the motor cortex during gesture processing, with a lag of approximately 80 ms.
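
For readers unfamiliar with the wavelet step, the sketch below extracts instantaneous 11 Hz power from a single EEG channel with a complex Morlet wavelet, in the spirit of the mu-band analysis described above. The sampling rate, wavelet width, and test signal are assumptions rather than the study's parameters.

```python
# Sketch: single-frequency (11 Hz) Morlet-wavelet power from one EEG channel.
# Sampling rate, wavelet width, and the synthetic signal are assumptions.
import numpy as np

FS = 500          # sampling rate in Hz (assumed)
FREQ = 11.0       # target mu-band frequency
N_CYCLES = 7      # wavelet width in cycles (assumed)

def morlet_power(signal: np.ndarray) -> np.ndarray:
    """Return instantaneous power of `signal` at FREQ via complex Morlet convolution."""
    sigma_t = N_CYCLES / (2 * np.pi * FREQ)              # temporal std of the Gaussian
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / FS)     # wavelet support
    wavelet = np.exp(2j * np.pi * FREQ * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))     # energy normalization
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

# Toy example: 10 s of noise with an 11 Hz burst between 4 s and 6 s.
t = np.arange(0, 10, 1 / FS)
eeg = np.random.randn(t.size) * 0.5
eeg[(t > 4) & (t < 6)] += np.sin(2 * np.pi * FREQ * t[(t > 4) & (t < 6)])
power = morlet_power(eeg)
print("mean power inside vs. outside burst:",
      power[(t > 4) & (t < 6)].mean(), power[t < 4].mean())
```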


Subjects
Attention/physiology , Brain Waves/physiology , Electrophysiological Phenomena/physiology , Gestures , Adolescent , Adult , Electroencephalography/methods , Female , Humans , Male , Mirror Neurons/physiology , Motor Cortex/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Young Adult
7.
Surg Innov ; 20(4): 377-84, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23037804

ABSTRACT

BACKGROUND: The standard practice in the operating room (OR) is having a surgical technician deliver surgical instruments to the surgeon quickly and inexpensively, as required. This human "in the loop" system may result in mistakes (eg, missing information, ambiguity of instructions, and delays). OBJECTIVE: Errors can be reduced or eliminated by integrating information technology (IT) and cybernetics into the OR. Automatic acquisition, processing, and interpretation of gestures and voice allow interaction with these new systems without disturbing the normal flow of surgery. METHODS: This article describes the development of a cyber-physical management system (CPS), including a robotic scrub nurse, to support surgeons by passing surgical instruments during surgery as required and recording counts of surgical instruments into a personal health record (PHR). The robot responds to hand signals and voice messages detected through sophisticated computer vision and data mining techniques. RESULTS: The CPS was tested during a mock surgery in the OR. The in situ experiment showed that the robot recognized hand gestures reliably (with an accuracy of 97%), that it can retrieve instruments as close as 25 mm, and that the total delivery time was less than 3 s on average. CONCLUSIONS: This online health tool allows the exchange of clinical and surgical information with electronic medical record-based and PHR-based applications among different hospitals, regardless of the viewer style. The CPS has the potential to be adopted in the OR to handle surgical instruments and track them in a safe and accurate manner, relieving the human scrub technician of these tasks.
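
As a hedged illustration of the gesture-to-delivery logic such a cyber-physical scrub nurse needs, the sketch below maps recognized gesture labels to instrument-delivery commands and keeps a simple instrument count. The gesture vocabulary, confidence threshold, and logging format are hypothetical, not the system described in the article.

```python
# Sketch: dispatching a recognized hand gesture to an instrument-delivery
# action and logging the delivered instrument. Labels are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

GESTURE_TO_INSTRUMENT: Dict[str, str] = {
    "open_palm": "scalpel",
    "two_fingers": "scissors",
    "fist": "retractor",
}

@dataclass
class InstrumentLog:
    """Minimal stand-in for the instrument-count record kept during surgery."""
    delivered: List[str] = field(default_factory=list)

    def record(self, instrument: str) -> None:
        self.delivered.append(instrument)

def handle_gesture(label: str, confidence: float, log: InstrumentLog,
                   threshold: float = 0.9) -> str:
    """Return the delivery command for a recognized gesture, or ask to repeat."""
    if confidence < threshold or label not in GESTURE_TO_INSTRUMENT:
        return "REQUEST_REPEAT"
    instrument = GESTURE_TO_INSTRUMENT[label]
    log.record(instrument)
    return f"DELIVER {instrument}"

log = InstrumentLog()
print(handle_gesture("open_palm", 0.97, log))   # DELIVER scalpel
print(handle_gesture("fist", 0.55, log))        # REQUEST_REPEAT
print(log.delivered)
```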


Subjects
Cybernetics/instrumentation , Operating Rooms , Robotics/instrumentation , Surgery, Computer-Assisted/instrumentation , Surgical Instruments , Cybernetics/methods , Equipment Design , Gestures , Humans , Surgical Assistants , Pattern Recognition, Automated , Software , Surgery, Computer-Assisted/methods
8.
Medicina (B Aires) ; 73(6): 539-42, 2013.
Article in English | MEDLINE | ID: mdl-24356263

ABSTRACT

This paper discusses the challenges and innovations related to the use of telementoring systems in the operating room. Most of the systems presented leverage three types of interaction channels: audio, visual, and physical. The audio channel enables the mentor to verbally instruct the trainee, and allows the trainee to ask questions. The visual channel is used to deliver annotations, alerts, and other messages graphically to the trainee during the surgery. These visual representations are often displayed through a telestrator. The physical channel has been used in laparoscopic procedures by partially controlling the laparoscope through force-feedback. While in face-to-face instruction the mentor produces gestures to convey certain aspects of the surgical instruction, telementoring systems offer no equivalent form of physical interaction between mentor and trainee in open surgical procedures. Even though the trend is toward more minimally invasive surgery (MIS), trauma surgeries are still necessary, and timely initial resuscitation and stabilization of the patient are crucial. This paper presents a preliminary study conducted at the Indiana University Medical School and Purdue University, where initial lexicons of surgical instructive gestures (SIGs) were determined through systematic observation of mentor and trainee operating together. The paper concludes with potential ways to convey gestural information through surgical robots.


Subjects
Education, Distance/methods , Education, Medical, Continuing/methods , Operating Rooms , Robotics/methods , Surgery, Computer-Assisted/education , Telemedicine/methods , Audiovisual Aids , Gestures , Humans , Inventions , Man-Machine Systems , Mentors , Robotics/education , Teaching Materials , Telemedicine/instrumentation
9.
IEEE Trans Artif Intell ; 4(6): 1472-1483, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38090475

ABSTRACT

Zero-shot learning (ZSL) is a paradigm in transfer learning that aims to recognize unknown categories from a mere description of them. The problem of ZSL has been thoroughly studied in the domain of static object recognition; however, ZSL for dynamic events (ZSER), such as activities and gestures, has hardly been investigated. In this context, this paper addresses ZSER by relying on semantic attributes of events to transfer the learned knowledge from seen classes to unseen ones. First, we utilized the Amazon Mechanical Turk platform to create the first attribute-based gesture dataset, referred to as ZSGL, comprising the categories present in the MSRC and Italian gesture datasets. Overall, our ZSGL dataset consisted of 26 categories, 65 discriminative attributes, 16 attribute annotations, and 400 examples per category. We used trainable recurrent networks and 3D CNNs to learn the spatio-temporal features. Next, we propose a simple yet effective end-to-end approach for ZSER, referred to as Joint Sequential Semantic Encoder (JSSE), to explore temporal patterns, to efficiently represent events in the latent space, and to simultaneously optimize for both the semantic and classification tasks. We evaluated our model on ZSGL and two action datasets (UCF and HMDB) and compared the performance of JSSE against several existing baselines in four experimental conditions: (1) within-category, (2) across-category, (3) closed-set, and (4) open-set. Results show that JSSE considerably outperforms (p < 0.05) the other approaches and performs favorably on both action datasets in all experimental conditions.
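
The abstract does not specify the JSSE architecture; the sketch below illustrates the general idea of attribute-based zero-shot event recognition with a recurrent encoder trained jointly on attribute prediction and seen-class classification. Layer sizes, loss weighting, and the data are assumptions, not the published model.

```python
# Sketch: a sequence encoder trained jointly to predict semantic attributes and
# class labels, illustrating attribute-based zero-shot gesture recognition.
import torch
import torch.nn as nn

N_ATTRS, N_SEEN_CLASSES, FEAT_DIM, HIDDEN = 65, 20, 128, 256

class AttributeSeqEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(FEAT_DIM, HIDDEN, batch_first=True)
        self.to_attrs = nn.Linear(HIDDEN, N_ATTRS)            # semantic head
        self.to_logits = nn.Linear(N_ATTRS, N_SEEN_CLASSES)   # classification head

    def forward(self, x):                 # x: (batch, time, FEAT_DIM)
        _, h = self.rnn(x)                # h: (1, batch, HIDDEN)
        attrs = torch.sigmoid(self.to_attrs(h[-1]))
        return attrs, self.to_logits(attrs)

model = AttributeSeqEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
attr_loss, cls_loss = nn.BCELoss(), nn.CrossEntropyLoss()

# One toy optimization step on random data.
x = torch.randn(8, 30, FEAT_DIM)                    # 8 clips, 30 frames each
true_attrs = torch.randint(0, 2, (8, N_ATTRS)).float()
labels = torch.randint(0, N_SEEN_CLASSES, (8,))
attrs, logits = model(x)
loss = attr_loss(attrs, true_attrs) + cls_loss(logits, labels)  # joint objective
opt.zero_grad(); loss.backward(); opt.step()
print("joint loss:", float(loss))

# At test time, an unseen class would be scored by comparing predicted
# attributes against its attribute description (e.g., nearest neighbour).
```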

10.
Mil Med ; 188(Suppl 6): 412-419, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948233

ABSTRACT

INTRODUCTION: Remote military operations require rapid response times for effective relief and critical care. Yet the military theater operates under austere conditions, so communication links are unreliable and subject to physical and virtual attacks and degradation at unpredictable times. Immediate medical care at these austere locations requires semi-autonomous teleoperated systems, which enable the completion of medical procedures even under interrupted networks while isolating the medics from the dangers of the battlefield. However, to achieve autonomy for complex surgical and critical care procedures, robots require extensive programming or massive libraries of surgical skill demonstrations to learn effective policies using machine learning algorithms. Although such datasets are achievable for simple tasks, providing a large number of demonstrations for surgical maneuvers is not practical. This article presents a method for learning from demonstration that combines knowledge from demonstrations to eliminate reward shaping in reinforcement learning (RL). In addition to reducing the data required for training, the self-supervised nature of RL, in conjunction with expert knowledge-driven rewards, produces more generalizable policies tolerant to dynamic environment changes. A multimodal representation for interaction enables learning complex contact-rich surgical maneuvers. The effectiveness of the approach is shown using the cricothyroidotomy task, a standard critical care procedure for opening the airway. We also provide a method for segmenting the teleoperator's demonstration into subtasks and classifying the subtasks using sequence modeling. MATERIALS AND METHODS: A database of demonstrations for the cricothyroidotomy task was collected, comprising six fundamental maneuvers referred to as surgemes. The dataset was collected by teleoperating a collaborative robotic platform (SuperBaxter) with modified surgical grippers. Two learning models were then developed for processing the dataset: one for automatic segmentation of the task demonstrations into a sequence of surgemes, and a second for classifying each segment into labeled surgemes. Finally, a multimodal off-policy RL approach with rewards learned from demonstrations was developed to learn surgeme execution from these demonstrations. RESULTS: The task segmentation model has an accuracy of 98.2%. The surgeme classification model using the proposed interaction features achieved a classification accuracy of 96.25% averaged across all surgemes, compared to 87.08% without these features and 85.4% using a support vector machine classifier. Finally, the robot execution achieved a task success rate of 93.5%, compared to baselines of behavioral cloning (78.3%) and a twin-delayed deep deterministic policy gradient with shaped rewards (82.6%). CONCLUSIONS: The results indicate that the proposed interaction features for the segmentation and classification of surgical tasks improve classification accuracy. The proposed method for learning surgemes from demonstrations exceeds popular methods for skill learning. The effectiveness of the proposed approach demonstrates the potential for future remote telemedicine on battlefields.
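
One way to avoid hand-crafted reward shaping, in the spirit of the learning-from-demonstration approach described above, is to score states by their similarity to demonstrated states. The sketch below shows that idea in isolation; the state encoding, kernel, and bandwidth are assumptions and not the article's method.

```python
# Sketch: a demonstration-derived reward signal of the kind used to avoid
# hand-crafted reward shaping in RL from demonstrations. Illustrative only.
import numpy as np

class DemoReward:
    """Reward a state by its similarity to the closest demonstrated state."""
    def __init__(self, demo_states: np.ndarray, bandwidth: float = 0.5):
        self.demo_states = demo_states          # (n_demo_steps, state_dim)
        self.bandwidth = bandwidth

    def __call__(self, state: np.ndarray) -> float:
        d2 = np.sum((self.demo_states - state) ** 2, axis=1)
        return float(np.exp(-d2.min() / (2 * self.bandwidth ** 2)))

# Toy usage: demonstrations trace a line; states near it earn higher reward.
demo = np.stack([np.linspace(0, 1, 50), np.linspace(0, 1, 50)], axis=1)
reward_fn = DemoReward(demo)
print(reward_fn(np.array([0.5, 0.5])))   # on the demonstrated path -> ~1.0
print(reward_fn(np.array([0.9, 0.1])))   # far from demonstrations -> small
```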


Subjects
Robotics , Surgery, Computer-Assisted , Humans , Robotics/methods , Algorithms , Surgery, Computer-Assisted/methods , Machine Learning
11.
Mil Med ; 188(Suppl 6): 480-487, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37948270

ABSTRACT

INTRODUCTION: Increased complexity in robotic-assisted surgical system interfaces introduces problems with human-robot collaboration that result in excessive mental workload (MWL), adversely impacting a surgeon's task performance and increasing error probability. Real-time monitoring of the operator's MWL will aid in identifying when and how interventions can best be provided to moderate MWL. In this study, an MWL-based adaptive automation system is constructed and evaluated for its effectiveness during robotic-assisted surgery. MATERIALS AND METHODS: This study first recruited 10 participants to perform surgical tasks under different cognitive workload levels. Physiological signals were obtained and employed to build a real-time system for cognitive workload monitoring. To evaluate the effectiveness of the proposed system, 15 participants were recruited to perform the surgical task with and without the proposed system. The participants' task performance and perceived workload were collected and compared. RESULTS: The proposed neural network model achieved an accuracy of 77.9% in cognitive workload classification. In addition, better task performance and lower perceived workload were observed when participants completed the experimental task with adaptive aiding from the proposed system. CONCLUSIONS: The proposed MWL monitoring system successfully diminished the perceived workload of participants and increased their task performance under high-stress conditions via interventions by a semi-autonomous suction tool. The preliminary results from the comparative study show the potential impact of automated adaptive aiding systems in enhancing surgical task performance via cognitive workload-triggered interventions in robotic-assisted surgery.


Subjects
Robotic Surgical Procedures , Robotics , Humans , Task Performance and Analysis , Workload , Automation
12.
Sci Rep ; 12(1): 4504, 2022 03 16.
Article in English | MEDLINE | ID: mdl-35296714

ABSTRACT

Adoption of robotic-assisted surgery has steadily increased as it improves the surgeon's dexterity and visualization. Despite these advantages, the success of a robotic procedure is highly dependent on the availability of a proficient surgical assistant who can collaborate with the surgeon. With the introduction of novel medical devices, the surgeon has taken over some of the surgical assistant's tasks to increase their independence. This, however, has also resulted in surgeons experiencing higher levels of cognitive demand that can lead to reduced performance. In this work, we propose a neurotechnology-based semi-autonomous assistant to relieve the main surgeon of the additional cognitive demands of a critical support task: blood suction. To create a more synergistic collaboration between the surgeon and the robotic assistant, a real-time cognitive workload assessment system based on EEG signals and eye-tracking was introduced. A computational experiment demonstrates that cognitive workload can be effectively detected with 80% accuracy. We then show how surgical performance can be improved by using the neurotechnological autonomous assistant in a closed feedback loop to prevent states of high cognitive demand. Our findings highlight the potential of utilizing real-time cognitive workload assessments to improve the collaboration between an autonomous algorithm and the surgeon.
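
A closed feedback loop of the kind described above needs a rule for switching assistance on and off from noisy workload estimates. The sketch below uses a moving average with hysteresis (separate on/off thresholds) so the assistant does not toggle rapidly; the thresholds, window length, and predictor interface are assumptions, not the published system.

```python
# Sketch: a closed-loop rule that activates a semi-autonomous assistant when
# predicted cognitive workload stays high, and deactivates it when it recovers.
from collections import deque
from typing import Deque

class AdaptiveAssistant:
    def __init__(self, on_threshold: float = 0.7, off_threshold: float = 0.4,
                 window: int = 5):
        self.on_threshold = on_threshold
        self.off_threshold = off_threshold
        self.recent: Deque[float] = deque(maxlen=window)
        self.active = False

    def update(self, workload_probability: float) -> bool:
        """Feed one real-time workload estimate (0..1); return assistant state."""
        self.recent.append(workload_probability)
        avg = sum(self.recent) / len(self.recent)
        if not self.active and avg >= self.on_threshold:
            self.active = True          # sustained high workload -> assist
        elif self.active and avg <= self.off_threshold:
            self.active = False         # workload recovered -> hand control back
        return self.active

assistant = AdaptiveAssistant()
for p in [0.2, 0.5, 0.8, 0.9, 0.85, 0.6, 0.3, 0.2, 0.1]:
    print(p, assistant.update(p))
```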


Subjects
Robotic Surgical Procedures , Robotics , Surgeons , Humans , Robotic Surgical Procedures/methods , Suction , Workload
13.
Prehosp Disaster Med ; 37(1): 71-77, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35177133

ABSTRACT

BACKGROUND: New care paradigms are required to enable remote life-saving interventions (RLSIs) in extreme environments such as disaster settings. Informatics may assist through just-in-time expert remote telementoring (RTM) or video-modelling (VM). Currently, RTM relies on real-time communication that may not be reliable in some locations, especially if communications fail. Neither technique has been extensively developed, however, and both may need to be performed by inexperienced providers to save lives. A pilot comparison was thus conducted. METHODS: Procedure-naïve Search-and-Rescue Technicians (SAR-Techs) performed a tube thoracostomy (TT) on a surgical simulator, randomly allocated to RTM or VM. The VM group watched a pre-prepared video illustrating TT immediately beforehand, while the RTM group was remotely guided by an expert in real time. Standard outcomes included success, safety, and tube security for the TT procedure. RESULTS: There were no differences in experience between the groups. Of the 13 SAR-Techs randomized to VM, 12/13 (92%) placed the TT successfully, safely, and secured it properly, while 100% (11/11) of the TTs placed by the RTM group were successful, safe, and secure. Statistically, there was no difference (P = 1.000) between RTM and VM in safety, success, or tube security. However, with VM, one subject cut himself, one did not puncture the pleura, and one had barely adequate placement; there were no such issues in the mentored group. Total time was significantly faster using RTM (P = .02). However, if time-to-watch was discounted, VM was quicker (P < .001). CONCLUSIONS: Randomized evaluation revealed that both paradigms have strengths. If VM can be utilized during "travel time," it is quicker but does not facilitate troubleshooting. On the other hand, RTM had no errors in TT placement and facilitated guidance and remediation by the mentor, presumably avoiding failure, increasing safety, and potentially providing psychological support. Ultimately, both techniques appear to have merit and may be complementary, justifying continued research into the human factors of performing RLSIs in extreme environments, which are likely to be needed in natural and man-made disasters.
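
For readers who want to reproduce this style of pilot comparison, the sketch below runs a Fisher exact test on the reported success counts and a Welch t-test on completion times. The success counts match those quoted above; the per-subject times are synthetic placeholders, not the study data.

```python
# Sketch: success-proportion and completion-time comparison for two small
# groups. Success counts are taken from the abstract; times are fabricated.
import numpy as np
from scipy import stats

# Success counts: 12/13 successful in VM, 11/11 in RTM.
table = [[12, 1],   # VM: success, failure
         [11, 0]]   # RTM: success, failure
odds_ratio, p_success = stats.fisher_exact(table)
print("success comparison, Fisher exact p =", round(p_success, 3))

# Total completion times (seconds) -- synthetic values for the two groups.
rng = np.random.default_rng(2)
vm_times = rng.normal(300, 40, size=13)
rtm_times = rng.normal(250, 40, size=11)
t_stat, p_time = stats.ttest_ind(rtm_times, vm_times, equal_var=False)
print("time comparison, Welch t-test p =", round(p_time, 3))
```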


Subjects
Chest Tubes , Thoracostomy , Humans , Pilot Projects , Thoracostomy/methods
14.
IEEE Trans Neural Syst Rehabil Eng ; 28(4): 1032-1041, 2020 04.
Article in English | MEDLINE | ID: mdl-31841416

ABSTRACT

Individuals who are blind adopt multiple procedures to tactually explore images. Automatically recognizing and classifying users' exploration behaviors is the first step towards the development of an intelligent system that could assist users to explore images more efficiently. In this paper, a computational framework was developed to classify different procedures used by blind users during image exploration. Translation-, rotation-, and scale-invariant features were extracted from the trajectories of users' movements. These features were divided into numerical and logical features and fed into neural networks. More specifically, we trained spiking neural networks (SNNs) to further encode the numerical features as model strings. The proposed framework employed a distance-based classification scheme to determine the final class/label of the exploratory procedures. Dempster-Shafer Theory (DST) was applied to integrate the distances obtained from all the features. Through experiments with different spiking neuron dynamics, the proposed framework achieved good performance, with 95.89% classification accuracy. It is highly effective in encoding and classifying spatio-temporal data compared with Dynamic Time Warping and Hidden Markov Models, which reached 61.30% and 28.70% accuracy, respectively. The proposed framework serves as a fundamental building block for the development of intelligent interfaces, enhancing the image exploration experience for the blind.
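
The DST fusion step can be illustrated with Dempster's rule of combination. The sketch below combines two feature-wise mass functions over a toy set of exploratory procedures; the class names and mass values are made up for illustration and are not taken from the study.

```python
# Sketch: Dempster's rule of combination for fusing two feature-wise belief
# assignments over exploratory-procedure classes. Mass values are made up.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions over frozenset focal elements (Dempster's rule)."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; masses cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two features each assign mass over {lateral, enclosure} exploratory procedures.
LAT, ENC = frozenset({"lateral"}), frozenset({"enclosure"})
BOTH = LAT | ENC
m_feature1 = {LAT: 0.6, ENC: 0.1, BOTH: 0.3}
m_feature2 = {LAT: 0.5, ENC: 0.2, BOTH: 0.3}
print(combine(m_feature1, m_feature2))
```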


Subjects
Exploratory Behavior , Models, Neurological , Humans , Neural Networks, Computer , Neurons
15.
Mil Med ; 185(Suppl 1): 67-72, 2020 01 07.
Article in English | MEDLINE | ID: mdl-32074324

ABSTRACT

INTRODUCTION: Hemorrhage control is a basic task required of first responders and typically requires technical interventions during stressful circumstances. Remote telementoring (RTM) utilizes information technology to guide inexperienced providers, but when this is useful remains undefined. METHODS: Military medics were randomized to mentoring or not by an experienced subject matter expert during the application of a wound clamp (WC) to a simulated bleed. Inexperienced, nonmentored medics were given a 30-second safety briefing; mentored medics were not. Objective outcomes were time to task completion and success in arresting simulated bleeding. RESULTS: Thirty-three medics participated (16 mentored and 17 nonmentored). All (100%) successfully applied the WC to arrest the simulated hemorrhage. RTM significantly slowed hemorrhage control (P < 0.001): 40.4 ± 12.0 seconds in the mentored group versus 15.2 ± 10.3 seconds in the nonmentored group. On the posttask questionnaire, medics subjectively rated the difficulty of wound clamping at 1.7/10 (10 being extremely hard). DISCUSSION: WC application appeared to be an easily acquired technique that was effective in controlling simulated extremity exsanguination, such that RTM, while feasible, did not improve outcomes. Limitations were the lack of true stress and the use of simulation for the task. Future research should focus on determining when RTM is useful and when it is not required.


Subjects
Emergency Medical Technicians/standards , Hemorrhage/therapy , Surgical Instruments , Wounds and Injuries/therapy , Clinical Competence/standards , Clinical Competence/statistics & numerical data , Emergency Medical Technicians/statistics & numerical data , Hemorrhage/prevention & control , Humans , Mentoring/standards , Mentoring/statistics & numerical data , Surveys and Questionnaires , Wounds and Injuries/complications
16.
Mil Med ; 185(Suppl 1): 513-520, 2020 01 07.
Article in English | MEDLINE | ID: mdl-32074347

ABSTRACT

INTRODUCTION: Point-of-injury (POI) care requires immediate specialized assistance, but delays and expertise lapses can lead to complications. In such scenarios, telementoring can benefit health practitioners by transmitting guidance from remote specialists. However, current telementoring systems are not appropriate for POI care. This article clinically evaluates our System for Telementoring with Augmented Reality (STAR), a novel telementoring system based on an augmented reality head-mounted display. The system is portable and self-contained, and it displays virtual surgical guidance onto the operating field. These capabilities can facilitate telementoring in POI scenarios while mitigating the limitations of conventional telementoring systems. METHODS: Twenty participants performed leg fasciotomies on cadaveric specimens under one of two experimental conditions: telementoring using STAR, or no telementoring after reviewing the procedure beforehand. An expert surgeon evaluated the participants' performance in terms of completion time, number of errors, and procedure-related scores. Additional metrics included a self-reported confidence score and postexperiment questionnaires. RESULTS: STAR effectively delivered surgical guidance to nonspecialist health practitioners: participants using STAR performed fewer errors and obtained higher procedure-related scores. CONCLUSIONS: This work validates STAR as a viable surgical telementoring platform, which could be further explored to aid in scenarios where life-saving care must be delivered in a prehospital setting.


Subjects
Education, Medical, Continuing/standards , Fasciotomy/methods , Mentoring/standards , Telemedicine/standards , Augmented Reality , Cadaver , Education, Medical, Continuing/methods , Education, Medical, Continuing/statistics & numerical data , Fasciotomy/statistics & numerical data , Humans , Indiana , Mentoring/methods , Mentoring/statistics & numerical data , Program Evaluation/methods , Telemedicine/methods , Telemedicine/statistics & numerical data
17.
Surgery ; 167(4): 724-731, 2020 04.
Article in English | MEDLINE | ID: mdl-31916990

ABSTRACT

BACKGROUND: The surgical workforce, particularly in rural regions, needs novel approaches to reinforce the skills and confidence of health practitioners. Although conventional telementoring systems have proven beneficial in addressing this gap, the benefits of augmented reality-based telementoring platforms for the coaching and confidence of medical personnel have yet to be evaluated. METHODS: A total of 20 participants were guided by remote expert surgeons to perform leg fasciotomies on cadavers under one of two conditions: (1) telementoring (with our System for Telementoring with Augmented Reality) or (2) independently reviewing the procedure beforehand. Using the Individual Performance Score and the Weighted Individual Performance Score, two on-site expert surgeons evaluated the participants. Postexperiment metrics included number of errors, procedure completion time, and self-reported confidence scores. A total of six objective measurements were obtained to describe the self-reported confidence scores and the overall quality of the coaching. Additional analyses were performed based on the participants' expertise level. RESULTS: Participants using the System for Telementoring with Augmented Reality received 10% greater Weighted Individual Performance Scores (P = .03) and performed 67% fewer errors (P = .04). Moreover, participants with lower surgical expertise who used the System for Telementoring with Augmented Reality received 17% greater Individual Performance Scores (P = .04) and 32% greater Weighted Individual Performance Scores (P < .01), and performed 92% fewer errors (P < .001). In addition, participants using the System for Telementoring with Augmented Reality reported 25% more confidence in all evaluated aspects (P < .03). On average, participants using the System for Telementoring with Augmented Reality received augmented reality guidance 19 times and were guided for 47% of their total task completion time. CONCLUSION: Participants using the System for Telementoring with Augmented Reality performed leg fasciotomies with fewer errors and received better performance scores. In addition, they reported being more confident when performing fasciotomies under telementoring. Augmented reality head-mounted display-based telementoring successfully provided confidence and coaching to medical personnel.


Subjects
Augmented Reality , General Surgery/education , Mentoring/methods , Telemedicine/methods , Adult , Female , Humans , Male
18.
NPJ Digit Med ; 3: 75, 2020.
Article in English | MEDLINE | ID: mdl-32509972

ABSTRACT

Telementoring platforms can help transfer surgical expertise remotely. However, most telementoring platforms are not designed to assist in austere, pre-hospital settings. This paper evaluates the System for Telementoring with Augmented Reality (STAR), a portable and self-contained telementoring platform based on an augmented reality head-mounted display (ARHMD). The system is designed to assist in austere scenarios: a stabilized first-person view of the operating field is sent to a remote expert, who creates surgical instructions that a local first responder wearing the ARHMD can visualize as three-dimensional models projected onto the patient's body. We hypothesized that remote guidance with STAR would lead to better performance of a surgical procedure than remote audio-only guidance. Remote expert surgeons guided first responders through training cricothyroidotomies in a simulated austere scenario, and on-site surgeons evaluated the participants using standardized evaluation tools. The evaluation comprised completion time and technique performance on specific cricothyroidotomy steps. The analyses were also performed considering the participants' years of experience as first responders and their experience performing cricothyroidotomies. A linear mixed model analysis showed that using STAR was associated with higher procedural and non-procedural scores and overall better performance. Additionally, a binary logistic regression analysis showed that using STAR was associated with safer and more successful executions of cricothyroidotomies. This work demonstrates that remote mentors can use STAR to provide first responders with guidance and surgical knowledge, and it represents a first step towards the adoption of ARHMDs to convey clinical expertise remotely in austere scenarios.
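
As a worked illustration of the binary logistic regression reported above, the sketch below regresses procedure success on guidance condition with statsmodels. The data are synthetic and the assumed success rates are placeholders, not the study's results.

```python
# Sketch: binary logistic regression of procedure success on guidance
# condition. Data frame, column names, and rates are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 40
condition = rng.integers(0, 2, size=n)            # 0 = audio-only, 1 = STAR
p_success = np.where(condition == 1, 0.85, 0.55)  # assumed success rates
success = rng.binomial(1, p_success)
df = pd.DataFrame({"condition": condition, "success": success})

model = smf.logit("success ~ condition", df).fit(disp=False)
print(model.summary())
print("odds ratio for STAR vs. audio-only:", float(np.exp(model.params["condition"])))
```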

19.
Mil Med ; 184(Suppl 1): 57-64, 2019 03 01.
Article in English | MEDLINE | ID: mdl-30901394

ABSTRACT

Combat trauma injuries require urgent and specialized care. When patient evacuation is infeasible, critical life-saving care must be given at the point of injury in real time and under the austere conditions associated with forward operating bases. Surgical telementoring allows local generalists to receive remote instruction from specialists thousands of miles away. However, current telementoring systems have limited annotation capabilities and lack direct visualization of the future result of the specialist's surgical actions. The System for Telementoring with Augmented Reality (STAR) is a surgical telementoring platform that improves the transfer of medical expertise by integrating a full-size interaction table, on which mentors create graphical annotations, with augmented reality (AR) devices that display surgical annotations directly onto the generalist's field of view. Along with an explanation of the system's features, this paper provides results of user studies that validate STAR as a comprehensive AR surgical telementoring platform. In addition, potential future applications of STAR are discussed: desired features that state-of-the-art AR medical telementoring platforms should offer when combat trauma scenarios are the focus of such technologies.


Subjects
Mentoring/methods , Remote Consultation/methods , Teaching/standards , Virtual Reality , Humans , Teaching/trends
20.
Simul Healthc ; 14(1): 59-66, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30395078

ABSTRACT

INTRODUCTION: Surgical telementoring connects expert mentors with trainees performing urgent care in austere environments. However, such environments impose unreliable network quality, with significant latency and low bandwidth. We have developed an augmented reality telementoring system that includes future step visualization of the medical procedure. Pregenerated video instructions of the procedure are dynamically overlaid onto the trainee's view of the operating field when the network connection with a mentor is unreliable. METHODS: Our future step visualization uses a tablet suspended above the patient's body, through which the trainee views the operating field. Before trainee use, an expert records a "future library" of step-by-step video footage of the operation. Videos are displayed to the trainee as semitransparent graphical overlays. We conducted a study in which participants completed a cricothyroidotomy under telementored guidance. Participants used one of two telementoring conditions: a conventional telestrator or our system with future step visualization. During the operation, the connection between trainee and mentor was bandwidth throttled. Recorded metrics were idle time ratio, recall error, and task performance. RESULTS: Participants in the future step visualization condition had a 48% smaller idle time ratio (14.5% vs. 27.9%, P < 0.001), 26% less recall error (119 vs. 161, P = 0.042), and 10% higher task performance scores (rater 1 = 90.83 vs. 81.88, P = 0.008; rater 2 = 88.54 vs. 79.17, P = 0.042) than participants in the telestrator condition. CONCLUSIONS: Future step visualization in surgical telementoring is an important fallback mechanism when the trainee/mentor network connection is poor, and it is a key step towards semiautonomous and, eventually, completely mentor-free medical assistance systems.
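
The core display operation behind future step visualization is compositing a pregenerated instruction frame as a semitransparent overlay on the live view. The sketch below shows that blending step with plain NumPy; frame sizes, the alpha value, and the highlighted region are assumptions, and a real system would also register the overlay to the anatomy.

```python
# Sketch: compositing a pregenerated instruction frame as a semitransparent
# overlay on the trainee's live view. Illustrative parameters only.
import numpy as np

def overlay_future_step(live_frame: np.ndarray, instruction_frame: np.ndarray,
                        alpha: float = 0.35) -> np.ndarray:
    """Blend the instruction frame over the live frame with transparency alpha."""
    blended = (1.0 - alpha) * live_frame.astype(np.float32) \
              + alpha * instruction_frame.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Toy frames standing in for the tablet camera view and a library video frame.
live = np.full((480, 640, 3), 120, dtype=np.uint8)
instruction = np.zeros((480, 640, 3), dtype=np.uint8)
instruction[200:280, 250:390] = (0, 255, 0)   # e.g., a highlighted incision region
composite = overlay_future_step(live, instruction)
print(composite.shape, composite.dtype)
```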


Subjects
Mentors , Surgical Procedures, Operative/education , Telemedicine/instrumentation , User-Computer Interface , Clinical Competence , Computers, Handheld , Humans , Time Factors