Results 1-20 of 94
1.
Int J Comput Assist Radiol Surg; 19(7): 1359-1366, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38753135

ABSTRACT

PURPOSE: Preoperative imaging plays a pivotal role in sinus surgery, where CTs offer patient-specific insights into complex anatomy, enabling real-time intraoperative navigation to complement endoscopic imaging. However, surgery elicits anatomical changes not represented in the preoperative model, leaving an inaccurate basis for navigation as surgery progresses. METHODS: We propose a first vision-based approach to updating the preoperative 3D anatomical model using intraoperative endoscopic video in navigated sinus surgery, where relative camera poses are known. We rely on comparisons of intraoperative monocular depth estimates and preoperative depth renders to identify modified regions. The new depths are integrated in these regions through volumetric fusion in a truncated signed distance function (TSDF) representation to generate an intraoperative 3D model that reflects tissue manipulation. RESULTS: We quantitatively evaluate our approach by sequentially updating models for a five-step surgical progression in an ex vivo specimen. We compute the error between correspondences from the updated model and ground-truth intraoperative CT in the region of anatomical modification. The resulting models show a decrease in error during surgical progression, as opposed to an increase when no update is employed. CONCLUSION: Our findings suggest that preoperative 3D anatomical models can be updated using intraoperative endoscopy video in navigated sinus surgery. Future work will investigate improvements to monocular depth estimation as well as removing the need for external navigation systems. The resulting ability to continuously update the patient model may provide surgeons with a more precise understanding of the current anatomical state and paves the way toward a digital twin paradigm for sinus surgery.
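A minimal numpy sketch of the fusion step described above: new depth is integrated into a TSDF volume only where the monocular estimate and the preoperative render disagree. A flattened voxel grid, known intrinsics/poses, and a precomputed change mask are assumed; all names are hypothetical, not the authors' code.

```python
import numpy as np

def update_tsdf(tsdf, weights, voxel_pts, depth, K, cam_pose, change_mask, trunc=5.0):
    """Fuse new depth into a TSDF volume, restricted to modified regions.

    tsdf, weights : (N,) signed distances and fusion weights per voxel
    voxel_pts     : (N, 3) voxel centers in world coordinates
    depth         : (H, W) intraoperative monocular depth (metric scale assumed)
    K, cam_pose   : (3, 3) intrinsics and (4, 4) world-to-camera transform
    change_mask   : (H, W) True where |monocular depth - preop render| exceeds
                    a threshold, i.e. the region modified by surgery
    """
    # Transform voxel centers into the camera frame and project to pixels.
    pts_cam = (cam_pose @ np.c_[voxel_pts, np.ones(len(voxel_pts))].T)[:3].T
    z = pts_cam[:, 2]
    uv = (K @ pts_cam.T).T
    z_safe = np.where(z > 1e-6, z, 1.0)          # avoid division by zero
    u = (uv[:, 0] / z_safe).astype(int)
    v = (uv[:, 1] / z_safe).astype(int)

    H, W = depth.shape
    valid = (z > 1e-6) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    vi, ui = v[valid], u[valid]
    keep = change_mask[vi, ui]                   # only anatomically modified pixels
    idx = np.flatnonzero(valid)[keep]

    # Truncated signed distance: observed depth minus voxel depth along the ray.
    sdf = np.clip(depth[vi[keep], ui[keep]] - z[idx], -trunc, trunc)
    # Weighted running average, as in standard volumetric fusion.
    w = weights[idx]
    tsdf[idx] = (tsdf[idx] * w + sdf) / (w + 1.0)
    weights[idx] = w + 1.0
    return tsdf, weights
```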


Subjects
Endoscopy; Imaging, Three-Dimensional; Models, Anatomic; Surgery, Computer-Assisted; Tomography, X-Ray Computed; Imaging, Three-Dimensional/methods; Humans; Endoscopy/methods; Tomography, X-Ray Computed/methods; Surgery, Computer-Assisted/methods; Paranasal Sinuses/surgery; Paranasal Sinuses/diagnostic imaging
2.
Int J Comput Assist Radiol Surg; 19(7): 1259-1266, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38775904

ABSTRACT

PURPOSE: Monocular SLAM algorithms are the key enabling technology for image-based surgical navigation systems in endoscopic procedures. Due to the visual feature scarcity and unique lighting conditions encountered in endoscopy, classical SLAM approaches perform inconsistently. Many recent approaches to endoscopic SLAM rely on deep learning models. They show promising results when optimized on singular domains such as arthroscopy, sinus endoscopy, colonoscopy, or laparoscopy, but are limited by an inability to generalize to different domains without retraining. METHODS: To address this generality issue, we propose OneSLAM, a monocular SLAM algorithm for surgical endoscopy that works out of the box across several endoscopic domains, including sinus endoscopy, colonoscopy, arthroscopy, and laparoscopy. Our pipeline builds upon robust tracking-any-point (TAP) foundation models to reliably track sparse correspondences across multiple frames, and runs local bundle adjustment to jointly optimize camera poses and a sparse 3D reconstruction of the anatomy. RESULTS: We compare the performance of our method against three strong baselines previously proposed for monocular SLAM in endoscopy and general scenes. OneSLAM performs better than or comparably to existing approaches tailored to domain-specific data in all four tested domains, generalizing across domains without the need for retraining. CONCLUSION: OneSLAM benefits from the convincing performance of TAP foundation models and generalizes to endoscopic sequences of different anatomies, all while performing better than or comparably to domain-specific SLAM approaches. Future research on global loop closure will investigate how to reliably detect loops in endoscopic scenes to reduce accumulated drift and enhance long-term navigation capabilities.
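A compact sketch of the local bundle adjustment stage described above, assuming point tracks and a visibility mask are already produced by a TAP model; poses are parameterized as rotation vectors plus translations. The setup and names are illustrative, not the OneSLAM implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_cams, n_pts, K, tracks, visible):
    """Residuals for local bundle adjustment over TAP point tracks.

    params  : flat vector of [rotvec(3), translation(3)] per camera,
              followed by the flattened (n_pts, 3) structure
    tracks  : (n_cams, n_pts, 2) pixel positions from the TAP model
    visible : (n_cams, n_pts) bool mask of points the tracker kept
    """
    poses = params[:6 * n_cams].reshape(n_cams, 6)
    pts = params[6 * n_cams:].reshape(n_pts, 3)
    res = []
    for c in range(n_cams):
        R = Rotation.from_rotvec(poses[c, :3]).as_matrix()
        p_cam = pts @ R.T + poses[c, 3:]          # world -> camera frame
        proj = p_cam @ K.T
        proj = proj[:, :2] / proj[:, 2:3]         # perspective division
        res.append((proj - tracks[c])[visible[c]].ravel())
    return np.concatenate(res)

# Usage: pack an initial guess x0 (poses then points) and jointly refine:
# result = least_squares(reprojection_residuals, x0, method="trf",
#                        args=(n_cams, n_pts, K, tracks, visible))
```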


Subjects
Algorithms; Endoscopy; Humans; Endoscopy/methods; Imaging, Three-Dimensional/methods; Surgery, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
3.
Nat Med; 29(12): 3033-3043, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37985692

ABSTRACT

Pancreatic ductal adenocarcinoma (PDAC), the deadliest solid malignancy, is typically detected late and at an inoperable stage. Early or incidental detection is associated with prolonged survival, but screening asymptomatic individuals for PDAC using a single test remains unfeasible due to the low prevalence and potential harms of false positives. Non-contrast computed tomography (CT), routinely performed for clinical indications, offers the potential for large-scale screening; however, identification of PDAC using non-contrast CT has long been considered impossible. Here, we develop a deep learning approach, pancreatic cancer detection with artificial intelligence (PANDA), that can detect and classify pancreatic lesions with high accuracy via non-contrast CT. PANDA is trained on a dataset of 3,208 patients from a single center. PANDA achieves an area under the receiver operating characteristic curve (AUC) of 0.986-0.996 for lesion detection in a multicenter validation involving 6,239 patients across 10 centers, outperforms mean radiologist performance by 34.1% in sensitivity and 6.3% in specificity for PDAC identification, and achieves a sensitivity of 92.9% and specificity of 99.9% for lesion detection in a real-world multi-scenario validation of 20,530 consecutive patients. Notably, PANDA used with non-contrast CT shows non-inferiority to radiology reports (using contrast-enhanced CT) in the differentiation of common pancreatic lesion subtypes. PANDA could potentially serve as a new tool for large-scale pancreatic cancer screening.
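The headline numbers above are standard detection metrics. A small illustration of how AUC, sensitivity, and specificity would be computed from per-patient scores (toy data, not the study's):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-patient outputs: 1 = lesion present, scores in [0, 1].
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.05, 0.20, 0.90, 0.75, 0.10, 0.95, 0.30, 0.60])

auc = roc_auc_score(y_true, y_score)

# Sensitivity / specificity at a fixed operating threshold.
threshold = 0.5
y_pred = y_score >= threshold
tp = np.sum(y_pred & (y_true == 1))
tn = np.sum(~y_pred & (y_true == 0))
sensitivity = tp / np.sum(y_true == 1)
specificity = tn / np.sum(y_true == 0)
print(f"AUC={auc:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")
```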


Subjects
Carcinoma, Pancreatic Ductal; Deep Learning; Pancreatic Neoplasms; Humans; Artificial Intelligence; Pancreatic Neoplasms/diagnostic imaging; Pancreatic Neoplasms/pathology; Tomography, X-Ray Computed; Pancreas/diagnostic imaging; Pancreas/pathology; Carcinoma, Pancreatic Ductal/diagnostic imaging; Carcinoma, Pancreatic Ductal/pathology; Retrospective Studies
4.
Int Urogynecol J; 34(11): 2751-2758, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37449987

ABSTRACT

INTRODUCTION AND HYPOTHESIS: The objective was to study the effect of immediate preoperative warm-up using virtual reality simulation on intraoperative robot-assisted laparoscopic hysterectomy (RALH) performance by gynecology trainees (residents and fellows). METHODS: We randomized the first non-emergent RALH of the day involving trainees to either a warm-up or no-warm-up condition. For cases assigned to warm-up, trainees performed a set of exercises on the da Vinci Skills Simulator immediately before the procedure. The supervising attending surgeon, who was not informed whether the trainee had been assigned to warm-up, assessed the trainee's performance using the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Skills (GEARS) immediately after each surgery. RESULTS: We randomized 66 cases and analyzed 58 (30 warm-up, 28 no warm-up), involving 21 trainees. Attending surgeons rated trainees similarly irrespective of warm-up randomization, with mean (SD) OSATS composite scores of 22.6 (4.3; warm-up) vs 21.8 (3.4; no warm-up) and mean GEARS composite scores of 19.2 (3.8; warm-up) vs 18.8 (3.1; no warm-up). The difference in composite scores between warm-up and no warm-up was 0.34 (95% CI: -1.44, 2.13) for OSATS and 0.34 (95% CI: -1.22, 1.90) for GEARS. We also did not observe any significant differences in the component/subscale scores within OSATS and GEARS between cases assigned to warm-up and no warm-up. CONCLUSION: Performing a brief virtual reality-based warm-up before RALH did not significantly improve the intraoperative performance of trainees.
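The reported effect estimates are differences in mean composite scores with 95% confidence intervals. One standard way to produce such an interval is a Welch-type construction; a sketch, not necessarily the authors' exact analysis:

```python
import numpy as np
from scipy import stats

def mean_diff_ci(a, b, alpha=0.05):
    """Difference in group means (a - b) with a Welch-type CI,
    yielding intervals of the form 0.34 (-1.44, 2.13)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    diff = a.mean() - b.mean()
    se = np.sqrt(va + vb)
    # Welch-Satterthwaite degrees of freedom
    dof = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    t = stats.t.ppf(1 - alpha / 2, dof)
    return diff, (diff - t * se, diff + t * se)
```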


Subjects
Laparoscopy; Robotic Surgical Procedures; Robotics; Female; Humans; Computer Simulation; Hysterectomy; Clinical Competence
5.
Int J Comput Assist Radiol Surg; 18(7): 1135-1142, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37160580

ABSTRACT

PURPOSE: Recent advances in computer vision and machine learning have resulted in endoscopic video-based solutions for dense reconstruction of the anatomy. To effectively use these systems in surgical navigation, a reliable image-based technique is required to constantly track the endoscopic camera's position within the anatomy, despite frequent removal and re-insertion. In this work, we investigate the use of recent learning-based keypoint descriptors for six degree-of-freedom camera pose estimation in intraoperative endoscopic sequences and under changes in anatomy due to surgical resection. METHODS: Our method employs a dense structure-from-motion (SfM) reconstruction of the preoperative anatomy, obtained with a state-of-the-art patient-specific learning-based descriptor. During the reconstruction step, each estimated 3D point is associated with a descriptor. This information is employed in the intraoperative sequences to establish 2D-3D correspondences for Perspective-n-Point (PnP) camera pose estimation. We evaluate this method on six intraoperative sequences that include anatomical modifications, obtained from two cadaveric subjects. RESULTS: This approach led to translation and rotation errors of 3.9 mm and 0.2 radians, respectively, while localizing 21.86% of cameras on average over the six sequences. In comparison to an additional learning-based descriptor (HardNet++), the selected descriptor achieves a better percentage of localized cameras with similar pose estimation performance. We further discuss potential error causes and limitations of the proposed approach. CONCLUSION: Patient-specific learning-based descriptors can relocalize images that are well distributed across the inspected anatomy, even where the anatomy is modified. However, camera relocalization in endoscopic sequences remains a persistently challenging problem, and future research is necessary to increase the robustness and accuracy of this technique.
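A minimal sketch of the relocalization step described above: match intraoperative descriptors against the descriptors attached to SfM points, then solve PnP with RANSAC. The descriptor arrays are hypothetical; OpenCV is used for matching and the solver.

```python
import numpy as np
import cv2

def relocalize(pts3d, desc3d, kps2d, desc2d, K):
    """Estimate camera pose from 2D-3D matches via descriptor matching + PnP.

    pts3d, desc3d : (N, 3) SfM points and their learned descriptors
    kps2d, desc2d : (M, 2) intraoperative keypoints and descriptors
    K             : (3, 3) camera intrinsics
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc2d.astype(np.float32), desc3d.astype(np.float32))
    obj = np.float32([pts3d[m.trainIdx] for m in matches])   # 3D side
    img = np.float32([kps2d[m.queryIdx] for m in matches])   # 2D side
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, None, reprojectionError=4.0, iterationsCount=1000)
    return (rvec, tvec) if ok else None
```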


Subjects
Endoscopy; Surgery, Computer-Assisted; Humans; Endoscopy/methods; Rotation
6.
Laryngoscope; 133(3): 500-505, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35357011

ABSTRACT

OBJECTIVE: Endoscopic surgery has a considerable learning curve due to dissociation of the visual-motor axes, coupled with decreased tactile feedback and mobility. In particular, endoscopic sinus surgery (ESS) lacks objective skill assessment metrics to provide specific feedback to trainees. This study aims to identify summary metrics from eye tracking, endoscope motion, and tool motion to objectively assess surgeons' ESS skill. METHODS: In this cross-sectional study, expert and novice surgeons performed ESS tasks of inserting an endoscope and tool into a cadaveric nose, touching an anatomical landmark, and withdrawing the endoscope and tool out of the nose. Tool and endoscope motion were collected using an electromagnetic tracker, and eye gaze was tracked using an infrared camera. Three expert surgeons provided binary assessments of low/high skill. Twenty summary statistics were calculated for eye, tool, and endoscope motion and used in logistic regression models to predict surgical skill. RESULTS: Fourteen metrics (10 eye gaze, 2 tool motion, and 2 endoscope motion) were significantly different between surgeons with low and high skill. Models to predict skill for 6 of the 9 ESS tasks had an AUC >0.95. A combined model of all tasks (AUC 0.95, PPV 0.93, NPV 0.89) included metrics from eye-tracking data and endoscope motion, indicating that these metrics are transferable across tasks. CONCLUSIONS: Eye gaze, endoscope, and tool motion data can provide an objective and accurate measurement of ESS surgical performance. Intraoperative incorporation of these algorithmic techniques could allow automated skill assessment for trainees learning endoscopic surgery. LEVEL OF EVIDENCE: N/A. Laryngoscope, 133:500-505, 2023.
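A small sketch of the modeling setup: summary motion and gaze metrics as features in a logistic regression, scored by cross-validated AUC. The data here are synthetic placeholders, illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical design matrix: one row per trial, columns are summary
# metrics (eye-gaze, tool-motion, and endoscope-motion statistics).
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 14))     # e.g. the 14 discriminative metrics
y = np.repeat([0, 1], 20)         # expert ratings: 0 = low, 1 = high skill

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f}")
```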


Subjects
Eye-Tracking Technology; Surgeons; Humans; Cross-Sectional Studies; Endoscopy; Endoscopes; Clinical Competence
7.
NPJ Digit Med; 5(1): 100, 2022 Jul 19.
Article in English | MEDLINE | ID: mdl-35854145

ABSTRACT

The use of digital technology is increasing rapidly across surgical specialities, yet there is no consensus on the term 'digital surgery'. This is critical, as digital health technologies present technical, governance, and legal challenges that are unique to the surgeon and surgical patient. We aim to define the term digital surgery and the ethical issues surrounding its clinical application, and to identify barriers and research goals for future practice. Thirty-eight international experts, across the fields of surgery, AI, industry, law, ethics, and policy, participated in a four-round Delphi exercise. Issues were generated by an expert panel and a public panel through a scoping questionnaire around key themes identified from the literature, and voted upon in two subsequent questionnaire rounds. Consensus was defined as >70% of the panel deeming a statement important and <30% deeming it unimportant. A final online meeting was held to discuss the consensus statements. The definition of digital surgery as the use of technology for the enhancement of preoperative planning, surgical performance, therapeutic support, or training, to improve outcomes and reduce harm, achieved 100% consensus agreement. We highlight key ethical issues concerning data, privacy, confidentiality and public trust, consent, law, litigation and liability, and commercial partnerships within digital surgery, and identify barriers and research goals for future practice. Developers and users of digital surgery must not only have an awareness of the ethical issues surrounding digital applications in healthcare, but also of the ethical considerations unique to digital surgery. Future research into these issues must involve all digital surgery stakeholders, including patients.
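The consensus rule above is mechanical and easy to state in code; a minimal sketch, with a hypothetical vote encoding:

```python
def reaches_consensus(votes, important_cut=0.70, unimportant_cut=0.30):
    """Apply the Delphi consensus rule: >70% of the panel rate the
    statement important and <30% rate it unimportant. `votes` is a
    list of strings in {"important", "neutral", "unimportant"}."""
    n = len(votes)
    frac_imp = sum(v == "important" for v in votes) / n
    frac_unimp = sum(v == "unimportant" for v in votes) / n
    return frac_imp > important_cut and frac_unimp < unimportant_cut

print(reaches_consensus(["important"] * 30 + ["neutral"] * 5 + ["unimportant"] * 3))
```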

8.
Int J Comput Assist Radiol Surg; 17(10): 1801-1811, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35635639

ABSTRACT

PURPOSE: Surgeons' skill in the operating room is a major determinant of patient outcomes. Assessment of surgeons' skill is necessary to improve patient outcomes and quality of care through surgical training and coaching. Methods for video-based assessment of surgical skill can provide objective and efficient tools for surgeons. Our work introduces a new method based on attention mechanisms and provides a comprehensive comparative analysis of state-of-the-art methods for video-based assessment of surgical skill in the operating room. METHODS: Using a dataset of 99 videos of capsulorhexis, a critical step in cataract surgery, we evaluated image feature-based methods and two deep learning methods to assess skill from RGB videos. In the first method, we predict instrument tips as keypoints and predict surgical skill using temporal convolutional neural networks. In the second method, we propose a frame-wise encoder (2D convolutional neural network) followed by a temporal model (recurrent neural network), both of which are augmented by visual attention mechanisms. We computed the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and predictive values through fivefold cross-validation. RESULTS: For classifying a binary skill label (expert vs. novice), the range of AUC estimates was 0.49 (95% confidence interval; CI = 0.37 to 0.60) to 0.76 (95% CI = 0.66 to 0.85) for image feature-based methods. None of the methods achieved consistently high sensitivity and specificity. For the deep learning methods, the AUC was 0.79 (95% CI = 0.70 to 0.88) using keypoints alone, and 0.78 (95% CI = 0.69 to 0.88) and 0.75 (95% CI = 0.65 to 0.85) with and without attention mechanisms, respectively. CONCLUSION: Deep learning methods are necessary for video-based assessment of surgical skill in the operating room. Attention mechanisms improved the discrimination ability of the network. Our findings should be evaluated for external validity in other datasets.
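A minimal PyTorch sketch of the second architecture family described above: a frame-wise CNN encoder, a recurrent temporal model, and attention pooling over time. Layer sizes are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SkillNet(nn.Module):
    """Frame-wise CNN encoder -> GRU -> temporal attention -> skill logit."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(              # tiny per-frame 2D CNN
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)           # temporal attention scores
        self.head = nn.Linear(hidden, 1)           # expert-vs-novice logit

    def forward(self, video):                      # video: (B, T, 3, H, W)
        B, T = video.shape[:2]
        feats = self.encoder(video.flatten(0, 1)).view(B, T, -1)
        h, _ = self.rnn(feats)                     # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)     # attention over time
        pooled = (w * h).sum(dim=1)                # attention-weighted pooling
        return self.head(pooled).squeeze(-1)

logits = SkillNet()(torch.randn(2, 8, 3, 64, 64))  # toy batch of 8-frame clips
```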


Subjects
Cataract Extraction; Ophthalmology; Surgeons; Capsulorhexis; Humans; Neural Networks, Computer
9.
IEEE Trans Med Robot Bionics; 4(1): 28-37, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35368731

ABSTRACT

Conventional neuro-navigation can be challenged in targeting deep brain structures via transventricular neuroendoscopy due to unresolved geometric error following soft-tissue deformation. Current robot-assisted endoscopy techniques are fairly limited, primarily serving to plan trajectories and provide a stable scope holder. We report the implementation of a robot-assisted ventriculoscopy (RAV) system for 3D reconstruction, registration, and augmentation of the neuroendoscopic scene with intraoperative imaging, enabling guidance even in the presence of tissue deformation and providing visualization of structures beyond the endoscopic field of view. Phantom studies were performed to quantitatively evaluate image sampling requirements, registration accuracy, and computational runtime for two reconstruction methods and a variety of clinically relevant ventriculoscope trajectories. A median target registration error of 1.2 mm was achieved with an update rate of 2.34 frames per second, validating the RAV concept and motivating translation to future clinical studies.
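A small sketch of how a median target registration error, as reported above, would be computed from registered fiducials (names hypothetical):

```python
import numpy as np

def median_tre(T, pts_src, pts_tgt):
    """Median target registration error: apply a 4x4 rigid transform T to
    source fiducials and compare to their known target-frame positions
    (values in mm if the inputs are in mm)."""
    src_h = np.c_[pts_src, np.ones(len(pts_src))]   # homogeneous coordinates
    mapped = (T @ src_h.T).T[:, :3]
    return np.median(np.linalg.norm(mapped - pts_tgt, axis=1))
```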

10.
Facial Plast Surg Aesthet Med; 24(6): 472-477, 2022.
Article in English | MEDLINE | ID: mdl-35255228

ABSTRACT

Background: Surgeons must select cases whose complexity aligns with their skill set. Objectives: To determine how accurately trainees report involvement in procedures, judge case complexity, and assess their own skills. Methods: We recruited attendings and trainees from two otolaryngology departments. After performing septoplasty, they completed identical surveys regarding case complexity, achievement of goals, who performed which steps, and trainee skill using the septoplasty global assessment tool (SGAT) and visual analog scale (VAS). Agreement regarding which steps were performed by the trainee was assessed with Cohen's kappa coefficients (κ). Correlations between trainee and attending responses were measured with Spearman's correlation coefficients (rho). Results: Seven attendings and 42 trainees completed 181 paired surveys. Trainees and attendings sometimes disagreed about which steps were performed by trainees (range of κ = 0.743-0.846). Correlation between attending and trainee responses was low for VAS skill ratings (range of rho = 0.12-0.34), SGAT questions (range of rho = 0.03-0.53), and evaluation of case complexity (range of rho = 0.24-0.48). Conclusion: Trainees sometimes disagree with attendings about which septoplasty steps they perform and are limited in their ability to judge complexity, goals, and their skill.
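The agreement statistics above are standard; a sketch of computing Cohen's κ and Spearman's ρ from paired survey responses (toy data, illustrative only):

```python
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

# Hypothetical paired responses for one septoplasty step (1 = trainee did it).
trainee_report = [1, 1, 0, 1, 0, 1, 1, 0]
attending_report = [1, 0, 0, 1, 0, 1, 1, 1]
kappa = cohen_kappa_score(trainee_report, attending_report)

# Hypothetical paired VAS skill ratings for the same cases.
trainee_vas = [62, 55, 70, 48, 80, 66]
attending_vas = [50, 58, 52, 45, 72, 49]
rho, p = spearmanr(trainee_vas, attending_vas)
print(f"kappa={kappa:.2f} rho={rho:.2f} (p={p:.3f})")
```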


Subjects
Otolaryngology; Rhinoplasty; Surgeons; Humans; Operating Rooms; Clinical Competence
11.
Eur Urol Focus; 8(2): 613-622, 2022 Mar.
Article in English | MEDLINE | ID: mdl-33941503

ABSTRACT

CONTEXT: As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. OBJECTIVES: To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI-driven applications in surgical training that address currently recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach based on the current evidence, understanding of AI, and available technologies, by seeking consensus from an expert committee. EVIDENCE ACQUISITION: The project was carried out in 3 phases: (1) a steering group was formed to review the literature and summarise current evidence; (2) a larger expert panel convened and discussed the ethical implications of AI application based on the current evidence, and a survey was created with input from panel members; (3) panel-based consensus findings were determined using an online Delphi process to formulate guidance. Thirty experts in AI implementation and/or training, including clinicians, academics, and industry representatives, contributed. The Delphi process underwent 3 rounds. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥80% agreement. EVIDENCE SYNTHESIS: There was a 100% response rate across all 3 rounds. The resulting guidance showed good internal consistency, with a Cronbach alpha of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: (1) data protection and privacy; (2) reproducibility and transparency; (3) predictive analytics; (4) inherent biases; (5) areas of training most likely to benefit from AI. CONCLUSIONS: Using the Delphi methodology, we achieved international consensus among experts to develop and reach content validation for guidance on the ethical implications of AI in surgical training, providing an ethical foundation for launching narrow AI applications in surgical training. This guidance will require further validation. PATIENT SUMMARY: As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. In this paper we provide guidance on the ethical implications of AI in surgical training.
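Internal consistency is reported above via Cronbach's alpha; a minimal implementation of the standard formula:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```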


Subjects
Robotic Surgical Procedures; Artificial Intelligence; Consensus; Delphi Technique; Humans; Reproducibility of Results
12.
Med Image Anal; 76: 102306, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34879287

ABSTRACT

Recent developments in data science in general and machine learning in particular have transformed the way experts envision the future of surgery. Surgical Data Science (SDS) is a new research field that aims to improve the quality of interventional healthcare through the capture, organization, analysis and modeling of data. While an increasing number of data-driven approaches and clinical applications have been studied in the fields of radiological and clinical data science, translational success stories are still lacking in surgery. In this publication, we shed light on the underlying reasons and provide a roadmap for future advances in the field. Based on an international workshop involving leading researchers in the field of SDS, we review current practice, key achievements and initiatives as well as available standards and tools for a number of topics relevant to the field, namely (1) infrastructure for data acquisition, storage and access in the presence of regulatory constraints, (2) data annotation and sharing and (3) data analytics. We further complement this technical perspective with (4) a review of currently available SDS products and the translational progress from academia and (5) a roadmap for faster clinical translation and exploitation of the full potential of SDS, based on an international multi-round Delphi process.


Subjects
Data Science; Machine Learning; Humans
13.
J Med Imaging (Bellingham); 8(6): 065001, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34796250

ABSTRACT

Purpose: Surgery involves modifying anatomy to achieve a goal. Reconstructing anatomy can facilitate surgical care through surgical planning, real-time decision support, or anticipating outcomes. Tool motion is a rich source of data that can be used to quantify anatomy. Our work develops and validates a method for reconstructing the nasal septum from unstructured motion of the Cottle elevator during the elevation phase of septoplasty surgery, without needing to explicitly delineate the surface of the septum. Approach: The proposed method uses iterative closest point registration to initially register a template septum to the tool motion. Subsequently, statistical shape modeling with iterative most likely oriented point registration is used to fit the reconstructed septum to Cottle tip position and orientation during flap elevation. Regularization of the shape model and transformation is incorporated. The proposed methods were validated on 10 septoplasty surgeries performed on cadavers by operators of varying experience level. Preoperative CT images of the cadaver septums were segmented as ground truth. Results: We estimated reconstruction error as the difference between the projections of the Cottle tip onto the surface of the reconstructed septum and the ground-truth septum segmented from the CT image. We found translational differences of 2.74 (2.06-2.81) mm and rotational differences of 8.95 (7.11-10.55) deg [median (interquartile range)] between the reconstructed septum and the ground-truth septum, given the optimal regularization parameters. Conclusions: Accurate reconstruction of the nasal septum can be achieved from tool tracking data during septoplasty surgery on cadavers. This enables understanding of the septal anatomy without need for traditional medical imaging. This result may be used to facilitate surgical planning, intraoperative care, or skills assessment.
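A minimal sketch of the initial registration step described above: plain point-to-point ICP between a template septum and tool-tip samples, with a reflection-safe Kabsch solve. The paper's statistical shape model and regularization are omitted; names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(template_pts, tool_pts, iters=50):
    """Rigidly register template septum points (N, 3) to tool-tip samples (M, 3)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(tool_pts)                     # target stays fixed
    src = template_pts.copy()
    for _ in range(iters):
        _, idx = tree.query(src)                 # closest tool-tip samples
        tgt = tool_pts[idx]
        mu_s, mu_t = src.mean(0), tgt.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (tgt - mu_t))
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T                  # Kabsch, reflection-safe
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step   # accumulate the transform
    return R, t
```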

14.
Surg Endosc; 35(9): 4918-4929, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34231065

ABSTRACT

BACKGROUND: The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration. METHODS: Four working groups were formed from a pool of participants that included clinicians, engineers, and data scientists. The working groups were focused on four themes: (1) temporal models, (2) actions and tasks, (3) tissue characteristics and general anatomy, and (4) software and data structure. A modified Delphi process was utilized to create a consensus survey based on suggested recommendations from each of the working groups. RESULTS: After three Delphi rounds, consensus was reached on recommendations for annotation within each of these domains. A hierarchy for annotation of temporal events in surgery was established. CONCLUSIONS: While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general framework for annotation presented here lay the foundation for standardization. This type of framework is critical to enabling diverse datasets, performance benchmarks, and collaboration.
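A hierarchical annotation of temporal events, as described above, lends itself to a simple nested data structure; a sketch with hypothetical field names, not the consensus schema itself:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TemporalEvent:
    """One node in a hierarchical annotation of surgical video:
    phases contain steps, steps contain actions, down to atomic events."""
    label: str            # e.g. "calot triangle dissection", "clip applied"
    start_frame: int
    end_frame: int
    children: List["TemporalEvent"] = field(default_factory=list)

phase = TemporalEvent("calot triangle dissection", 1200, 5400, children=[
    TemporalEvent("grasp gallbladder", 1200, 1500),
    TemporalEvent("dissect peritoneum", 1500, 4000),
])
```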


Subjects
Machine Learning; Consensus; Delphi Technique; Humans; Surveys and Questionnaires
15.
J Digit Imaging; 34(1): 27-35, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33432446

ABSTRACT

Although much deep learning research has focused on mammographic detection of breast cancer, relatively little attention has been paid to mammography triage for radiologist review. The purpose of this study was to develop and test DeepCAT, a deep learning system for mammography triage based on suspicion of cancer. Specifically, we evaluate DeepCAT's ability to provide two augmentations to radiologists: (1) discarding images unlikely to have cancer from radiologist review and (2) prioritization of images likely to contain cancer. We used 1878 2D-mammographic images (CC & MLO) from the Digital Database for Screening Mammography to develop DeepCAT, a deep learning triage system composed of 2 components: (1) mammogram classifier cascade and (2) mass detector, which are combined to generate an overall priority score. This priority score is used to order images for radiologist review. Of 595 testing images, DeepCAT recommended low priority for 315 images (53%), of which none contained a malignant mass. In evaluation of prioritizing images according to likelihood of containing cancer, DeepCAT's study ordering required an average of 26 adjacent swaps to obtain perfect review order. Our results suggest that DeepCAT could substantially increase efficiency for breast imagers and effectively triage review of mammograms with malignant masses.
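The "adjacent swaps to perfect review order" measure above is the inversion count of the proposed queue (bubble-sort distance); a minimal sketch:

```python
def adjacent_swaps_to_sort(review_order):
    """Number of adjacent swaps needed to sort a review queue, i.e. the
    inversion count. `review_order` holds the ground-truth ranks in the
    order the triage system proposed them."""
    order = list(review_order)
    swaps = 0
    for i in range(len(order)):                  # plain bubble sort, counting swaps
        for j in range(len(order) - 1 - i):
            if order[j] > order[j + 1]:
                order[j], order[j + 1] = order[j + 1], order[j]
                swaps += 1
    return swaps

print(adjacent_swaps_to_sort([2, 1, 4, 3]))      # -> 2
```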


Subjects
Breast Neoplasms; Mammography; Breast Neoplasms/diagnostic imaging; Computers; Early Detection of Cancer; Female; Humans; Triage
16.
Mil Med; 186(Suppl 1): 288-294, 2021 Jan 25.
Article in English | MEDLINE | ID: mdl-33499518

ABSTRACT

INTRODUCTION: Short response time is critical for future military medical operations in austere settings or remote areas. Effective patient care at the point of injury can greatly benefit from the integration of semi-autonomous robotic systems. To achieve autonomy, robots would require massive libraries of maneuvers collected with the goal of training machine learning algorithms. Although this is attainable in controlled settings, obtaining surgical data in austere settings can be difficult. Hence, in this article, we present the Dexterous Surgical Skill (DESK) database for knowledge transfer between robots. The peg transfer task was selected as it is one of the six main tasks of laparoscopic training. In addition, we provide a machine learning framework to evaluate novel transfer learning methodologies on this database. METHODS: A set of surgical gestures was collected for a peg transfer task, composed of seven atomic maneuvers referred to as surgemes. The collected DESK dataset comprises a set of surgical robotic skills using four robotic platforms: Taurus II, simulated Taurus II, YuMi, and the da Vinci Research Kit. We then explored two different learning scenarios: no-transfer and domain-transfer. In the no-transfer scenario, the training and testing data were obtained from the same domain; in the domain-transfer scenario, the training data were a blend of simulated and real robot data, tested on a real robot. RESULTS: Using simulation data to train the learning algorithms enhances performance on the real robot where limited or no real data are available. The transfer model showed an accuracy of 81% for the YuMi robot when the ratio of real-to-simulated data was 22% to 78%. For the Taurus II and the da Vinci, the model showed accuracies of 97.5% and 93%, respectively, training only with simulation data. CONCLUSIONS: The results indicate that simulation can be used to augment training data to enhance the performance of learned models in real scenarios. This shows potential for the future use of surgical data from the operating room in deployable surgical robots in remote areas.
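A small sketch of assembling a domain-transfer training set at a fixed real-to-simulated ratio (e.g., 22%/78%); the function and sizes are hypothetical, not the DESK framework itself:

```python
import numpy as np

def blend_training_set(X_real, y_real, X_sim, y_sim, real_fraction=0.22, seed=0):
    """Build a training set where `real_fraction` of the samples come from
    the real robot and the remainder from simulation."""
    rng = np.random.default_rng(seed)
    n = len(X_real) + len(X_sim)                 # target size (illustrative)
    n_real = int(round(real_fraction * n))
    ri = rng.choice(len(X_real), size=min(n_real, len(X_real)), replace=False)
    si = rng.choice(len(X_sim), size=min(n - n_real, len(X_sim)), replace=False)
    X = np.concatenate([X_real[ri], X_sim[si]])
    y = np.concatenate([y_real[ri], y_sim[si]])
    p = rng.permutation(len(X))                  # shuffle domains together
    return X[p], y[p]
```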


Subjects
Robotics; Clinical Competence; Computer Simulation; Humans; Laparoscopy; Machine Learning
17.
Sci Rep; 10(1): 22208, 2020 Dec 17.
Article in English | MEDLINE | ID: mdl-33335191

ABSTRACT

AI is becoming ubiquitous, revolutionizing many aspects of our lives. In surgery, it is still a promise. AI has the potential to improve surgeon performance and impact patient care, from post-operative debrief to real-time decision support. But how much data is needed by an AI-based system to learn surgical context with high fidelity? To answer this question, we leveraged a large-scale, diverse cholecystectomy video dataset. We assessed surgical workflow recognition and report a deep learning system that not only detects surgical phases, but does so with high accuracy and is able to generalize to new settings and unseen medical centers. Our findings provide a solid foundation for translating AI applications from research to practice, ushering in a new era of surgical intelligence.

18.
Int J Comput Assist Radiol Surg; 15(7): 1187-1194, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32385598

ABSTRACT

PURPOSE: Current virtual reality-based (VR) simulators for robot-assisted minimally invasive surgery (RAMIS) training lack effective teaching and coaching. Our first objective was to develop an automated teaching framework for VR training in RAMIS. Second, we wanted to study the effect of such real-time teaching cues on surgical technical skill acquisition. Third, we wanted to assess skill in terms of surgical technique in addition to traditional time and motion efficiency metrics. METHODS: We implemented six teaching cues within a needle-passing task on the da Vinci Skills Simulator platform (noncommercial research version). These teaching cues are graphical overlays designed to demonstrate ideal surgical technique, e.g., what path to follow while passing the needle through tissue. We created three coaching modes: TEACH (continuous demonstration), METRICS (demonstration triggered by performance metrics), and USER (demonstration upon user request). We conducted a randomized controlled trial in which the experimental group practiced using automated teaching and the control group practiced in a self-learning manner without automated teaching. RESULTS: We analyzed data from 30 participants (14 in the experimental and 16 in the control group). After three practice repetitions, the control group showed higher improvement in time and motion efficiency, while the experimental group showed higher improvement in surgical technique compared with their baseline measurements. The experimental group showed more improvement than the control group on a surgical technique metric (the angle at which the needle is grasped by an instrument), and the difference between groups was statistically significant. CONCLUSION: In a pilot randomized controlled trial, we observed that automated teaching cues can improve the performance of surgical technique in a VR simulator for RAMIS needle passing. Our study was limited by its recruitment of nonsurgeons and evaluation of a single configuration of coaching modes.


Subjects
Clinical Competence; Computer Simulation; Minimally Invasive Surgical Procedures/education; Robotic Surgical Procedures/education; Simulation Training; Virtual Reality; Cues; Humans; Needles; User-Computer Interface
19.
Int J Comput Assist Radiol Surg; 15(8): 1369-1377, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32430693

ABSTRACT

PURPOSE: This paper introduces the concept of using an additional intracorporeal camera for the specific goal of training and skill assessment and explores the benefits of such an approach. This additional camera can provide an additional view of the surgical scene, and we hypothesize that this additional view would improve surgical training and skill assessment in robot-assisted surgery. METHODS: We developed a multi-camera, multi-view system, and we conducted two user studies ([Formula: see text]) to evaluate its effectiveness for training and skill assessment. In the training user study, subjects were divided into two groups: a single-view group and a dual-view group. The skill assessment study was a within-subject study, in which every subject was shown single- and dual view recorded videos of a surgical training task, and the goal was to count the number of errors committed in each video. RESULTS: The results show the effectiveness of using an additional intracorporeal camera view for training and skill assessment. The benefits of this view are modest for skill assessment as it improves the assessment accuracy by approximately 9%. For training, the additional camera view is clearly more effective. Indeed, the dual-view group is 57% more accurate than the single-view group in a retention test. In addition, the dual-view group is 35% more accurate and 25% faster than the single-view group in a transfer test. CONCLUSION: A multi-camera, multi-view system has the potential to significantly improve training and moderately improve skill assessment in robot-assisted surgery. One application of our work is to include an additional camera view in existing virtual reality surgical training simulators to realize its benefits in training. The views from the additional intracorporeal camera can also be used to improve on existing surgical skill assessment criteria used in training systems for robot-assisted surgery.


Subjects
Clinical Competence; Robotic Surgical Procedures; Humans; Virtual Reality
20.
Knee; 27(2): 535-542, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31883760

ABSTRACT

BACKGROUND: Preoperative identification of knee arthroplasty is important for planning revision surgery. However, up to 10% of implants are not identified prior to surgery. The purposes of this study were to develop and test the performance of a deep learning system (DLS) for the automated radiographic 1) identification of the presence or absence of a total knee arthroplasty (TKA); 2) classification of TKA vs. unicompartmental knee arthroplasty (UKA); and 3) differentiation between two different primary TKA models. METHOD: We collected 237 anteroposterior (AP) knee radiographs with equal proportions of native knees, TKA, and UKA and 274 AP knee radiographs with equal proportions of two TKA models. Data augmentation was used to increase the number of images for deep convolutional neural network (DCNN) training. A DLS based on DCNNs was trained on these images. Receiver operating characteristic (ROC) curves with area under the curve (AUC) were generated. Heatmaps were created using class activation mapping (CAM) to identify image features most important for DCNN decision-making. RESULTS: DCNNs trained to detect TKA and distinguish between TKA and UKA both achieved AUC of 1. Heatmaps demonstrated appropriate emphasis of arthroplasty components in decision-making. The DCNN trained to distinguish between the two TKA models achieved AUC of 1. Heatmaps showed emphasis of specific unique features of the TKA model designs, such as the femoral component anterior flange shape. CONCLUSIONS: DCNNs can accurately identify presence of TKA and distinguish between specific arthroplasty designs. This proof-of-concept could be applied towards identifying other prosthesis models and prosthesis-related complications.
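Heatmaps of the kind described above can be produced with classic class activation mapping when a network ends in global average pooling plus a linear head; a minimal sketch, not the study's exact code:

```python
import torch
import torch.nn.functional as F

def class_activation_map(feature_maps, fc_weight, class_idx):
    """Classic CAM for a GAP + linear-head classifier.

    feature_maps : (C, h, w) activations from the last conv layer
    fc_weight    : (n_classes, C) weights of the final linear layer
    Returns an (h, w) heatmap of the regions (e.g. the femoral component
    flange) that drove the prediction, normalized to [0, 1]."""
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], feature_maps)
    cam = F.relu(cam)                 # keep positive evidence only
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)
```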


Subjects
Arthroplasty, Replacement, Knee/classification; Decision Support Techniques; Deep Learning; Knee Joint/surgery; Osteoarthritis, Knee/surgery; Aged; Arthroplasty, Replacement, Knee/methods; Female; Humans; Knee Joint/diagnostic imaging; Male; Middle Aged; Osteoarthritis, Knee/classification; Osteoarthritis, Knee/diagnosis; Radiography; Reoperation; Treatment Outcome