Results 1 - 3 of 3
1.
Ann Surg ; 280(1): 13-20, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38390732

ABSTRACT

OBJECTIVE: To develop a pioneering surgical anonymization algorithm for reliable and accurate real-time removal of out-of-body images, validated across various robotic platforms. BACKGROUND: The use of surgical video data has become common practice in enhancing research and training. Video sharing requires complete anonymization, which, in the case of endoscopic surgery, entails the removal of all nonsurgical video frames in which the endoscope can record the patient or operating room staff. To date, no openly available algorithmic solution for surgical anonymization offers reliable real-time anonymization for video streaming that is also robotic-platform- and procedure-independent. METHODS: A data set of 63 surgical videos of 6 procedures performed on 4 robotic systems was annotated for out-of-body sequences. The resulting 496,828 images were used to develop a deep learning algorithm that automatically detected out-of-body frames. Our solution was subsequently benchmarked against existing anonymization methods. In addition, we offer a postprocessing step to enhance performance and test a low-cost setup for real-time anonymization during live surgery streaming. RESULTS: Framewise anonymization yielded a receiver operating characteristic area under the curve score of 99.46% on unseen procedures, increasing to 99.89% after postprocessing. Our Robotic Anonymization Network outperforms previous state-of-the-art algorithms, even on unseen procedure types, despite the fact that alternative solutions are explicitly trained on these procedures. CONCLUSIONS: Our deep learning model, Robotic Anonymization Network, offers reliable, accurate, and safe real-time anonymization during complex and lengthy surgical procedures regardless of the robotic platform. The model can be used in real time for surgical live streaming and is openly available.
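The framewise-plus-postprocessing pipeline described above can be illustrated with a minimal sketch. This is not the published Robotic Anonymization Network: the per-frame out-of-body probabilities, threshold, and window size here are hypothetical stand-ins for a trained classifier's output, and the postprocessing is a simple median filter over neighboring frames.

```python
# Sketch of framewise anonymization with a temporal postprocessing step.
# In practice the probabilities would come from a trained deep learning
# classifier; here they are hard-coded for illustration.

def smooth(probs, window=3):
    """Median-filter per-frame probabilities to suppress single-frame flicker.

    At the clip boundaries the shortened window biases toward the higher
    value, i.e. toward blanking -- the fail-safe direction for anonymization.
    """
    half = window // 2
    out = []
    for i in range(len(probs)):
        lo, hi = max(0, i - half), min(len(probs), i + half + 1)
        out.append(sorted(probs[lo:hi])[(hi - lo) // 2])
    return out

def frames_to_blank(probs, threshold=0.5, window=3):
    """Indices of frames to replace with a blank (anonymized) image."""
    return [i for i, p in enumerate(smooth(probs, window)) if p >= threshold]

raw = [0.1, 0.9, 0.1, 0.8, 0.9, 0.95, 0.2, 0.1]
print(frames_to_blank(raw))  # isolated low/high scores are smoothed away
```

The point of the postprocessing is visible in the example: the isolated in-body score at frame 2 is overruled by its out-of-body neighbors, which is the kind of consistency gain that lifted the reported area under the curve from 99.46% to 99.89%.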


Subjects
Algorithms, Robotic Surgical Procedures, Humans, Data Anonymization, Video Recording, Deep Learning
2.
Surg Endosc ; 36(11): 8533-8548, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35941310

ABSTRACT

BACKGROUND: Artificial intelligence (AI) holds tremendous potential to reduce surgical risks and improve surgical assessment. Machine learning, a subfield of AI, can be used to analyze surgical video and imaging data. Manual annotations provide ground truth for the desired target features, yet methodological explorations of annotation remain limited to date. Here, we provide an exploratory analysis of the requirements and methods of instrument annotation in a multi-institutional team from two specialized AI centers and compile our lessons learned. METHODS: We developed a bottom-up approach for team annotation of robotic instruments in robot-assisted partial nephrectomy (RAPN), which was subsequently validated in robot-assisted minimally invasive esophagectomy (RAMIE). Furthermore, instrument annotation methods were evaluated for their use in machine learning algorithms. Overall, we evaluated the efficiency and transferability of the proposed team approach and quantified performance metrics (e.g., time required per frame for each annotation modality) between RAPN and RAMIE. RESULTS: We found an image sampling frequency of 0.05 Hz to be adequate for instrument annotation. The bottom-up approach to annotation training and management resulted in accurate annotations and proved efficient for annotating large datasets. The proposed annotation methodology was transferable between RAPN and RAMIE. The average annotation time for RAPN pixel annotation ranged from 4.49 to 12.6 min per image; vector annotation averaged 2.92 min per image. Similar annotation times were found for RAMIE. Lastly, we elaborate on common pitfalls encountered throughout the annotation process. CONCLUSIONS: We propose a successful bottom-up approach to annotator team composition, applicable to any surgical annotation project. Our results set the foundation for starting AI projects on instrument detection, segmentation, and pose estimation. Given the immense annotation burden resulting from spatial instrument annotation, further analysis of sampling frequency and annotation detail needs to be conducted.
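The 0.05 Hz sampling frequency reported above translates directly into which frames of a video get annotated. A minimal sketch, assuming a fixed-frame-rate recording (the 25 fps clip in the example is hypothetical, not from the study):

```python
# Select annotation frames from a video by sampling at a fixed frequency.

def sample_frame_indices(total_frames, fps, sample_hz=0.05):
    """Indices of frames to extract when sampling a video at sample_hz.

    At 0.05 Hz, one frame every 20 seconds of footage is annotated.
    """
    step = int(round(fps / sample_hz))  # e.g. 25 fps / 0.05 Hz -> every 500th frame
    return list(range(0, total_frames, step))

# A 2-minute clip at 25 fps (3000 frames) yields 6 annotation frames:
print(sample_frame_indices(3000, 25))  # [0, 500, 1000, 1500, 2000, 2500]
```

This makes the annotation-burden trade-off concrete: halving the sampling interval doubles the number of images that must be pixel- or vector-annotated at the multi-minute per-image costs reported above.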


Subjects
Laparoscopy, Robotic Surgical Procedures, Robotics, Humans, Robotic Surgical Procedures/methods, Artificial Intelligence, Nephrectomy/methods
3.
Eur Urol ; 84(1): 86-91, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36941148

ABSTRACT

Several barriers prevent the integration and adoption of augmented reality (AR) in robotic renal surgery despite the increased availability of virtual three-dimensional (3D) models. Apart from correct model alignment and deformation, not all instruments are clearly visible in AR: superimposing a 3D model on top of the surgical stream, including the instruments, can create a potentially hazardous surgical situation. We demonstrate real-time instrument detection during AR-guided robot-assisted partial nephrectomy and show the generalization of our algorithm to AR-guided robot-assisted kidney transplantation. We developed an algorithm using deep learning networks to detect all nonorganic items, trained on 65,927 manually labeled instruments across 15,100 frames. Our setup, which runs on a standalone laptop, was deployed in three different hospitals and used by four different surgeons. Instrument detection is a simple and feasible way to enhance the safety of AR-guided surgery. Future investigations should strive to optimize video processing to minimize the 0.5-s delay currently experienced. General AR applications also need further optimization, including detection and tracking of organ deformation, before full clinical implementation.
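The safety idea in this abstract, keeping detected instruments visible instead of hiding them under the 3D model, amounts to mask-aware compositing. A minimal sketch with toy 2x2 "frames" of single-character pixels (the real system works on video frames with a deep-learning-derived mask; everything below is illustrative):

```python
# Composite a 3D-model overlay onto a surgical frame, but keep the original
# pixels wherever the instrument-detection mask fires, so instruments are
# never hidden behind the superimposed model.

def composite(surgical, overlay, instrument_mask):
    """Per-pixel blend: model overlay everywhere except detected instruments."""
    return [
        [surgical[r][c] if instrument_mask[r][c] else overlay[r][c]
         for c in range(len(surgical[0]))]
        for r in range(len(surgical))
    ]

frame = [["S", "S"], ["S", "S"]]   # live surgical video ("S" pixels)
model = [["M", "M"], ["M", "M"]]   # rendered 3D model  ("M" pixels)
mask  = [[True, False], [False, True]]  # where an instrument was detected
print(composite(frame, model, mask))  # [['S', 'M'], ['M', 'S']]
```

In the deployed system this per-frame decision must keep up with the video stream, which is why the 0.5-s processing delay mentioned above is the main target for optimization.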


Subjects
Augmented Reality, Deep Learning, Robotic Surgical Procedures, Robotics, Surgery, Computer-Assisted, Humans, Robotic Surgical Procedures/methods, Surgery, Computer-Assisted/methods, Imaging, Three-Dimensional/methods