Results 1 - 3 of 3
1.
Med Image Anal; 86: 102770, 2023 May.
Article in English | MEDLINE | ID: mdl-36889206

ABSTRACT

PURPOSE: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve the training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill.

METHODS: To achieve this goal, a dataset of 33 laparoscopic cholecystectomy videos from three surgical centers, with a total operation time of 22 h, was created. Labels included frame-wise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision Challenge, sub-challenge for surgical workflow and skill analysis, in which 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument, and/or skill assessment.

RESULTS: F1-scores ranged from 23.9% to 67.7% for phase recognition (n = 9 teams) and from 38.5% to 63.8% for instrument presence detection (n = 8 teams), but only from 21.8% to 23.3% for action recognition (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team).

CONCLUSION: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.
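The F1-scores reported for phase recognition are computed frame-wise over the annotated phases. A minimal sketch of a macro-averaged, frame-wise F1 is shown below; this is a generic formulation for illustration, not necessarily the challenge's exact evaluation protocol:

```python
import numpy as np

def framewise_f1(y_true, y_pred, n_phases):
    """Macro-averaged F1 over phase labels, computed frame-wise."""
    f1s = []
    for p in range(n_phases):
        tp = np.sum((y_pred == p) & (y_true == p))  # correctly labeled frames
        fp = np.sum((y_pred == p) & (y_true != p))  # frames wrongly assigned to p
        fn = np.sum((y_pred != p) & (y_true == p))  # frames of p that were missed
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(f1s))

# Toy sequence of 6 frames over 3 phases.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(round(framewise_f1(y_true, y_pred, 3), 3))  # 0.656
```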


Subjects
Artificial Intelligence, Benchmarking, Humans, Workflow, Algorithms, Machine Learning
2.
J Digit Imaging; 36(1): 153-163, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36271210

ABSTRACT

We have developed an MRI-safe needle guidance toolkit for MRI-guided interventions, intended to enable accurate positioning for needle-based procedures. The toolkit allows intuitive and accurate needle angulation and entry point positioning according to an MRI-based plan, using a flexible, patterned silicone 2D grid. The toolkit automatically matches the grid on MRI planning images with a physical silicone grid placed conformally on the patient's skin, and provides the interventional radiologist an easy-to-use guide showing the needle entry point on the silicone grid as well as needle angle information. The radiologist can use this guide along with a 2-degree-of-freedom (rotation and angulation relative to the entry point) hand-held needle guide to place the needle into the anatomy of interest. The initial application that we are considering for this toolkit is arthrography, a diagnostic procedure to evaluate the joint space condition; however, the toolkit could be used for any needle-based percutaneous procedure, such as MRI-guided biopsy and facet joint injection. For matching the images, we adopt a transformation parameter estimation technique using the phase-only correlation method in the frequency domain. We investigated the robustness of this method against rotation, displacement, and Rician noise. The algorithm was able to successfully match all the dataset images. We also investigated the accuracy of identifying the entry point from registered template images as a prerequisite for a future targeting study. Application of the template matching algorithm to locate the needle entry points within the MRI dataset resulted in an average entry point location estimation accuracy of 0.12 ± 0.2 mm. This promising result motivates a more detailed assessment of this algorithm in the future, including a targeting study on a silicone phantom with embedded plastic targets to investigate the end-to-end accuracy of this automatic template matching algorithm in the interventional MRI room.
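The matching step described above rests on phase-only correlation: normalizing the cross-power spectrum to unit magnitude keeps only phase information, so the inverse transform collapses to a sharp peak at the relative displacement. A minimal NumPy sketch of the translational case (illustrative only; the toolkit's actual implementation also handles rotation, which this sketch omits):

```python
import numpy as np

def phase_only_correlation(a, b):
    """Return the circular shift of b relative to a and the POC surface."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    # Normalize the cross-power spectrum to unit magnitude: only phase
    # differences remain, which makes the peak robust to intensity changes.
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12
    poc = np.fft.ifft2(cross).real
    # The peak location is the displacement between the two images.
    peak = np.unravel_index(np.argmax(poc), poc.shape)
    return tuple(int(i) for i in peak), poc

# Example: displace an image and recover the shift.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, 9), axis=(0, 1))
shift, _ = phase_only_correlation(shifted, img)
print(shift)  # (5, 9)
```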


Subjects
Magnetic Resonance Imaging, Silicon, Humans, Magnetic Resonance Imaging/methods, Needles, Algorithms, Image-Guided Biopsy/methods, Phantoms, Imaging
3.
Healthc Technol Lett; 6(6): 231-236, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038863

ABSTRACT

Surgical tool tracking has a variety of applications in different surgical scenarios. Electromagnetic (EM) tracking can be utilised for tool tracking, but its accuracy is often limited by magnetic interference. Vision-based methods have also been suggested; however, tracking robustness is limited by specular reflection, occlusions, and blurriness observed in the endoscopic image. Recently, deep learning-based methods have shown competitive performance on segmentation and tracking of surgical tools. The main bottleneck of these methods lies in acquiring a sufficient amount of pixel-wise annotated training data, which demands substantial labour costs. To tackle this issue, the authors propose a weakly supervised method for surgical tool segmentation and tracking based on hybrid sensor systems. They first generate semantic labels using EM tracking and laparoscopic image processing concurrently. They then train a lightweight deep segmentation network to obtain a binary segmentation mask that enables tool tracking. To the authors' knowledge, the proposed method is the first to integrate EM tracking and laparoscopic image processing for generation of training labels. They demonstrate that their framework achieves accurate, automatic tool segmentation (i.e. without any manual labelling of the surgical tool to be tracked) and robust tool tracking in laparoscopic image sequences.
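The label-generation idea, fusing an EM-derived spatial prior with an appearance-based mask, might be sketched as below. The rectangular-corridor geometry, function names, and parameters here are hypothetical illustrations of the concept, not the authors' actual method:

```python
import numpy as np

def em_region_mask(tip_xy, shaft_dir, shape, width=20, length=120):
    """Rasterize a rectangular corridor along the projected tool shaft,
    starting at the EM-tracked tip (hypothetical geometry model)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    rel = np.stack([xs - tip_xy[0], ys - tip_xy[1]], axis=-1).astype(float)
    d = np.asarray(shaft_dir, float)
    d /= np.linalg.norm(d)
    along = rel @ d                          # distance along the shaft axis
    across = rel @ np.array([-d[1], d[0]])   # distance across the axis
    return (along >= 0) & (along <= length) & (np.abs(across) <= width / 2)

def fuse_pseudo_label(em_mask, appearance_mask):
    """Keep appearance detections only where the EM prior says the tool is,
    suppressing spurious responses (e.g. specular highlights) elsewhere."""
    return em_mask & appearance_mask

# Example: tool tip at (x=10, y=10) pointing right in a 32x64 frame.
m = em_region_mask((10, 10), (1, 0), (32, 64), width=4, length=20)
print(bool(m[10, 15]), bool(m[10, 40]))  # True False
```

Such pseudo-labels can then serve as the training targets for the lightweight binary segmentation network.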
