Results 1 - 2 of 2
1.
Int J Surg ; 109(10): 2962-2974, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37526099

ABSTRACT

BACKGROUND: Lack of anatomy recognition represents a clinically relevant risk in abdominal surgery. Machine learning (ML) methods can help identify visible patterns and risk structures; however, their practical value remains largely unclear.

MATERIALS AND METHODS: Based on a novel dataset of 13 195 laparoscopic images with pixel-wise segmentations of 11 anatomical structures, we developed specialized segmentation models for each structure and combined models for all anatomical structures using two state-of-the-art model architectures (DeepLabv3 and SegFormer), and compared the segmentation performance of the algorithms to a cohort of 28 physicians, medical students, and medical laypersons using the example of pancreas segmentation.

RESULTS: Mean Intersection-over-Union for semantic segmentation of intra-abdominal structures ranged from 0.28 to 0.83 and from 0.23 to 0.77 for the DeepLabv3-based structure-specific and combined models, and from 0.31 to 0.85 and from 0.26 to 0.67 for the SegFormer-based structure-specific and combined models, respectively. Both the structure-specific and the combined DeepLabv3-based models are capable of near-real-time operation, while the SegFormer-based models are not. All four models outperformed at least 26 out of 28 human participants in pancreas segmentation.

CONCLUSIONS: These results demonstrate that ML methods have the potential to provide relevant assistance in anatomy recognition in minimally invasive surgery in near real time. Future research should investigate the educational value and subsequent clinical impact of the respective assistance systems.


Subjects
Laparoscopy, Machine Learning, Humans, Algorithms, Image Processing, Computer-Assisted/methods
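The abstract reports performance as mean Intersection-over-Union (IoU) over pixel-wise segmentation masks. The authors' training and evaluation pipeline is not shown here; the following is a minimal illustrative sketch of how mean IoU can be computed from binary masks with NumPy (the function names `iou` and `mean_iou` are ours, not from the paper):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Pixel-wise Intersection-over-Union for one pair of binary masks."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Undefined when both masks are empty; return NaN so it can be skipped.
    return intersection / union if union > 0 else float("nan")

def mean_iou(preds, targets) -> float:
    """Mean IoU over a set of mask pairs, ignoring undefined (empty-union) cases."""
    scores = [iou(p, t) for p, t in zip(preds, targets)]
    scores = [s for s in scores if not np.isnan(s)]
    return float(np.mean(scores))
```

For example, a prediction covering two pixels of which one overlaps a one-pixel target yields an IoU of 1/2.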
2.
Eur J Surg Oncol ; : 106996, 2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37591704

ABSTRACT

INTRODUCTION: Complex oncological procedures pose various surgical challenges, including dissection in distinct tissue planes and preservation of vulnerable anatomical structures throughout different surgical phases. In rectal surgery, violation of dissection planes increases the risk of local recurrence and of autonomic nerve damage resulting in incontinence and sexual dysfunction. This work explores the feasibility of phase recognition and target structure segmentation in robot-assisted rectal resection (RARR) using machine learning.

MATERIALS AND METHODS: A total of 57 RARR were recorded, and subsets of these were annotated with respect to surgical phases and exact locations of target structures (anatomical structures, tissue types, static structures, and dissection areas). For surgical phase recognition, three machine learning models were trained: LSTM, MSTCN, and Trans-SVNet. Based on pixel-wise annotations of target structures in 9037 images, individual segmentation models based on DeepLabv3 were trained. Model performance was evaluated using F1 score, Intersection-over-Union (IoU), accuracy, precision, recall, and specificity.

RESULTS: The best results for phase recognition were achieved with the MSTCN model (F1 score: 0.82 ± 0.01, accuracy: 0.84 ± 0.03). Mean IoUs for target structure segmentation ranged from 0.14 ± 0.22 to 0.80 ± 0.14 for organs and tissue types and from 0.11 ± 0.11 to 0.44 ± 0.30 for dissection areas. Image quality, distorting factors (e.g. blood, smoke), and technical challenges (e.g. lack of depth perception) considerably impacted segmentation performance.

CONCLUSION: Machine learning-based phase recognition and segmentation of selected target structures are feasible in RARR. In the future, such functionalities could be integrated into a context-aware surgical guidance system for rectal surgery.
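Phase recognition here is evaluated with frame-wise accuracy and F1 score over predicted surgical phase labels. The paper does not specify its exact averaging scheme; as one common convention, the sketch below computes frame-wise accuracy and a macro-averaged F1 over phase classes (the function `phase_metrics` is our illustrative name, not from the paper):

```python
def phase_metrics(true_phases, pred_phases):
    """Frame-wise accuracy and macro-averaged F1 over surgical phase labels."""
    assert len(true_phases) == len(pred_phases)
    pairs = list(zip(true_phases, pred_phases))
    accuracy = sum(t == p for t, p in pairs) / len(pairs)
    f1_scores = []
    for phase in sorted(set(true_phases) | set(pred_phases)):
        tp = sum(t == phase and p == phase for t, p in pairs)
        fp = sum(t != phase and p == phase for t, p in pairs)
        fn = sum(t == phase and p != phase for t, p in pairs)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    return accuracy, sum(f1_scores) / len(f1_scores)
```

Macro-averaging weights every phase equally, which penalizes models that only get the long phases right, a relevant property when short transitional phases matter clinically.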
