1.
Int J Med Robot ; : e2595, 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37932905

ABSTRACT

BACKGROUND: In robot-assisted surgery, automatic segmentation of surgical instrument images is crucial for surgical safety. The proposed method addresses challenges of the craniotomy environment, such as occlusion and illumination changes, with an efficient surgical instrument segmentation network. METHODS: The network uses YOLOv8 as the target detection framework and integrates a semantic segmentation head so that detection and segmentation are performed jointly. A concatenation of multi-channel feature maps fuses deep and shallow features to improve model generalisation. The GBC2f module keeps the network lightweight while capturing global information. RESULTS: Experimental validation on the intracranial glioma surgical instrument dataset shows excellent performance: a 94.9% MPA score, an 89.9% MIoU value, and 126.6 FPS. CONCLUSIONS: According to the experimental results, the proposed segmentation model has significant advantages over other state-of-the-art models and provides a valuable reference for the further development of intelligent surgical robots.
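The abstract gives no code, so the following is a minimal, hypothetical sketch of the core idea it describes: concatenating an upsampled deep feature map with a shallow one and feeding the fused result to a small semantic segmentation head. The class name FusionSegHead, the channel sizes, and the layer choices are illustrative assumptions, not the paper's GBC2f module or its actual head.

```python
# Hypothetical sketch (not the paper's implementation): fuse a deep and a
# shallow backbone feature map by channel concatenation, then predict a
# per-pixel instrument mask with a small segmentation head.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionSegHead(nn.Module):
    """Upsample the deep feature map, concatenate it with the shallow one,
    and emit segmentation logits. Channel sizes are illustrative."""

    def __init__(self, deep_ch: int, shallow_ch: int, num_classes: int = 2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(deep_ch + shallow_ch, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128),
            nn.SiLU(inplace=True),
        )
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, deep: torch.Tensor, shallow: torch.Tensor) -> torch.Tensor:
        # Bring the low-resolution deep map up to the shallow map's size.
        deep_up = F.interpolate(deep, size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([deep_up, shallow], dim=1))
        return self.classifier(fused)  # logits at the shallow feature resolution


if __name__ == "__main__":
    head = FusionSegHead(deep_ch=256, shallow_ch=64)
    deep = torch.randn(1, 256, 20, 20)    # deep (coarse) feature
    shallow = torch.randn(1, 64, 80, 80)  # shallow (fine) feature
    print(head(deep, shallow).shape)      # torch.Size([1, 2, 80, 80])
```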

2.
Comput Biol Med ; 166: 107565, 2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37839219

ABSTRACT

In robot-assisted surgery, precise surgical instrument segmentation can provide accurate location and pose data for surgeons, helping them perform surgical operations efficiently and safely. However, several interfering factors remain, such as instruments being covered by tissue, multiple instruments interlacing with each other, and instrument shaking during surgery. To address these issues, an effective surgical instrument segmentation network called InstrumentNet is proposed, which adopts YOLOv7 as the object detection framework to achieve real-time detection. Specifically, a multiscale feature fusion network is constructed to avoid feature redundancy and feature loss and to enhance generalization ability. Furthermore, an adaptive feature-weighted fusion mechanism is introduced to regulate network learning and convergence. Finally, a semantic segmentation head integrates the detection and segmentation functions, and a multitask learning loss function is designed to optimize surgical instrument segmentation performance. The proposed model is validated on a dataset of intracranial surgical instruments provided by seven experts from Beijing Tiantan Hospital and achieves an mAP score of 93.5%, a Dice score of 82.49%, and an MIoU score of 85.48%, demonstrating its generality and superiority. The experimental results show that the proposed model achieves good segmentation performance on surgical instruments compared with other advanced models and can serve as a reference for developing intelligent medical robots.
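As a rough illustration of two mechanisms this abstract names, the sketch below shows one common reading of adaptive feature-weighted fusion (learnable, softmax-normalized scalar weights blending resolution-aligned feature maps) and a simple multitask loss that adds a weighted segmentation cross-entropy term to a detection loss. All names, weights, and shapes are assumptions; the paper's exact formulations are not reproduced here.

```python
# Hypothetical sketch, not InstrumentNet's actual code: adaptive weighted
# fusion of feature maps plus a simple detection + segmentation loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveWeightedFusion(nn.Module):
    """Blend N same-shaped feature maps with learned, softmax-normalized weights."""

    def __init__(self, num_inputs: int):
        super().__init__()
        # Zeros give equal weights at initialization after the softmax.
        self.weights = nn.Parameter(torch.zeros(num_inputs))

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        w = torch.softmax(self.weights, dim=0)
        return sum(wi * fi for wi, fi in zip(w, feats))


def multitask_loss(det_loss: torch.Tensor, seg_logits: torch.Tensor,
                   seg_target: torch.Tensor, seg_weight: float = 1.0) -> torch.Tensor:
    """Combine a detection loss with a cross-entropy segmentation loss.
    The weighting scheme is an assumption, not the paper's formulation."""
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    return det_loss + seg_weight * seg_loss


if __name__ == "__main__":
    fusion = AdaptiveWeightedFusion(num_inputs=3)
    feats = [torch.randn(1, 64, 40, 40) for _ in range(3)]
    fused = fusion(feats)                              # weighted blend
    logits = torch.randn(1, 2, 40, 40, requires_grad=True)
    target = torch.randint(0, 2, (1, 40, 40))          # per-pixel class labels
    loss = multitask_loss(torch.tensor(0.5), logits, target)
    print(fused.shape, loss.item())
```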
