Results 1 - 8 of 8
1.
Article in English | MEDLINE | ID: mdl-38771418

ABSTRACT

PURPOSE: Intraoperative reconstruction of endoscopic scenes is a key technology for surgical navigation systems. The accuracy and efficiency of 3D reconstruction directly determine the effectiveness of navigation systems in a variety of clinical applications. While current deformable SLAM algorithms can meet real-time requirements, their underlying reliance on regular templates still makes it difficult to efficiently capture abrupt geometric features within scenes, such as organ contours and surgical margins. METHODS: We propose a novel real-time monocular deformable SLAM algorithm with a geometrically adapted template. To ensure real-time performance, the proposed algorithm consists of two threads: a deformation mapping thread that updates the template at keyframe rate, and a deformation tracking thread that estimates the camera pose and the deformation at frame rate. To capture geometric features more efficiently, the algorithm first detects salient edge features using a pre-trained contour detection network and then constructs the template through a triangulation method guided by the salient features. RESULTS: We thoroughly evaluated this method on the Mandala and Hamlyn datasets in terms of accuracy and performance. The results demonstrate that the proposed method achieves better accuracy, with a 0.75-7.95% improvement, and consistent effectiveness in data association compared with the closest method. CONCLUSION: This study verified that an adaptive template improves the reconstruction of dynamic laparoscopic scenes with abrupt geometric features. However, further exploration is needed for applications in laparoscopic surgery with incisal margins caused by surgical instruments. This research is a crucial step toward enhanced automatic computer-assisted navigation in laparoscopic surgery. Code is available at https://github.com/Tang257/SLAM-with-geometrically-adapted-template.
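To make the template-construction idea concrete, the sketch below (a minimal illustration, not the authors' released code; the function name, thresholds, and grid spacing are assumptions) samples points densely along a detected contour map and sparsely on a regular grid, then triangulates them so that triangle edges can follow abrupt geometry:

```python
# Hypothetical sketch of a geometry-adapted template: triangulate a mix of
# sparse grid points and points sampled densely along detected salient edges.
import numpy as np
from scipy.spatial import Delaunay

def build_adapted_template(edge_prob, grid_step=40, edge_thresh=0.5, edge_stride=8):
    """edge_prob: (H, W) map from a pre-trained contour detection network."""
    h, w = edge_prob.shape
    # Sparse regular grid keeps the template well conditioned in smooth regions.
    ys, xs = np.mgrid[0:h:grid_step, 0:w:grid_step]
    grid_pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    # Dense samples along salient edges let triangle edges follow abrupt geometry.
    ey, ex = np.nonzero(edge_prob > edge_thresh)
    edge_pts = np.stack([ex, ey], axis=1)[::edge_stride].astype(float)
    pts = np.vstack([grid_pts, edge_pts])
    tri = Delaunay(pts)           # 2D template; the mapping thread would lift it to 3D
    return pts, tri.simplices     # vertices and triangle indices

# Example: a synthetic edge map with a single vertical contour.
edge_map = np.zeros((480, 640)); edge_map[:, 320] = 1.0
vertices, faces = build_adapted_template(edge_map)
print(vertices.shape, faces.shape)
```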

2.
BMC Med Imaging ; 23(1): 91, 2023 07 08.
Article in English | MEDLINE | ID: mdl-37422639

ABSTRACT

PURPOSE: Segmentation of liver vessels from CT images is indispensable for surgical planning and has attracted broad interest in the medical image analysis community. Due to the complex structure and low-contrast background, automatic liver vessel segmentation remains particularly challenging. Most related studies adopt FCN, U-net, and V-net variants as a backbone. However, these methods mainly focus on capturing multi-scale local features, which may produce misclassified voxels because of the convolutional operator's limited local receptive field. METHODS: We propose a robust end-to-end vessel segmentation network called Inductive BIased Multi-Head Attention Vessel Net (IBIMHAV-Net) by expanding the Swin Transformer to 3D and employing an effective combination of convolution and self-attention. In practice, we introduce voxel-wise embedding rather than patch-wise embedding to locate precise liver vessel voxels, and adopt multi-scale convolutional operators to gain local spatial information. In addition, we propose inductive biased multi-head self-attention, which learns inductively biased relative positional embeddings from initialized absolute position embeddings, yielding more reliable query and key matrices. RESULTS: We conducted experiments on the 3DIRCADb dataset. The average Dice and sensitivity of the four tested cases were 74.8% and 77.5%, which exceed the results of existing deep learning methods and an improved graph-cuts method. The Branches Detected (BD) and Tree-length Detected (TD) indices also confirmed better global/local feature capture than other methods. CONCLUSION: The proposed IBIMHAV-Net provides automatic, accurate 3D liver vessel segmentation with an interleaved architecture that better utilizes both global and local spatial features in CT volumes. It can be further extended to other clinical data.
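As a rough illustration of attention with a learned relative positional bias over a 3D voxel window (a Swin-style sketch; the paper's inductive biased attention initialized from absolute embeddings may differ in detail, and all names here are assumptions):

```python
# Hedged sketch: multi-head self-attention with a learnable relative position
# bias over a 3D voxel window; not the authors' exact formulation.
import torch
import torch.nn as nn

class RelPosSelfAttention3D(nn.Module):
    def __init__(self, dim, num_heads, window=(4, 4, 4)):
        super().__init__()
        self.num_heads, self.scale = num_heads, (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        d, h, w = window
        # One learnable bias per relative offset per head (the inductive bias).
        self.bias = nn.Parameter(torch.zeros((2*d-1)*(2*h-1)*(2*w-1), num_heads))
        coords = torch.stack(torch.meshgrid(
            torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij"), 0).flatten(1)
        rel = coords[:, :, None] - coords[:, None, :]            # (3, N, N)
        rel = rel + torch.tensor([d-1, h-1, w-1]).view(3, 1, 1)
        idx = rel[0]*(2*h-1)*(2*w-1) + rel[1]*(2*w-1) + rel[2]   # flatten 3D offset
        self.register_buffer("idx", idx)                          # (N, N)

    def forward(self, x):                  # x: (B, N, C) with N = d*h*w voxels
        B, N, C = x.shape
        q, k, v = self.qkv(x).view(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn + self.bias[self.idx].permute(2, 0, 1)        # add relative bias
        return self.proj((attn.softmax(-1) @ v).transpose(1, 2).reshape(B, N, C))
```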


Subject(s)
Head, Liver, Humans, Liver/diagnostic imaging, Attention, Image Processing, Computer-Assisted/methods
3.
Int J Surg ; 109(4): 821-828, 2023 Apr 01.
Article in English | MEDLINE | ID: mdl-37026828

ABSTRACT

BACKGROUND: Indocyanine green (ICG) fluorescence imaging is a new technology that can improve the real-time localization of tumor edges and small nodules during surgery. However, no study has investigated its application in laparoscopic insulinoma enucleation. This study aimed to evaluate the feasibility and accuracy of this method for intraoperative localization of insulinomas and margin assessment during laparoscopic insulinoma enucleation. MATERIALS AND METHODS: Eight patients who underwent laparoscopic insulinoma enucleation from October 2016 to June 2022 were enrolled. Two methods of ICG administration, ICG dynamic perfusion and three-dimensional (3D) demarcation staining, were used during laparoscopic insulinoma enucleation. The tumor-to-background ratio (TBR) and histopathologic analysis were used to evaluate the feasibility and accuracy of these novel navigation methods. RESULTS: All eight enrolled patients underwent both ICG dynamic perfusion and 3D demarcation staining. ICG dynamic perfusion images were available for six of them; in five, the tumor could be recognized by TBR (largest TBR in each case 4.42±2.76), while in the remaining case it could be distinguished by the disordered blood vessels in the tumor area. Seven of the eight specimens had successful 3D demarcation staining (TBR 7.62±2.62). All wound bed margins had negative frozen sections and final histopathologic diagnoses. CONCLUSIONS: ICG dynamic perfusion may be helpful for observing the abnormal vascular perfusion of tumors, providing functionality similar to intraoperative real-time angiography. ICG injection under the tumor pseudocapsule may be a useful method for acquiring a real-time, 3D demarcation for the resection of insulinomas.
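For readers unfamiliar with the TBR metric used above, the following minimal sketch computes it as the mean fluorescence intensity inside a tumor region divided by that of a background region (a hypothetical helper, not the study's analysis pipeline; ROI masks would come from manual or automatic annotation):

```python
# Minimal sketch of a tumor-to-background ratio (TBR) computation on an ICG
# fluorescence frame.
import numpy as np

def tumor_to_background_ratio(fluorescence, tumor_mask, background_mask):
    """fluorescence: 2D intensity image; masks: boolean arrays of the same shape."""
    tumor_mean = fluorescence[tumor_mask].mean()
    background_mean = fluorescence[background_mask].mean()
    return tumor_mean / background_mean

# Example: a bright 20x20 tumor patch on a dim background.
frame = np.full((256, 256), 30.0)
frame[100:120, 100:120] = 140.0
tumor = np.zeros_like(frame, dtype=bool); tumor[100:120, 100:120] = True
print(round(tumor_to_background_ratio(frame, tumor, ~tumor), 2))  # ~4.67
```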


Subject(s)
Insulinoma, Laparoscopy, Pancreatic Neoplasms, Humans, Indocyanine Green, Insulinoma/diagnostic imaging, Insulinoma/surgery, Retrospective Studies, Cohort Studies, Pancreatic Neoplasms/diagnostic imaging, Pancreatic Neoplasms/surgery, Laparoscopy/methods, Optical Imaging/methods
4.
Vis Comput Ind Biomed Art ; 5(1): 15, 2022 Jun 07.
Article in English | MEDLINE | ID: mdl-35668216

ABSTRACT

Deep simulations have gained widespread attention owing to their excellent acceleration performance. However, these methods cannot provide effective collision detection and response strategies. We propose a deep interactive physical simulation framework that can effectively address tool-object collisions. The framework predicts dynamic information by taking the collision state into account. In particular, a graph neural network is chosen as the base model, and a collision-aware recursive regression module is introduced to update the network parameters recursively using interpenetration distances calculated from vertex-face and edge-edge tests. Additionally, a novel self-supervised collision term is introduced to provide a more compact collision response. This study extensively evaluates the proposed method and shows that it effectively reduces interpenetration artifacts while ensuring high simulation efficiency.
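As a hedged sketch of the kind of interpenetration measure that could drive such a collision term (this simplifies the vertex-face test to a signed distance against the triangle's plane and omits edge-edge checks; it is not the paper's exact formulation):

```python
# Simplified vertex-face interpenetration and a collision-style penalty.
import numpy as np

def vertex_face_penetration(vertices, tri):
    """vertices: (N, 3); tri: (3, 3) triangle with outward-facing normal."""
    a, b, c = tri
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)                  # unit outward normal
    signed = (vertices - a) @ n                # signed distance to triangle plane
    return np.clip(-signed, 0.0, None)         # penetration depth, 0 if outside

def collision_penalty(vertices, tool_triangles):
    """Self-supervised-style penalty: mean total penetration over tool faces."""
    return np.mean([vertex_face_penetration(vertices, t).sum() for t in tool_triangles])

verts = np.array([[0.0, 0.0, -0.1], [0.0, 0.0, 0.2]])      # one inside, one outside
tool = [np.array([[1.0, 0, 0], [0, 1.0, 0], [-1.0, -1.0, 0]])]
print(collision_penalty(verts, tool))  # 0.1: only the penetrating vertex contributes
```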

5.
Comput Methods Programs Biomed ; 219: 106749, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35334344

ABSTRACT

BACKGROUND AND OBJECTIVES: Soft-body cutting simulation is the core module of virtual surgical training systems. By making full use of the powerful computing resources of modern computers, existing methods already meet the needs of real-time interaction. However, they still lack high realism, mainly because most current methods follow the "Intersection-IS-Fracture" mode, in which a cutting fracture occurs whenever the cutting blade intersects the object. To model the real-life cutting phenomenon, which depends on a deformable object's fracture resistance, this paper presents a highly realistic virtual cutting simulation algorithm built on an energy-based cutting fracture evolution model. METHODS: We design the framework on a co-rotational linear FEM model to support large deformations of soft objects, and adopt the composite finite element method (CFEM) to balance simulation accuracy and efficiency. A cutting-plane-constrained Griffith energy minimization scheme is then proposed to determine when and how to generate a new cut. Moreover, to provide the contact effect before fracture occurs, we design a material-aware adaptation scheme that guarantees indentation consistent with the cutting tool blade and visually plausible indentation-induced deformation while avoiding large computational effort. RESULTS AND CONCLUSION: The experimental results demonstrate that the proposed algorithm can generate highly realistic cutting simulations for objects with various materials and geometric characteristics while introducing negligible computational cost. Moreover, for different blade shapes, the proposed algorithm produces highly consistent indentation and fracture. Qualitative evaluation and performance analysis indicate the versatility of the proposed algorithm.
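The core idea of a Griffith-style fracture decision can be illustrated with a few lines (assumed numbers and function names, not the paper's CFEM formulation): a new cut is created only when the elastic energy released by separating along the blade plane exceeds the fracture energy required to create the new crack surface.

```python
# Illustrative Griffith-style fracture criterion.
def should_fracture(released_elastic_energy, new_crack_area, fracture_toughness):
    """fracture_toughness: fracture energy per unit area (J/m^2) of the material."""
    fracture_energy = fracture_toughness * new_crack_area
    return released_elastic_energy >= fracture_energy

# Below the threshold the blade only indents the surface; above it, it cuts.
print(should_fracture(released_elastic_energy=0.002, new_crack_area=1e-4,
                      fracture_toughness=50.0))   # False -> indentation only
print(should_fracture(released_elastic_energy=0.020, new_crack_area=1e-4,
                      fracture_toughness=50.0))   # True  -> generate the cut
```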


Subject(s)
Algorithms, User-Computer Interface, Computer Simulation, Linear Models
6.
Comput Med Imaging Graph ; 85: 101785, 2020 10.
Article in English | MEDLINE | ID: mdl-32898732

ABSTRACT

Accurate whole heart segmentation (WHS) of multi-modality medical images, including magnetic resonance imaging (MRI) and computed tomography (CT), plays an important role in many clinical applications, such as accurate preoperative diagnosis planning and intraoperative treatment. Because the shape information of each component of the whole heart is complementary across modalities, multi-modality features can be extracted and the final segmentation obtained by fusing MRI and CT images. In this paper, we propose a multi-modality transfer learning network with adversarial training (MMTLNet) for 3D multi-modality whole heart segmentation. First, the network transfers the source domain (MRI domain) to the target domain (CT domain) by reconstructing the MRI images with a generator network and optimizing the reconstructed MRI images with a discriminator network, which enables us to fuse the MRI and CT images and fully exploit the useful multi-modality information for the segmentation task. Second, to retain useful information and remove redundant information for accurate segmentation, we introduce a spatial attention mechanism into the backbone connections of the U-Net to optimize feature extraction between layers, and add a channel attention mechanism at the skip connections to optimize the information extracted from the low-level feature maps. Third, we propose a new loss function for the adversarial training by introducing a weighting coefficient that balances the Dice loss and the generator loss, which not only ensures that images are correctly transferred from the MRI domain to the CT domain but also achieves accurate segmentation in the transferred domain. We extensively evaluated our method on the data set of the multi-modality whole heart segmentation (MM-WHS) challenge held in conjunction with MICCAI 2017. The Dice values for whole heart segmentation are 0.914 (CT images) and 0.890 (MRI images), both higher than the state of the art.
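A minimal sketch of such a weighted objective is shown below (the function names, the exact adversarial formulation, and the weight value are assumptions, not the paper's released code):

```python
# Weighted combination of a segmentation Dice loss and a generator adversarial loss.
import torch

def dice_loss(pred, target, eps=1e-6):
    """pred: (B, C, D, H, W) soft probabilities; target: one-hot of the same shape."""
    inter = (pred * target).sum(dim=(2, 3, 4))
    union = pred.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def generator_loss(disc_scores_on_fake):
    """Non-saturating GAN loss on discriminator logits for translated MRI->CT images."""
    return torch.nn.functional.binary_cross_entropy_with_logits(
        disc_scores_on_fake, torch.ones_like(disc_scores_on_fake))

def combined_loss(pred, target, disc_scores_on_fake, weight=0.7):
    # weight distributes the proportion between segmentation and adversarial terms.
    return weight * dice_loss(pred, target) + (1 - weight) * generator_loss(disc_scores_on_fake)
```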


Subject(s)
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Machine Learning, Tomography, X-Ray Computed
7.
Front Genet ; 10: 1110, 2019.
Article in English | MEDLINE | ID: mdl-31827487

ABSTRACT

It is challenging to automatically and accurately segment the liver and tumors in computed tomography (CT) images, as over-segmentation or under-segmentation often occurs when the Hounsfield unit (HU) values of the liver and tumors are close to those of other tissues or the background. In this paper, we propose spatial channel-wise convolution, a convolutional operation along the channel direction of feature maps, to extract the mapping relationships of spatial information between pixels, which facilitates learning these relationships within the feature maps and distinguishing tumors from liver tissue. In addition, we put forward an iterative extending learning strategy, which optimizes the mapping relationships of spatial information between pixels at different scales and enables spatial channel-wise convolution to map the spatial information between pixels in high-level feature maps. Finally, we propose an end-to-end convolutional neural network called Channel-UNet, which takes UNet as the main structure of the network and adds spatial channel-wise convolution in each up-sampling and down-sampling module. The network merges the optimized spatial mapping relationships extracted by spatial channel-wise convolution with the information extracted by the feature maps, realizing multi-scale information fusion. The proposed Channel-UNet is validated on the segmentation task of the 3Dircadb dataset. The Dice values for liver and tumor segmentation were 0.984 and 0.940, which are slightly superior to the current best performance. In addition, compared with the current best method, our method reduces the number of parameters by 25.7% and the training time by 33.3%. The experimental results demonstrate the efficiency and high accuracy of Channel-UNet for liver and tumor segmentation in CT images.
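One possible reading of "convolution along the channel direction" is to treat the channel axis as an extra spatial dimension and slide a small kernel over it; the hedged sketch below follows that reading (the authors' exact operator may differ, and the class name is an assumption):

```python
# A sketch of a channel-direction convolution: convolve over (C, H, W) by
# treating the channel dimension as a third spatial dimension.
import torch
import torch.nn as nn

class SpatialChannelwiseConv(nn.Module):
    def __init__(self, kernel=(3, 3, 3)):
        super().__init__()
        pad = tuple(k // 2 for k in kernel)
        # A single 3D filter convolved over (C, H, W) of each feature map.
        self.conv = nn.Conv3d(1, 1, kernel_size=kernel, padding=pad)

    def forward(self, x):              # x: (N, C, H, W)
        y = self.conv(x.unsqueeze(1))  # (N, 1, C, H, W) -> kernel slides along channels too
        return y.squeeze(1)            # back to (N, C, H, W)

feat = torch.randn(2, 64, 32, 32)
print(SpatialChannelwiseConv()(feat).shape)   # torch.Size([2, 64, 32, 32])
```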

8.
Vis Comput Ind Biomed Art ; 2(1): 6, 2019 Jul 03.
Article in English | MEDLINE | ID: mdl-32240415

ABSTRACT

This paper presents a novel augmented reality (AR)-based neurosurgical training simulator that provides a natural way for surgeons to learn neurosurgical skills. Surgical simulation with bimanual haptic interaction is integrated into this work to provide a simulated environment in which users receive holographic guidance for pre-operative training. To achieve AR guidance, the simulator must precisely overlay the 3D anatomical information of the hidden target organs onto the patient, as in real surgery. To this end, patient-specific anatomical structures are reconstructed from segmented brain magnetic resonance imaging, and we propose a registration method for precise mapping between the virtual and real information. In addition, the simulator provides bimanual haptic interaction in a holographic environment to mimic real brain tumor resection. In this study, we conduct an AR-based guidance validation and a user study on the developed simulator, which demonstrate the high accuracy of our AR-based neurosurgery simulator, as well as the AR guidance mode's potential to improve neurosurgery by simplifying the operation, reducing its difficulty, shortening the operation time, and increasing its precision.
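A common building block for such virtual-to-real mapping is rigid point-based registration of corresponding fiducials; the sketch below uses the classical Kabsch/SVD solution (a minimal illustration under that assumption, not necessarily the paper's registration method):

```python
# Rigid registration (rotation R, translation t) between corresponding fiducials
# in the virtual (MRI-derived) space and the real (tracked) space.
import numpy as np

def rigid_register(src, dst):
    """src, dst: (N, 3) corresponding points. Returns R (3x3) and t (3,)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Example: recover a known rotation/translation from four fiducials.
rng = np.random.default_rng(0)
pts = rng.random((4, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
R_est, t_est = rigid_register(pts, pts @ R_true.T + np.array([1.0, 2.0, 3.0]))
print(np.allclose(R_est, R_true), np.allclose(t_est, [1.0, 2.0, 3.0]))  # True True
```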
