Results 1 - 20 of 76
1.
Article in English | MEDLINE | ID: mdl-38837936

ABSTRACT

Medical image segmentation and registration are two fundamental and closely related tasks. However, current works focus on the mutual promotion between the two only at the loss-function level, ignoring the feature information generated by the encoder-decoder network during task-specific feature mapping and the potential inter-task feature relationships. This paper proposes a unified multi-task joint learning framework based on bi-fusion of structure and deformation at multiple scales, called BFM-Net, which produces the segmentation results and the deformation field simultaneously in a single-step estimation. BFM-Net consists of a segmentation subnetwork (SegNet), a registration subnetwork (RegNet), and a multi-task connection module (MTC). The MTC module transfers latent feature representations between segmentation and registration at multiple scales and links the tasks at the network architecture level; it comprises a spatial attention fusion module (SAF), a multi-scale spatial attention fusion module (MSAF), and a velocity field fusion module (VFF). Extensive experiments on MR, CT, and ultrasound images demonstrate the effectiveness of our approach. The MTC module increases the Dice scores of segmentation and registration by 3.2%, 1.6%, 2.2% and 6.2%, 4.5%, 3.0%, respectively. Compared with six state-of-the-art segmentation and registration algorithms, BFM-Net achieves superior performance on images of various modalities, demonstrating its effectiveness and generalization.
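
As a point of reference for the Dice scores reported above, a minimal sketch of how the Dice coefficient is typically computed for a binary segmentation (not code from the BFM-Net paper; array names and shapes are illustrative):

    import numpy as np

    def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
        """Dice coefficient between two binary masks of the same shape."""
        pred = pred.astype(bool)
        target = target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

    # Example: compare a predicted mask against a ground-truth mask.
    pred = np.zeros((64, 64), dtype=np.uint8); pred[16:48, 16:48] = 1
    gt = np.zeros((64, 64), dtype=np.uint8); gt[20:52, 20:52] = 1
    print(f"Dice: {dice_score(pred, gt):.3f}")
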

2.
IEEE Trans Med Imaging ; PP, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38805326

ABSTRACT

Accurately reconstructing 4D critical organs contributes to visual guidance in X-ray image-guided interventional operations. Current methods estimate intraoperative dynamic meshes by refining a static initial organ mesh using the semantic information in single-frame X-ray images. However, these methods fall short of reconstructing an accurate and smooth organ sequence because the initial mesh and the X-ray images follow distinct respiratory patterns. To overcome this limitation, we propose a novel dual-stage complementary 4D organ reconstruction (DSC-Recon) model for recovering dynamic organ meshes by utilizing preoperative and intraoperative data with different respiratory patterns. DSC-Recon is structured as a dual-stage framework: 1) The first stage introduces a flexible interpolation network applicable to multiple respiratory patterns, which can generate dynamic shape sequences between any pair of preoperative 3D meshes segmented from CT scans. 2) In the second stage, we present a deformation network that takes the generated dynamic shape sequence as the initial prior and exploits discriminative features (i.e., target organ areas and meaningful motion information) in the intraoperative X-ray images, predicting the deformed mesh through a designed feature mapping pipeline integrated into the initialized shape refinement process. Experiments on simulated and clinical datasets demonstrate the superiority of our method over state-of-the-art methods in both quantitative and qualitative aspects.
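
To illustrate the kind of shape interpolation the first stage performs, a minimal sketch that linearly blends vertex positions between two preoperative meshes with shared topology; this is a simplification of the paper's learned interpolation network, and all variable names are assumptions:

    import numpy as np

    def interpolate_mesh_sequence(verts_a: np.ndarray, verts_b: np.ndarray, n_frames: int) -> np.ndarray:
        """Linearly interpolate vertex positions (N, 3) between two respiratory phases.

        Assumes both meshes share vertex ordering and connectivity; returns (n_frames, N, 3).
        """
        ts = np.linspace(0.0, 1.0, n_frames)[:, None, None]
        return (1.0 - ts) * verts_a[None] + ts * verts_b[None]

    # Example: a 10-frame sequence between end-exhale and end-inhale organ meshes.
    end_exhale = np.random.rand(500, 3)
    end_inhale = end_exhale + np.array([0.0, 0.0, 8.0])  # crude superior shift
    sequence = interpolate_mesh_sequence(end_exhale, end_inhale, n_frames=10)
    print(sequence.shape)  # (10, 500, 3)
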

3.
Comput Methods Programs Biomed ; 248: 108108, 2024 May.
Article in English | MEDLINE | ID: mdl-38461712

ABSTRACT

BACKGROUND: Existing face-matching methods require a point cloud to be acquired directly on the patient's face for registration; irregular deformation of the skin introduces many outlier points into this cloud, resulting in low registration accuracy. METHODS: This work proposes a non-contact pose estimation method based on similarity aspect graph hierarchical optimization. The proposed method constructs a distance-weighted and triangular-constrained similarity measure to describe the similarity between views by automatically identifying the 2D and 3D feature points of the face. A mutual similarity clustering method is proposed to construct a hierarchical aspect graph with 3D poses as nodes. A Monte Carlo tree search strategy is used to search the hierarchical aspect graph for the optimal pose of the facial 3D model, so as to achieve accurate registration of the facial 3D model to the real face. RESULTS: The proposed method was used to conduct accuracy verification experiments on phantoms and volunteers, and was compared with four advanced pose calibration methods. The proposed method obtained average fusion errors of 1.13 ± 0.20 mm and 0.92 ± 0.08 mm in the head phantom and volunteer experiments, respectively, exhibiting the best fusion performance among all comparison methods. CONCLUSIONS: Our experiments proved the effectiveness of the proposed pose estimation method in facial augmented reality.


Subject(s)
Algorithms, Augmented Reality, Humans, Imaging, Three-Dimensional/methods
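
The Monte Carlo tree search over the hierarchical aspect graph needs a node-selection rule; a minimal sketch of the standard UCT score such a search typically uses (the paper's exact reward definition and exploration constant are not given in the abstract, so these are assumptions):

    import math

    def uct_score(total_reward: float, visits: int, parent_visits: int, c: float = 1.4) -> float:
        """Upper Confidence Bound for Trees: exploitation term plus exploration bonus."""
        if visits == 0:
            return float("inf")  # always expand unvisited pose nodes first
        return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

    # Example: pick the child pose node with the highest UCT score.
    children = [  # (accumulated similarity reward, visit count)
        (4.2, 6),
        (2.9, 3),
        (0.0, 0),
    ]
    parent_visits = sum(v for _, v in children)
    best = max(range(len(children)), key=lambda i: uct_score(*children[i], parent_visits))
    print("select child", best)
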
4.
Comput Biol Med ; 171: 108176, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38401453

ABSTRACT

The segmentation of the orbit in computed tomography (CT) images plays a crucial role in the quantitative analysis of orbital decompression surgery for patients with thyroid-associated ophthalmopathy (TAO). However, orbit segmentation, particularly in postoperative images, remains challenging due to significant shape variation and a limited amount of labeled data. In this paper, we present a two-stage semi-supervised framework for the automatic segmentation of the orbit in both preoperative and postoperative images, which consists of a pseudo-label generation stage and a semi-supervised segmentation stage. A Paired Copy-Paste strategy is introduced to combine features extracted from preoperative and postoperative images, strengthening the network's ability to discern changes in orbital boundaries. More specifically, we employ a random cropping technique to transfer regions from labeled preoperative images (foreground) onto unlabeled postoperative images (background), as well as from unlabeled preoperative images (foreground) onto labeled postoperative images (background); each preoperative/postoperative image pair belongs to the same patient. The semi-supervised segmentation network (stage 2) processes the two mixed images using a combination of supervisory signals from pseudo labels (stage 1) and ground truth. The proposed method was trained and tested on a CT dataset obtained from the Eye Hospital of Wenzhou Medical University. The experimental results demonstrate that the proposed method achieves a mean Dice similarity coefficient (DSC) of 91.92% with only 5% labeled data, surpassing the performance of the current state-of-the-art method by 2.4%.


Subject(s)
Hospitals, Orbit, Humans, Orbit/diagnostic imaging, Orbit/surgery, Tomography, X-Ray Computed, Universities, Image Processing, Computer-Assisted
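
A minimal sketch of the random crop-and-paste operation described above, transferring a region from a foreground image onto a background image; the actual Paired Copy-Paste strategy also handles labels and patient pairing, and the shapes and names here are illustrative:

    import numpy as np

    def copy_paste(foreground: np.ndarray, background: np.ndarray, crop: int, rng=None) -> np.ndarray:
        """Copy a random crop x crop patch from `foreground` onto `background` at a random location."""
        rng = rng or np.random.default_rng()
        h, w = foreground.shape[:2]
        y, x = rng.integers(0, h - crop), rng.integers(0, w - crop)
        patch = foreground[y:y + crop, x:x + crop].copy()
        out = background.copy()
        ty, tx = rng.integers(0, h - crop), rng.integers(0, w - crop)
        out[ty:ty + crop, tx:tx + crop] = patch
        return out

    # Example: paste a preoperative region into a postoperative slice of the same patient.
    pre_slice = np.random.rand(256, 256).astype(np.float32)
    post_slice = np.random.rand(256, 256).astype(np.float32)
    mixed = copy_paste(pre_slice, post_slice, crop=96)
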
5.
Comput Biol Med ; 169: 107890, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38168646

ABSTRACT

Feature matching of monocular laparoscopic videos is crucial for visualization enhancement in computer-assisted surgery, and the keys to high-quality matching are accurate homography estimation and relative pose estimation, together with sufficient matches and fast computation. However, limited by monocular laparoscopic imaging characteristics such as highlight noise, motion blur, texture interference, and illumination variation, most existing feature matching methods struggle to produce high-quality matches efficiently and in sufficient number. To overcome these limitations, this paper presents a novel sequential coupling feature descriptor to extract and express multilevel feature maps efficiently, and a dual-correlate optimized coarse-fine strategy to establish dense matches at the coarse level and adjust pixel-wise matches at the fine level. First, a novel sequential coupling Swin Transformer layer is designed in the feature descriptor to learn and extract rich multilevel feature representations without increasing complexity. Then, a dual-correlate optimized coarse-fine strategy is proposed to match coarse feature sequences at low resolution, and the correlated fine feature sequences are optimized to refine pixel-wise matches based on coarse matching priors. Finally, the sequential coupling feature descriptor and dual-correlate optimization are merged into the Sequential Coupling Dual-Correlate Network (SeCo DC-Net) to produce high-quality matches. The evaluation is conducted on two public laparoscopic datasets, SCARED and EndoSLAM, and the experimental results show the proposed network outperforms state-of-the-art methods in homography estimation, relative pose estimation, reprojection error, number of matching pairs, and inference runtime. The source code is publicly available at https://github.com/Iheckzza/FeatureMatching.


Subject(s)
Laparoscopy, Surgery, Computer-Assisted, Learning, Software
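
Since homography estimation and reprojection error are among the reported evaluation quantities, a minimal sketch of fitting a homography from matched keypoints and measuring its reprojection error with OpenCV; this is not the SeCo DC-Net pipeline itself, and the point arrays are placeholders:

    import cv2
    import numpy as np

    # Matched keypoint coordinates from two laparoscopic frames (placeholder values).
    pts_src = np.random.rand(50, 1, 2).astype(np.float32) * 512
    pts_dst = pts_src + np.float32([3.0, -2.0])  # a small synthetic shift

    # Robustly estimate the homography with RANSAC.
    H, inlier_mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 3.0)

    # Reprojection error: distance between warped source points and their matches.
    projected = cv2.perspectiveTransform(pts_src, H)
    errors = np.linalg.norm(projected - pts_dst, axis=2).ravel()
    print(f"mean reprojection error: {errors.mean():.3f} px")
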
6.
IEEE Trans Biomed Eng ; 71(2): 700-711, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38241137

ABSTRACT

OBJECTIVE: Biliary interventional procedures require physicians to track the interventional instrument tip (Tip) precisely with X-ray imaging. However, Tip positioning relies heavily on the physician's experience because of the limitations of X-ray imaging and respiratory interference, which leads to biliary damage, prolonged operation time, and increased X-ray radiation. METHODS: We construct an augmented reality (AR) navigation system for biliary interventional procedures. It includes system calibration, respiratory motion correction, and fusion navigation. First, the magnetic and 3D computed tomography (CT) coordinates are aligned through system calibration. Second, a respiratory motion correction method based on manifold regularization is proposed to correct the misalignment of the two coordinate systems caused by respiratory motion. Third, the virtual biliary structures, liver, and Tip from CT are overlaid on the corresponding positions of the patient for dynamic virtual-real fusion. RESULTS: Our system was evaluated on phantoms and patients, achieving average alignment errors of 0.75 ± 0.17 mm and 2.79 ± 0.46 mm, respectively. The navigation experiments conducted on phantoms achieve an average Tip positioning error of 0.98 ± 0.15 mm and an average fusion error of 1.67 ± 0.34 mm after correction. CONCLUSION: Our system can automatically register the Tip to the corresponding location in CT and dynamically overlay the 3D virtual model on the patient to provide accurate and intuitive AR navigation. SIGNIFICANCE: This study demonstrates the clinical potential of our system for assisting physicians during biliary interventional procedures. Our system enables dynamic visualization of the virtual model on the patient, reducing reliance on contrast agents and X-ray usage.


Subject(s)
Augmented Reality, Surgery, Computer-Assisted, Humans, Imaging, Three-Dimensional, Liver, Phantoms, Imaging, Tomography, X-Ray Computed/methods, Surgery, Computer-Assisted/methods
7.
Comput Biol Med ; 168: 107718, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37988787

ABSTRACT

Fractional flow reserve (FFR) is considered the gold standard for diagnosing coronary myocardial ischemia. Existing 3D computational fluid dynamics (CFD) methods attempt to predict FFR noninvasively from coronary computed tomography angiography (CTA). However, the accuracy and efficiency of 3D CFD methods in coronary arteries are considerably limited. In this work, we introduce a multi-dimensional CFD framework that improves the accuracy of FFR prediction by estimating 0D patient-specific boundary conditions and increases efficiency by generating 3D initial conditions. The multi-dimensional CFD models comprise a 3D vascular model for coronary simulation, a 1D vascular model for iterative optimization, and a 0D vascular model for expressing the boundary conditions. To improve accuracy, we utilize clinical parameters to derive 0D patient-specific boundary conditions with an optimization algorithm. To improve efficiency, we evaluate the convergence state using the 1D vascular model and obtain convergence parameters to generate appropriate 3D initial conditions. The 0D patient-specific boundary conditions and the 3D initial conditions are used to predict FFR (FFRC). We conducted a retrospective study involving 40 patients (61 diseased vessels) with invasive FFR and corresponding CTA images. The results demonstrate that FFRC and invasive FFR have a strong linear correlation (r = 0.80, p < 0.001) and high consistency (mean difference: 0.014 ± 0.071). After applying the FFR cut-off value of 0.8, the accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of FFRC were 88.5%, 93.3%, 83.9%, 84.8%, and 92.9%, respectively. Compared with the conventional zero-initial-conditions method, our method improves prediction efficiency by 71.3% per case. Therefore, our multi-dimensional CFD framework significantly improves the accuracy and efficiency of FFR prediction.


Subject(s)
Coronary Artery Disease, Coronary Stenosis, Fractional Flow Reserve, Myocardial, Myocardial Ischemia, Humans, Retrospective Studies, Hydrodynamics, Coronary Angiography/methods, Coronary Artery Disease/diagnostic imaging, Predictive Value of Tests, Coronary Vessels/diagnostic imaging
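
To make the reported diagnostic statistics concrete, a minimal sketch of how accuracy, sensitivity, specificity, PPV, and NPV follow from a confusion matrix after applying the 0.8 FFR cut-off; the per-vessel values below are placeholders, not the study's data:

    import numpy as np

    def diagnostic_metrics(ffr_pred: np.ndarray, ffr_true: np.ndarray, cutoff: float = 0.8):
        """Classify vessels as ischemic (FFR <= cutoff) and compare prediction vs. invasive FFR."""
        pred_pos, true_pos = ffr_pred <= cutoff, ffr_true <= cutoff
        tp = np.sum(pred_pos & true_pos)
        tn = np.sum(~pred_pos & ~true_pos)
        fp = np.sum(pred_pos & ~true_pos)
        fn = np.sum(~pred_pos & true_pos)
        return {
            "accuracy": (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Example with synthetic per-vessel FFR values.
    ffr_c = np.array([0.72, 0.85, 0.79, 0.91, 0.65])
    ffr_invasive = np.array([0.70, 0.88, 0.82, 0.90, 0.63])
    print(diagnostic_metrics(ffr_c, ffr_invasive))
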
8.
Comput Biol Med ; 169: 107850, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38145602

ABSTRACT

BACKGROUND: Monocular depth estimation plays a fundamental role in clinical endoscopic surgery. However, the coherent illumination, smooth surfaces, and texture-less nature of endoscopy images present significant challenges to traditional depth estimation methods, and existing approaches struggle to perceive depth accurately in such settings. METHOD: To overcome these challenges, this paper proposes a novel multi-scale residual fusion method for estimating the depth of monocular endoscopy images. Specifically, we address the issue of coherent illumination by leveraging an image frequency-domain component space transformation, thereby enhancing the stability of the scene's light source. Moreover, we employ an image radiation intensity attenuation model to estimate the initial depth map. Finally, to refine the accuracy of depth estimation, we utilize a multi-scale residual fusion optimization technique. RESULTS: To evaluate the performance of our proposed method, extensive experiments were conducted on public datasets. The structural similarity measures for continuous frames in three distinct clinical data scenes reached 0.94, 0.82, and 0.84, respectively, demonstrating the effectiveness of our approach in capturing the intricate details of endoscopy images. Furthermore, the depth estimation accuracy reached 89.3% and 91.2% on the two models' data, respectively, underscoring the robustness of our method. CONCLUSIONS: Overall, the promising results obtained on public datasets highlight the significant potential of our method for clinical applications, facilitating reliable depth estimation and enhancing the quality of endoscopic surgical procedures.


Subject(s)
Endoscopy, Gastrointestinal, Endoscopy
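
The structural similarity values quoted above can be reproduced for any pair of frames with a standard SSIM implementation; a minimal sketch using scikit-image, with synthetic placeholder frames:

    import numpy as np
    from skimage.metrics import structural_similarity

    # Two consecutive (grayscale) endoscopy frames as floating-point arrays in [0, 1].
    frame_t = np.random.rand(240, 320).astype(np.float64)
    frame_t1 = np.clip(frame_t + 0.02 * np.random.randn(240, 320), 0.0, 1.0)

    # SSIM between consecutive frames; data_range must match the image value range.
    ssim_value = structural_similarity(frame_t, frame_t1, data_range=1.0)
    print(f"SSIM: {ssim_value:.3f}")
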
9.
Comput Biol Med ; 169: 107766, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38150885

ABSTRACT

Automatic vessel segmentation is a critical area of research in medical image analysis, as it can greatly assist doctors in accurately and efficiently diagnosing vascular diseases. However, accurately extracting the complete vessel structure from images remains a challenge due to issues such as uneven contrast and background noise. Existing methods primarily focus on segmenting individual pixels and often fail to consider vessel features and morphology. As a result, these methods often produce fragmented results and misidentify vessel-like background noise, leading to missing and outlier points in the overall segmentation. To address these issues, this paper proposes a novel approach called the progressive edge information aggregation network for vessel segmentation (PEA-Net). The proposed method consists of several key components. First, a dual-stream receptive field encoder (DRE) is introduced to preserve fine structural features and mitigate false positive predictions caused by background noise. This is achieved by combining vessel morphological features obtained from different receptive field sizes. Second, a progressive complementary fusion (PCF) module is designed to enhance fine vessel detection and improve connectivity. This module complements the decoding path by combining features from previous iterations and the DRE, incorporating nonsalient information. Additionally, segmentation-edge decoupling enhancement (SDE) modules are employed as decoders to integrate upsampling features with nonsalient information provided by the PCF. This integration enhances both edge and segmentation information. The features in the skip connection and decoding path are iteratively updated to progressively aggregate fine structure information, thereby optimizing segmentation results and reducing topological disconnections. Experimental results on multiple datasets demonstrate that the proposed PEA-Net model and strategy achieve optimal performance in both pixel-level and topology-level metrics.


Subject(s)
Benchmarking, Pisum sativum, Image Processing, Computer-Assisted
10.
BMC Med Inform Decis Mak ; 23(1): 247, 2023 Nov 3.
Article in English | MEDLINE | ID: mdl-37924054

ABSTRACT

BACKGROUND: Clinical practice guidelines (CPGs) are designed to assist doctors in clinical decision making. High-quality research articles are important for the development of good CPGs. Commonly used manual screening processes are time-consuming and labor-intensive. Artificial intelligence (AI)-based techniques have been widely used to analyze unstructured data, including texts and images. Currently, there are no effective/efficient AI-based systems for screening literature. Therefore, developing an effective method for automatic literature screening can provide significant advantages. METHODS: Using advanced AI techniques, we propose the Paper title, Abstract, and Journal (PAJO) model, which treats article screening as a classification problem. For training, articles appearing in the current CPGs are treated as positive samples. The others are treated as negative samples. Then, the features of the texts (e.g., titles and abstracts) and journal characteristics are fully utilized by the PAJO model using the pretrained bidirectional-encoder-representations-from-transformers (BERT) model. The resulting text and journal encoders, along with the attention mechanism, are integrated in the PAJO model to complete the task. RESULTS: We collected 89,940 articles from PubMed to construct a dataset related to neck pain. Extensive experiments show that the PAJO model surpasses the state-of-the-art baseline by 1.91% (F1 score) and 2.25% (area under the receiver operating characteristic curve). Its prediction performance was also evaluated with respect to subject-matter experts, proving that PAJO can successfully screen high-quality articles. CONCLUSIONS: The PAJO model provides an effective solution for automatic literature screening. It can screen high-quality articles on neck pain and significantly improve the efficiency of CPG development. The methodology of PAJO can also be easily extended to other diseases for literature screening.


Subject(s)
Deep Learning, Practice Guidelines as Topic, Humans, Artificial Intelligence, Clinical Decision-Making, Neck Pain, Review Literature as Topic
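
As a rough illustration of the text-encoding step described above (a pretrained BERT model over titles and abstracts), a minimal sketch with the Hugging Face transformers library; this is not the PAJO architecture, and the model name and pooling choice are assumptions:

    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    encoder = AutoModel.from_pretrained("bert-base-uncased")

    title = "Deep learning for neck pain guideline development"
    abstract = "We study automatic screening of articles for clinical practice guidelines..."

    # Encode title and abstract jointly; the [CLS] token embedding serves as the text feature.
    inputs = tokenizer(title, abstract, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    text_feature = outputs.last_hidden_state[:, 0, :]  # shape (1, 768)
    print(text_feature.shape)
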
11.
Article in English | MEDLINE | ID: mdl-37747865

ABSTRACT

Microwave ablation (MWA) is a minimally invasive procedure for the treatment of liver tumor. Accumulating clinical evidence has considered the minimal ablative margin (MAM) as a significant predictor of local tumor progression (LTP). In clinical practice, MAM assessment is typically carried out through image registration of pre- and post-MWA images. However, this process faces two main challenges: non-homologous match between tumor and coagulation with inconsistent image appearance, and tissue shrinkage caused by thermal dehydration. These challenges result in low precision when using traditional registration methods for MAM assessment. In this paper, we present a local contractive nonrigid registration method using a biomechanical model (LC-BM) to address these challenges and precisely assess the MAM. The LC-BM contains two consecutive parts: (1) local contractive decomposition (LC-part), which reduces the incorrect match between the tumor and coagulation and quantifies the shrinkage in the external coagulation region, and (2) biomechanical model constraint (BM-part), which compensates for the shrinkage in the internal coagulation region. After quantifying and compensating for tissue shrinkage, the warped tumor is overlaid on the coagulation, and then the MAM is assessed. We evaluated the method using prospectively collected data from 36 patients with 47 liver tumors, comparing LC-BM with 11 state-of-the-art methods. LTP was diagnosed through contrast-enhanced MR follow-up images, serving as the ground truth for tumor recurrence. LC-BM achieved the highest accuracy (97.9%) in predicting LTP, outperforming other methods. Therefore, our proposed method holds significant potential to improve MAM assessment in MWA surgeries.
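
For context on what the minimal ablative margin measures, a minimal sketch of computing the MAM as the smallest distance from registered tumor surface points to the coagulation surface; this is a geometric simplification of the assessment step, and the point arrays are placeholders:

    import numpy as np
    from scipy.spatial import cKDTree

    def minimal_ablative_margin(tumor_surface: np.ndarray, coagulation_surface: np.ndarray) -> float:
        """Minimum distance (mm) from warped tumor surface points to the coagulation surface."""
        tree = cKDTree(coagulation_surface)
        distances, _ = tree.query(tumor_surface)
        return float(distances.min())

    # Example with synthetic surface point clouds (in mm, after registration).
    tumor_pts = np.random.rand(2000, 3) * 20.0
    coag_pts = np.random.rand(4000, 3) * 30.0 - 5.0
    print(f"MAM: {minimal_ablative_margin(tumor_pts, coag_pts):.2f} mm")
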

12.
Phys Med Biol ; 68(17), 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37549676

ABSTRACT

Objective. In computer-assisted minimally invasive surgery, the intraoperative x-ray image is enhanced by overlaying it with a preoperative CT volume to improve visualization of vital anatomical structures. Therefore, accurate and robust 3D/2D registration of the CT volume and the x-ray image is highly desired in clinical practice. However, previous registration methods were prone to initial misalignments and struggled with local minima, leading to low accuracy and poor robustness. Approach. To improve registration performance, we propose a novel CT/x-ray image registration agent (CT2X-IRA) within a task-driven deep reinforcement learning framework, which contains three key strategies: (1) a multi-scale-stride learning mechanism provides multi-scale feature representation and a flexible action step size, establishing fast and globally optimal convergence of the registration task. (2) A domain adaptation module reduces the domain gap between the x-ray image and the digitally reconstructed radiograph projected from the CT volume, decreasing the sensitivity and uncertainty of the similarity measurement. (3) A weighted reward function facilitates CT2X-IRA in searching for the optimal transformation parameters, improving the estimation accuracy of out-of-plane transformation parameters under large initial misalignments. Main results. We evaluate the proposed CT2X-IRA on both public and private clinical datasets, achieving target registration errors of 2.13 mm and 2.33 mm with computation times of 1.5 s and 1.1 s, respectively, showing an accurate and fast workflow for CT/x-ray image rigid registration. Significance. The proposed CT2X-IRA achieves accurate and robust 3D/2D registration of CT and x-ray images, suggesting its potential significance in clinical applications.


Subject(s)
Algorithms, Imaging, Three-Dimensional, X-Rays, Imaging, Three-Dimensional/methods, Tomography, X-Ray Computed/methods, Radiography, Image Processing, Computer-Assisted
13.
IEEE J Biomed Health Inform ; 27(8): 3924-3935, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37027679

ABSTRACT

Automatic segmentation of port-wine stains (PWS) from clinical images is critical for accurate diagnosis and objective assessment of PWS. However, this is a challenging task due to the color heterogeneity, low contrast, and indistinguishable appearance of PWS lesions. To address these challenges, we propose a novel multi-color space adaptive fusion network (M-CSAFN) for PWS segmentation. First, a multi-branch detection model is constructed based on six typical color spaces, which utilizes rich color texture information to highlight the difference between lesions and surrounding tissues. Second, an adaptive fusion strategy is used to fuse complementary predictions, which addresses the significant differences within lesions caused by color heterogeneity. Third, a structural similarity loss with color information is proposed to measure the detail error between predicted and ground-truth lesions. Additionally, a PWS clinical dataset consisting of 1413 image pairs was established for the development and evaluation of PWS segmentation algorithms. To verify the effectiveness and superiority of the proposed method, we compared it with other state-of-the-art methods on our collected dataset and four publicly available skin lesion datasets (ISIC 2016, ISIC 2017, ISIC 2018, and PH2). The experimental results show that our method achieves remarkable performance on our collected dataset, reaching 92.29% and 86.14% on the Dice and Jaccard metrics, respectively. Comparative experiments on the other datasets also confirmed the reliability and potential capability of M-CSAFN in skin lesion segmentation.


Subject(s)
Port-Wine Stain, Skin Diseases, Humans, Port-Wine Stain/pathology, Reproducibility of Results, Algorithms, Dermoscopy/methods, Image Processing, Computer-Assisted
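
A minimal sketch of the kind of multi-color-space decomposition the multi-branch detection model starts from, using OpenCV conversions; the six color spaces used by M-CSAFN are not all listed in the abstract, so the selection below is an assumption:

    import cv2
    import numpy as np

    def to_color_spaces(bgr: np.ndarray) -> dict:
        """Convert a BGR clinical image into several color spaces for multi-branch input."""
        return {
            "rgb": cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB),
            "hsv": cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV),
            "lab": cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB),
            "ycrcb": cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb),
        }

    # Example with a synthetic image; each branch of the network would consume one representation.
    image = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)
    branches = to_color_spaces(image)
    print({k: v.shape for k, v in branches.items()})
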
14.
Comput Biol Med ; 153: 106546, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36641935

ABSTRACT

Accurate detection of coronary artery stenosis in X-ray angiography (XRA) images is crucial for the diagnosis and treatment of coronary artery disease. However, stenosis detection remains a challenging task due to complicated vascular structures, poor imaging quality, and highly variable lesions. Although devoted to accurate stenosis detection, most methods exploit the spatio-temporal information of XRA sequences inefficiently, which limits their performance on the task. To overcome this problem, we propose a new stenosis detection framework based on a Transformer-based module that aggregates proposal-level spatio-temporal features. In this module, a proposal-shifted spatio-temporal tokenization (PSSTT) scheme is devised to gather spatio-temporal region-of-interest (RoI) features and obtain visual tokens within a local window. Then, a Transformer-based feature aggregation (TFA) network takes the tokens as inputs and enhances the RoI features by learning long-range spatio-temporal context for the final stenosis prediction. The effectiveness of our method was validated by conducting qualitative and quantitative experiments on 233 XRA sequences of the coronary artery. Our method achieves a high F1 score of 90.88%, outperforming 15 other state-of-the-art detection methods and demonstrating that it can perform accurate stenosis detection from XRA images thanks to its strong ability to aggregate spatio-temporal features.


Subject(s)
Coronary Artery Disease, Coronary Stenosis, Humans, Coronary Angiography/methods, Constriction, Pathologic, X-Rays, Coronary Stenosis/diagnostic imaging, Coronary Artery Disease/diagnosis
15.
Ultrasonics ; 128: 106862, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36240539

ABSTRACT

The classic N-wire phantom has been widely used for the calibration of freehand ultrasound probes. One of the main challenges with this phantom is accurately identifying N-fiducials in ultrasound images, especially with multiple N-wire structures. In this study, a method using a multilayer N-wire phantom for the automatic spatial calibration of ultrasound images is proposed. All dots in the ultrasound image are segmented, scored, and classified according to the unique geometric features of the multilayer N-wire phantom. A recognition method for identifying N-fiducials from the dots is proposed for calibrating the spatial transformation of the ultrasound probe. At depths of 9, 11, 13, and 15 cm, the reconstruction errors over 50 points are 1.24 ± 0.16, 1.09 ± 0.06, 0.95 ± 0.08, and 1.02 ± 0.05 mm, respectively. The reconstruction mockup test shows that the distance accuracy is 1.11 ± 0.82 mm at a depth of 15 cm.


Subject(s)
Algorithms, Imaging, Three-Dimensional, Calibration, Imaging, Three-Dimensional/methods, Phantoms, Imaging, Ultrasonography/methods
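
The geometric principle behind N-fiducial identification is the similar-triangle relation of the N-wire: the position of the middle fiducial along the diagonal wire is recovered from the ratio of in-image distances between the three collinear dots. A minimal sketch of that interpolation, with placeholder wire endpoints and image coordinates (not code from the paper):

    import numpy as np

    def n_fiducial_phantom_point(img_a, img_m, img_b, phantom_start, phantom_end):
        """Locate the middle N-fiducial in phantom space from its image-plane neighbours.

        img_a, img_m, img_b: 2D image coordinates of the three collinear dots of one N-wire.
        phantom_start/phantom_end: known 3D endpoints of the diagonal wire in phantom space.
        """
        ratio = np.linalg.norm(img_m - img_a) / np.linalg.norm(img_b - img_a)
        return phantom_start + ratio * (phantom_end - phantom_start)

    # Example: the ratio of image distances transfers directly to the diagonal wire.
    a, m, b = np.array([100.0, 80.0]), np.array([130.0, 80.0]), np.array([180.0, 80.0])
    p_start, p_end = np.array([0.0, 10.0, 0.0]), np.array([40.0, 10.0, 0.0])
    print(n_fiducial_phantom_point(a, m, b, p_start, p_end))
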
16.
Med Phys ; 50(1): 226-239, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35997999

ABSTRACT

PURPOSE: Surface-based image-to-patient registration in current surgical navigation is mainly achieved with a 3D scanner, which has several limitations in clinical practice, such as an uncontrollable scanning range, complicated operation, and even a high failure rate. An accurate, robust, and easy-to-perform image-to-patient registration method is urgently required. METHODS: An incremental point cloud registration method was proposed for surface-based image-to-patient registration. The point cloud in image space was extracted from the computed tomography (CT) image, and a template matching method was applied to remove redundant points. The corresponding point cloud in patient space was collected incrementally with an optically tracked pointer, while a nearest point distance (NPD) constraint was applied to ensure the uniformity of the collected points. A coarse-to-fine registration method under coverage ratio (CR) and outliers ratio (OR) constraints was then proposed to obtain the optimal rigid transformation from image to patient space. The proposed method was integrated into the recently developed endoscopic navigation system, and a phantom study and clinical trials were conducted to evaluate its performance. RESULTS: The results of the phantom study revealed that the proposed constraints greatly improved the accuracy and robustness of registration. The comparative experimental results revealed that the proposed registration method significantly outperforms the scanner-based method and achieves accuracy comparable to the fiducial-based method. In the clinical trials, the average registration duration was 1.24 ± 0.43 min, the target registration error (TRE) of 294 marker points (59 patients) was 1.25 ± 0.40 mm, and the lower 97.5% confidence limit of the success rate of positioning marker points exceeded the expected value (97.56% vs. 95.00%), revealing that the accuracy of the proposed method significantly met the clinical requirements (TRE ⩽ 2 mm, p < 0.05). CONCLUSIONS: The proposed method has the combined advantages of high accuracy and convenience, which are absent in the scanner-based and fiducial-based methods. Our findings will help improve the quality of endoscopic sinus and skull base surgery.


Subject(s)
Fiducial Markers, Surgery, Computer-Assisted, Humans, Phantoms, Imaging, Skull Base/diagnostic imaging, Skull Base/surgery, Surgery, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Clinical Trials as Topic
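
A minimal sketch of the nearest point distance (NPD) idea used while collecting surface points: a newly acquired pointer sample is kept only if it lies farther than a threshold from every previously accepted point. The threshold value and data structures below are assumptions, not the paper's implementation:

    import numpy as np
    from scipy.spatial import cKDTree

    def accept_point(new_point: np.ndarray, accepted: list, min_dist: float = 2.0) -> bool:
        """Return True if the new pointer sample is at least `min_dist` mm from all accepted points."""
        if not accepted:
            return True
        tree = cKDTree(np.asarray(accepted))
        nearest, _ = tree.query(new_point)
        return nearest >= min_dist

    # Example: incrementally filter streamed pointer samples for a uniform surface cloud.
    accepted_points = []
    for sample in np.random.rand(200, 3) * 50.0:  # mm, from the optically tracked pointer
        if accept_point(sample, accepted_points):
            accepted_points.append(sample)
    print(f"kept {len(accepted_points)} of 200 samples")
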
17.
Article in English | MEDLINE | ID: mdl-35984790

ABSTRACT

Automatic liver tumor segmentation plays a key role in radiation therapy of hepatocellular carcinoma. In this paper, we propose a novel densely connected U-Net model with criss-cross attention (CC-DenseUNet) to segment liver tumors in computed tomography (CT) images. The dense interconnections in CC-DenseUNet ensure maximum information flow between encoder layers when extracting intra-slice features of liver tumors. Moreover, criss-cross attention is used in CC-DenseUNet to efficiently capture only the necessary and meaningful non-local contextual information of CT images containing liver tumors. We evaluated the proposed CC-DenseUNet on the LiTS and 3DIRCADb datasets. Experimental results show that the proposed method achieves state-of-the-art performance for liver tumor segmentation. We further demonstrate the robustness of the proposed method experimentally on a clinical dataset comprising 20 CT volumes.

18.
Comput Biol Med ; 148: 105826, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35810696

ABSTRACT

BACKGROUND: Marker-based augmented reality (AR) calibration methods for surgical navigation often require a second computed tomography scan of the patient, and their clinical application is limited by high manufacturing costs and low accuracy. METHODS: This work introduces a novel AR calibration framework that combines a Microsoft HoloLens device with a single-camera registration module for surgical navigation. In this framework, a camera is used to gather multi-view images of a patient for reconstruction. A shape-feature-matching-based search method is proposed to adjust the size of the reconstructed model. A double-clustering-based 3D point cloud segmentation method and a 3D line segment detection method are also proposed to extract the corner points of the image marker, which serve as its registration data. A feature triangulation iteration-based registration method is proposed to quickly and accurately calibrate the pose relationship between the image marker and the patient in virtual and real space. The registered patient model is wirelessly transmitted to the HoloLens device to display the AR scene. RESULTS: The proposed approach was used to conduct accuracy verification experiments on phantoms and volunteers, and was compared with six advanced AR calibration methods. The proposed method obtained average fusion errors of 0.70 ± 0.16 mm and 0.91 ± 0.13 mm in the phantom and volunteer experiments, respectively, the highest fusion accuracy among all comparison methods. A volunteer liver puncture clinical simulation experiment was also conducted to show clinical feasibility. CONCLUSIONS: Our experiments proved the effectiveness of the proposed AR calibration method and revealed considerable potential for improving surgical performance.


Subject(s)
Augmented Reality, Surgery, Computer-Assisted, Calibration, Humans, Imaging, Three-Dimensional, Phantoms, Imaging
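
The pose between the image marker corners and the patient model is ultimately a rigid transform; a minimal sketch of the standard SVD-based (Kabsch/Umeyama-style) estimation from corresponding 3D points, shown as a generic illustration rather than the paper's feature-triangulation-iteration method:

    import numpy as np

    def rigid_transform(source: np.ndarray, target: np.ndarray):
        """Least-squares rotation R and translation t mapping source (N, 3) onto target (N, 3)."""
        src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
        H = (source - src_c).T @ (target - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        return R, t

    # Example: recover a known rotation/translation from marker corner correspondences.
    corners = np.random.rand(8, 3) * 40.0
    R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    observed = corners @ R_true.T + np.array([5.0, -3.0, 12.0])
    R_est, t_est = rigid_transform(corners, observed)
    print(np.allclose(R_est, R_true, atol=1e-6))
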
19.
J Cancer Res Ther ; 18(2): 509-515, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35645122

ABSTRACT

Objectives: To ascertain the clinical outcomes of patients aged ≥65 years with clinical stage T1 (cT1) renal cell carcinoma (RCC) treated with ultrasound-guided percutaneous microwave ablation (MWA), compared with those aged <65 years. Materials and methods: From September 2009 to December 2016, clinical data of two groups, Group O (≥65 years) consisting of 75 patients (76 RCCs) and Group Y (<65 years) consisting of 91 patients (99 RCCs), who underwent MWA treatment for RCCs with comparable mean diameters at baseline, were retrospectively evaluated. Technique effectiveness, cumulative overall survival (OS) and disease-free survival (DFS), local tumor progression (LTP), major and minor complications, and renal function, including serum creatinine (Cr) and blood urea nitrogen (BUN), were statistically compared between the two groups using SPSS. Results: After ablation, there were no significant differences between the two groups in technical efficacy, LTP, or major and minor complications. The cumulative OS and DFS rates at 1, 3, and 5 years in Group O versus Group Y were 100%, 92.6%, and 92.6% versus 98.6%, 96.9%, and 90.9% (P = 0.701), and 100%, 92.5%, and 92.5% versus 98.6%, 96.9%, and 90.4% (P = 0.697), respectively. There was no significant difference in serum Cr or BUN between the two groups before MWA or at the last follow-up. Conclusion: Given the comparable clinical outcomes for the treatment of cT1 RCCs in patients aged <65 years and ≥65 years, US-guided MWA is a safe and effective method and may be suggested as one of the first-line nonsurgical options for suitably selected older patients.


Subject(s)
Carcinoma, Renal Cell, Kidney Neoplasms, Radiofrequency Ablation, Carcinoma, Renal Cell/surgery, Humans, Kidney Neoplasms/surgery, Microwaves/therapeutic use, Retrospective Studies
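
The cumulative OS and DFS rates above come from survival analysis; a minimal sketch of a Kaplan-Meier estimate with the lifelines package (the original study used SPSS, and the durations and event indicators below are synthetic placeholders, not study data):

    import numpy as np
    from lifelines import KaplanMeierFitter

    # Synthetic follow-up data: time to event in months and event indicator (1 = event, 0 = censored).
    durations = np.array([12, 36, 60, 24, 60, 48, 60, 18, 60, 60])
    events = np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 0])

    kmf = KaplanMeierFitter()
    kmf.fit(durations, event_observed=events, label="Group O (>= 65 years)")

    # Survival probability at 1, 3, and 5 years.
    print(kmf.survival_function_at_times([12, 36, 60]))
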
20.
Biomed Opt Express ; 13(4): 2247-2265, 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35519251

ABSTRACT

Feature matching is an important technology for obtaining the surface morphology of soft tissues in intraoperative endoscopy images. Extracting features from clinical endoscopy images is a difficult problem, especially for texture-less images, and the reduction of surface details makes the problem more challenging. We proposed an adaptive gradient-preserving method to improve the visual features of texture-less images. For feature matching, we first constructed a spatial motion field using superpixel blocks and estimated its information entropy with a motion consistency algorithm to obtain an initial screening of outlier features. Second, we extended the superpixel spatial motion field to a vector field and constrained it with vector features to optimize the confidence of the initial matching set. Evaluations were performed on public and undisclosed datasets. Compared with the original images, our method increased the number of extracted feature points by an order of magnitude for all three feature point extraction methods. On the public dataset, the accuracy and F1-score increased to 92.6% and 91.5%, and the matching score improved by 1.92%. On the undisclosed dataset, the reconstructed surface integrity of the proposed method improved from 30% to 85%. Furthermore, we also present surface reconstruction results for images of different sizes to validate the robustness of our method, which shows high-quality feature matching results. Overall, the experimental results prove the effectiveness of the proposed matching method and demonstrate its capability to extract sufficient visual feature points and generate reliable feature matches for 3D reconstruction and meaningful clinical applications.
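
A minimal sketch of the information-entropy measure that the spatial motion field screening relies on, computed here from a histogram of motion-vector orientations within one superpixel block; the binning and variable names are assumptions, not the paper's exact formulation:

    import numpy as np

    def motion_entropy(vectors: np.ndarray, n_bins: int = 16) -> float:
        """Shannon entropy of motion-vector orientations inside one superpixel block."""
        angles = np.arctan2(vectors[:, 1], vectors[:, 0])
        hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    # Consistent motion (low entropy) vs. scattered outlier motion (high entropy).
    coherent = np.tile([1.0, 0.2], (100, 1)) + 0.01 * np.random.randn(100, 2)
    scattered = np.random.randn(100, 2)
    print(motion_entropy(coherent), motion_entropy(scattered))
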
