Results 1 - 20 of 27
1.
Med Phys ; 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39137294

ABSTRACT

BACKGROUND: The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. However, the critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas, such as the head-and-neck. Misalignments in these images can result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of the treatment planning. PURPOSE: This study introduces a novel network that cohesively unifies image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images. METHODS: The approach synergizes a generation network (G) with a deformable registration network (R), optimizing them jointly in MR-to-CT synthesis. This goal is achieved by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). We validated this method on a dataset comprising 60 head-and-neck patients, reserving 12 cases for holdout testing. RESULTS: Compared to the baseline Pix2Pix method with an MAE of 124.95 ± 30.74 HU, the proposed technique achieved an MAE of 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. Additionally, from a dosimetric perspective, the plan recalculated on the resulting sCTs showed a markedly reduced discrepancy from the reference proton plans.
CONCLUSIONS: This study conclusively demonstrates that a holistic MR-based CT synthesis approach, integrating both image-to-image translation and deformable registration, significantly improves the precision and quality of sCT generation, particularly for the challenging body area with varied anatomic changes between corresponding MR and CT.
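The alternating scheme described in METHODS can be illustrated with a deliberately tiny sketch: a scalar "generator" gain stands in for the UNet G, a scalar shift stands in for the INR-based registration R, and the two are minimized in turn against the same discrepancy. The 1-D setup and all names are hypothetical simplifications, not the paper's implementation.

```python
def train_alternating(mr, ct, cycles=100):
    # a: toy "generator" gain (stand-in for G); b: toy "registration" shift
    # (stand-in for R). Both minimize sum((a * mr - (ct + b))**2) in turn,
    # i.e. exact block coordinate descent on a shared discrepancy.
    a, b = 0.0, 0.0
    for _ in range(cycles):
        # generation step: best a for the current alignment (closed form)
        a = sum(m * (c + b) for m, c in zip(mr, ct)) / sum(m * m for m in mr)
        # registration step: best shift b for the current synthesis
        b = sum(a * m - c for m, c in zip(mr, ct)) / len(mr)
    return a, b
```

On data generated as ct = 2*mr - 0.5, the alternation recovers the generator gain 2 and the alignment shift 0.5, mimicking how G and R converge jointly.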

2.
Imaging Neurosci (Camb) ; 2: 1-33, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39015335

ABSTRACT

Affine image registration is a cornerstone of medical-image analysis. While classical algorithms can achieve excellent accuracy, they solve a time-consuming optimization for every image pair. Deep-learning (DL) methods learn a function that maps an image pair to an output transform. Evaluating the function is fast, but capturing large transforms can be challenging, and networks tend to struggle if a test-image characteristic shifts from the training domain, such as the resolution. Most affine methods are agnostic to the anatomy the user wishes to align, meaning the registration will be inaccurate if algorithms consider all structures in the image. We address these shortcomings with SynthMorph, a fast, symmetric, diffeomorphic, and easy-to-use DL tool for joint affine-deformable registration of any brain image without preprocessing. First, we leverage a strategy that trains networks with widely varying images synthesized from label maps, yielding robust performance across acquisition specifics unseen at training. Second, we optimize the spatial overlap of select anatomical labels. This enables networks to distinguish anatomy of interest from irrelevant structures, removing the need for preprocessing that excludes content which would impinge on anatomy-specific registration. Third, we combine the affine model with a deformable hypernetwork that lets users choose the optimal deformation-field regularity for their specific data, at registration time, in a fraction of the time required by classical methods. This framework is applicable to learning anatomy-aware, acquisition-agnostic registration of any anatomy with any architecture, as long as label maps are available for training. We analyze how competing architectures learn affine transforms and compare state-of-the-art registration tools across an extremely diverse set of neuroimaging data, aiming to truly capture the behavior of methods in the real world. 
SynthMorph demonstrates high accuracy and is available at https://w3id.org/synthmorph, as a single complete end-to-end solution for registration of brain magnetic resonance imaging (MRI) data.
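The first ingredient, training on images synthesized from label maps, can be sketched minimally: each label receives a random intensity, so two images synthesized from one label map share geometry but not contrast, forcing acquisition-agnostic behavior. This is a hypothetical toy reduction of the strategy, not SynthMorph's generative model.

```python
import random

def synthesize_from_labels(label_map, rng):
    # Draw a random mean intensity per label, then add noise, so paired
    # images synthesized from the same label map agree on anatomy (geometry)
    # while differing arbitrarily in contrast.
    means = {lab: rng.uniform(0.0, 1.0) for lab in sorted(set(label_map))}
    return [means[lab] + rng.gauss(0.0, 0.02) for lab in label_map]
```

A registration network trained on many such pairs cannot rely on intensity statistics and must learn spatial correspondence instead.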

3.
Quant Imaging Med Surg ; 14(7): 4779-4791, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39022247

ABSTRACT

Background: In clinical practice, the evaluation of brain tumor recurrence after surgery is based on comparing tumor regions on pre-operative and follow-up magnetic resonance imaging (MRI) scans. Accurate alignment of the MRI scans is important in this evaluation process. However, existing methods often fail to yield accurate alignment due to substantial appearance and shape changes of tumor regions. This study aimed to reduce such misalignment by exploiting multimodal information and compensating for shape changes. Methods: In this work, a deep learning-based deformable registration method using a bilateral pyramid to create multi-scale image features was developed. Moreover, morphology operations were employed to build correspondence between the surgical resection on the follow-up and pre-operative MRI scans. Results: Compared with baseline methods, the proposed method achieved the lowest mean absolute error of 1.82 mm on the public BraTS-Reg 2022 dataset. Conclusions: The results suggest that the proposed method is potentially useful for evaluating tumor recurrence after surgery. We verified its ability to extract and integrate information from the second modality, and also revealed fine-grained representations of tumor recurrence. This method can assist doctors in registering multi-sequence patient images, observing lesions and their surrounding areas, and informing treatment planning.

4.
Biomed Phys Eng Express ; 10(5)2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39084234

ABSTRACT

Objective. Existing registration networks based on cross-attention designs usually divide the image pairs to be registered into patches for input. The division and merging of a series of patches make it difficult to maintain the topology of the deformation field and reduce the interpretability of the network. Our goal is therefore to develop a new network architecture based on a cross-attention mechanism combined with a multi-resolution strategy to improve the accuracy and interpretability of medical image registration. Approach. We propose NCNet, a new deformable image registration network based on neighborhood cross-attention combined with a multi-resolution strategy. The network mainly consists of a multi-resolution feature encoder, a multi-head neighborhood cross-attention module, and a registration decoder. The hierarchical feature extraction capability of the encoder is improved by introducing large-kernel parallel convolution blocks; the cross-attention module based on neighborhood calculation is used to reduce the impact on the topology of the deformation field, and double normalization is used to reduce its computational complexity. Main results. We performed atlas-based registration and inter-subject registration tasks on the public 3D brain magnetic resonance imaging datasets LPBA40 and IXI, respectively. Compared with the popular VoxelMorph method, our method improves the average DSC value by 7.9% and 3.6% on LPBA40 and IXI; compared with the popular TransMorph method, it improves the average DSC value by 4.9% and 1.3%. Significance. We demonstrated the advantages of neighborhood attention over window attention based on partitioned patches, and analyzed the impact of the pyramid feature encoder and double normalization on network performance. This makes a valuable contribution to the further development of medical image registration methods.
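The neighborhood restriction that distinguishes this design from patch-partitioned window attention can be sketched in 1-D: each query position attends only to key/value positions within a small radius, rather than over the whole sequence. This is a hypothetical single-head toy, not the paper's 3-D multi-head module.

```python
import math

def neighborhood_cross_attention(query, key, value, radius=1):
    # For each query position i, softmax-attend over key/value positions
    # within `radius` of i only; no patch partitioning is involved, so the
    # attended neighborhood slides smoothly with i.
    out = []
    for i, q in enumerate(query):
        lo, hi = max(0, i - radius), min(len(key), i + radius + 1)
        scores = [q * key[j] for j in range(lo, hi)]
        peak = max(scores)                      # stabilize the softmax
        weights = [math.exp(s - peak) for s in scores]
        norm = sum(weights)
        out.append(sum(w * value[j] for w, j in zip(weights, range(lo, hi))) / norm)
    return out
```

With uniform scores, each output is simply the local average of `value`, which makes the sliding-neighborhood behavior easy to verify.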


Subject(s)
Algorithms; Brain; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Databases, Factual
5.
Comput Med Imaging Graph ; 116: 102418, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39079410

ABSTRACT

Shape registration of patient-specific organ shapes to endoscopic camera images is expected to be key to realizing image-guided surgery, and a variety of machine learning methods have been considered for it. Because the number of training data available from clinical cases is limited, the use of synthetic images generated from a statistical deformation model has been attempted; however, the gap between synthetic images and real scenes degrades the estimation. In this study, we propose a self-supervised offline learning framework for model-based registration using image features that can be obtained from both synthetic images and real camera images. Because of the limited number of endoscopic images available for training, we use synthetic images generated from a nonlinear deformation model that represents possible intraoperative pneumothorax deformations. To address the difficulty of estimating deformed shapes and viewpoints from the features common to synthetic and real images, we reduce the registration error by adding shading and distance information that is available as prior knowledge in the synthetic images. Shape registration with real camera images is performed by learning the task of predicting the differential model parameters between two synthetic images. The developed framework achieved a registration accuracy with a mean absolute error of less than 10 mm and a mean distance of less than 5 mm in a thoracoscopic pulmonary cancer resection, confirming improved prediction accuracy compared with conventional methods.

6.
Neural Netw ; 178: 106426, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38878640

ABSTRACT

Multi-phase dynamic contrast-enhanced magnetic resonance imaging registration makes a substantial contribution to medical image analysis. However, existing methods (e.g., VoxelMorph, CycleMorph) often encounter the problem of image information misalignment in deformable registration tasks, posing challenges to practical application. To address this issue, we propose a novel smooth image sampling method that preserves complete organ information and realizes detail-preserving image warping. In this paper, we show that the image information mismatch is attributable to imbalanced sampling. A sampling frequency map, constructed by sampling frequency estimators, is then used to guide smooth sampling by reducing the spatial gradient and the discrepancy between an all-ones matrix and the sampling frequency map. Our estimator determines the sampling frequency of a grid voxel in the moving image by aggregating the interpolation weights of warped non-grid sampling points in its vicinity, and constructs the sampling frequency map through projection and scattering. We evaluate the effectiveness of our approach through experiments on two in-house datasets. The results show that our method preserves nearly complete details with ideal registration accuracy compared with several state-of-the-art registration methods. Additionally, our method exhibits a statistically significant difference in the regularity of the registration field compared to other methods, at a significance level of p < 0.05. Our code will be released at https://github.com/QingRui-Sha/SFM.
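The estimator's core accounting step can be sketched in 1-D: each warped (non-grid) sample point scatters its linear-interpolation weights onto the two neighboring grid voxels, and the accumulated totals form the sampling frequency map; a map close to all-ones indicates balanced sampling. This is a hypothetical reduction of the paper's 3-D estimator.

```python
def sampling_frequency_map(sample_positions, size):
    # Scatter each warped sample's linear-interpolation weights onto its two
    # neighbouring grid voxels; voxels far from 1.0 in the result indicate
    # over- or under-sampling (the imbalance the method penalizes).
    freq = [0.0] * size
    for x in sample_positions:
        i = int(x)
        if 0 <= i < size - 1:
            w = x - i
            freq[i] += 1.0 - w
            freq[i + 1] += w
    return freq
```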


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging
7.
Phys Med Biol ; 69(11)2024 May 23.
Article in English | MEDLINE | ID: mdl-38697195

ABSTRACT

Objective. Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is captured by only one or a few x-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g., breathing). Approach. We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired x-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from single x-ray projections. Specifically, PMF-STINR uses a spatial implicit neural representation (INR) to reconstruct a reference CBCT volume, and it applies a temporal INR to represent the intra-scan dynamic motion of the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. In contrast to previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results. PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (∼0.1 s) resolution and sub-millimeter accuracy. Significance.
PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.
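The B-spline side of the motion model can be sketched in 1-D: a displacement trajectory is interpolated from coefficients on a uniform time grid by blending four neighboring control values with the cubic B-spline basis. This is a hypothetical scalar reduction; in PMF-STINR the coefficients are learned and drive a 3-D deformation.

```python
def bspline_displacement(t, control):
    # Cubic B-spline interpolation on a uniform knot grid: the displacement
    # at time t blends control[i..i+3], where i = floor(t). The four basis
    # functions sum to 1 (partition of unity), so constant coefficients
    # reproduce a constant displacement.
    i = int(t)
    u = t - i
    basis = [(1 - u) ** 3 / 6.0,
             (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
             (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
             u ** 3 / 6.0]
    return sum(b * control[i + k] for k, b in enumerate(basis))
```

`control` must extend far enough (index i + 3) for every queried t; in practice the coefficient grid is padded at both ends.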


Subject(s)
Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Cone-Beam Computed Tomography/methods; Humans; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Machine Learning
8.
Med Phys ; 2024 May 21.
Article in English | MEDLINE | ID: mdl-38772037

ABSTRACT

BACKGROUND: Deformable registration is required to generate a time-integrated activity (TIA) map, which is essential for voxel-based dosimetry. Conventional iterative registration algorithms driven by anatomical images (e.g., computed tomography (CT)) can introduce registration errors into functional images (e.g., single photon emission computed tomography (SPECT) or positron emission tomography (PET)). Various deep learning-based registration tools have been proposed, but no studies specifically focused on the registration of serial hybrid images were found. PURPOSE: In this study, we introduce CoRX-NET, a novel unsupervised deep learning network designed for deformable registration of hybrid medical images. The CoRX-NET structure is based on the Swin transformer (ST), allowing for the representation of complex spatial connections in images. Its self-attention mechanism aids the effective exchange and integration of information across diverse image regions, and cross-stitch layers are integrated into the network to augment the fusion of SPECT and CT features. METHODS: Two 177Lu-DOTATATE SPECT/CT datasets were acquired at different medical centers: 22 sets from Seoul National University, used for training/internal validation, and 14 sets from Sunway Medical Centre, used for external validation. The network takes a pair of SPECT/CT images (i.e., fixed and moving images) and generates a deformed SPECT/CT image. Its performance was compared with Elastix and TransMorph using the L1 loss and structural similarity index measure (SSIM) of CT, the SSIM of normalized SPECT, and the local normalized cross-correlation (LNCC) of SPECT as metrics. The voxel-wise root mean square errors (RMSE) of TIA were compared among the different methods. RESULTS: An ablation study revealed that the cross-stitch layers improved SPECT/CT registration performance, notably enhancing the SSIM (internal validation: 0.9614 vs. 0.9653, external validation: 0.9159 vs. 0.9189) and the LNCC of normalized SPECT images (internal validation: 0.7512 vs. 0.7670, external validation: 0.8027 vs. 0.8027). CoRX-NET with the cross-stitch layers achieved superior performance metrics compared to Elastix and TransMorph, except for CT SSIM on the external dataset. In qualitative analyses of both internal and external validation cases, CoRX-NET consistently demonstrated superior SPECT registration results. In addition, CoRX-NET accomplished SPECT/CT image registration in less than 6 s, whereas Elastix required approximately 50 s on the same PC's CPU. With CoRX-NET, the voxel-wise RMSE values for TIA were approximately 27% lower for the kidney and 33% lower for the tumor than with Elastix. CONCLUSION: This study represents a major advancement toward precise SPECT/CT registration using an unsupervised deep learning network. It outperforms conventional methods like Elastix and TransMorph, reducing uncertainties in TIA maps for more accurate dose assessments.
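The cross-stitch idea, which the ablation shows to matter here, can be sketched in its classic per-channel form: each output is a learned linear mix of the SPECT-branch and CT-branch features. This is a hypothetical 1-D toy; in CoRX-NET the mixing acts on transformer feature maps.

```python
def cross_stitch(feat_spect, feat_ct, alpha):
    # Cross-stitch unit: alpha = [[a_ss, a_sc], [a_cs, a_cc]] mixes the two
    # modality branches linearly; an identity alpha keeps them independent,
    # while off-diagonal weights share information between them.
    out_spect = [alpha[0][0] * s + alpha[0][1] * c
                 for s, c in zip(feat_spect, feat_ct)]
    out_ct = [alpha[1][0] * s + alpha[1][1] * c
              for s, c in zip(feat_spect, feat_ct)]
    return out_spect, out_ct
```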

9.
Comput Med Imaging Graph ; 115: 102397, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38735104

ABSTRACT

We address the problem of lung CT image registration, which underpins various diagnoses and treatments for lung diseases. The main crux of the problem is the large deformation that the lungs undergo during respiration. This physiological process imposes several challenges from a learning point of view. In this paper, we propose a novel training scheme, called stochastic decomposition, which enables deep networks to effectively learn such a difficult deformation field during lung CT image registration. The key idea is to stochastically decompose the deformation field and supervise the registration with synthetic data that exhibit the corresponding appearance discrepancy. The stochastic decomposition allows for revealing all possible decompositions of the deformation field. At the learning level, these decompositions can be seen as a prior that reduces the ill-posedness of the registration, thereby boosting performance. We demonstrate the effectiveness of our framework on lung CT data and show, through extensive numerical and visual results, that our technique outperforms existing methods.
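The decomposition step can be sketched with an additive split: a displacement field is divided into two components at a random ratio, yielding intermediate supervision targets for the easier sub-deformations. This is a hypothetical simplification; the general scheme composes deformations rather than adding displacements, and the ratio here is merely illustrative.

```python
import random

def stochastic_decompose(displacement, rng):
    # Split a displacement field into two additive parts at a random ratio;
    # sampling the ratio anew each time exposes the network to many
    # decompositions of the same overall deformation.
    alpha = rng.random()
    part_a = [alpha * d for d in displacement]
    part_b = [(1.0 - alpha) * d for d in displacement]
    return part_a, part_b
```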


Subject(s)
Stochastic Processes; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Humans; Radiographic Image Interpretation, Computer-Assisted/methods; Lung/diagnostic imaging; Algorithms; Lung Diseases/diagnostic imaging; Lung Diseases/physiopathology
10.
Biomed Eng Lett ; 14(3): 497-509, 2024 May.
Article in English | MEDLINE | ID: mdl-38645595

ABSTRACT

In recent years, deep learning has driven significant development in medical image registration, and non-rigid registration methods that use deep neural networks to generate a deformation field achieve higher accuracy. However, unlike monomodal medical image registration, multimodal medical image registration is a more complex and challenging task. This paper proposes a new linear-to-nonlinear framework (L2NLF) for multimodal medical image registration. The first, linear stage is essentially image conversion, which reduces the difference between two images without changing the authenticity of the medical images, thus transforming multimodal registration into monomodal registration. The second, nonlinear stage is essentially unsupervised deformable registration based on a deep neural network. We design a new registration network, CrossMorph, a deep neural network with a U-Net-like structure. As the backbone of the encoder, the volume CrossFormer block better extracts local and global information, and the booster module promotes the reduction of deep and shallow features. Qualitative and quantitative experimental results on T1 and T2 data from the brains of 240 patients show that L2NLF achieves an excellent registration effect in the image conversion part with very low computation, without changing the authenticity of the converted image. Compared with the current state-of-the-art registration method, CrossMorph effectively reduces the average surface distance, improves the Dice score, and improves the smoothness of the deformation field. The proposed methods have potential value in clinical application.

11.
Med Phys ; 51(7): 4811-4826, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38353628

ABSTRACT

BACKGROUND: Image registration is a challenging problem in many clinical tasks, but deep learning has made significant progress in this area over the past few years. Real-time and robust registration has been made possible by supervised transformation estimation. However, the quality of registrations using this framework depends on the quality of ground-truth labels such as the displacement field. PURPOSE: To propose a simple and reliable method for registering medical images based on image structure similarity in a completely unsupervised manner. METHODS: We proposed a deep cascade unsupervised deformable registration approach to align images without reliable clinical data labels. Our basic network was composed of a displacement estimation module (ResUnet) and a deformation module (spatial transformer layers). We adopted l2-norm regularization of the deformation field instead of the traditional l1-norm regularization. Additionally, we utilized structural similarity (SSIM) estimation during the training stage to enhance the structural consistency between the deformed images and the reference images. RESULTS: Experimental results indicated that by incorporating the SSIM loss, our cascaded method not only achieved a higher Dice score of 0.9873, SSIM score of 0.9559, and normalized cross-correlation (NCC) score of 0.9950, and a lower relative sum of squared differences (SSD) error of 0.0313 on CT images, but also outperformed the comparative methods on an ultrasound dataset. Statistical t-test results also showed that these improvements are statistically significant. CONCLUSIONS: In this study, the promising results across diverse evaluation metrics demonstrate that our model is simple and effective in deformable image registration (DIR). The generalization ability of the model was also verified through experiments on liver CT images and cardiac ultrasound images.
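The training objective described above, an SSIM data term plus l2 regularization of the deformation, can be sketched in 1-D. The single-window SSIM and the gradient penalty below are hypothetical simplifications of the windowed SSIM and 3-D regularizer a real implementation would use.

```python
def ssim(x, y, c1=1e-4, c2=9e-4):
    # Global (single-window) structural similarity between two signals.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((a - my) ** 2 for a in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def registration_loss(warped, fixed, displacement, lam=0.01):
    # (1 - SSIM) data term plus an l2 penalty on the displacement gradient:
    # a perfectly aligned pair with a smooth (here: zero) field scores 0.
    smooth = sum((displacement[i + 1] - displacement[i]) ** 2
                 for i in range(len(displacement) - 1))
    return (1.0 - ssim(warped, fixed)) + lam * smooth
```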


Subject(s)
Image Processing, Computer-Assisted; Unsupervised Machine Learning; Image Processing, Computer-Assisted/methods; Humans; Deep Learning; Tomography, X-Ray Computed
12.
Phys Eng Sci Med ; 47(2): 589-596, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38372942

ABSTRACT

This study investigated the impact of sagging-correction calibration errors in radiotherapy software on image matching. Three software applications were used, with and without a polymethyl methacrylate rod supporting the ball bearings (BBs). The sagging-correction calibration error across nine flex maps (FMs) was determined by shifting the BB positions along the left-right (LR), gun-target (GT), and up-down (UD) directions from the reference point. Lucy and pelvic phantom cone-beam computed tomography (CBCT) images underwent auto-matching after modification of each FM. Image deformation was assessed in orthogonal CBCT planes, and the correlations among BB shift magnitude, deformation vector value, and differences in auto-matching were analyzed. The average difference in the Winston-Lutz test results among the three software applications was within 0.1 mm. The coefficients of determination (R²) between the BB shift amount and the Lucy phantom matching error in each FM were 0.99, 0.99, and 1.00 in the LR, GT, and UD directions, respectively. The pelvis phantom demonstrated no cross-correlation in the GT direction during auto-matching error evaluation using each FM. The correlation coefficient (r) between the BB shift and the deformation vector value averaged 0.95 across all image planes. Slight differences were observed among the software applications in the Winston-Lutz test evaluation. The sagging-correction calibration error in the radiotherapy imaging system caused auto-matching errors of the phantom and deformation of the CBCT images.
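The R² values quoted above come from ordinary least-squares line fits of matching error against BB shift. A minimal sketch of that statistic (standard formula, not the study's software):

```python
def r_squared(x, y):
    # Coefficient of determination (R^2) of a least-squares line y ≈ a*x + b:
    # 1 minus the ratio of residual to total sum of squares.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1.0 - ss_res / ss_tot
```

A perfectly linear shift-error relationship gives R² = 1, matching the ≈1.00 values reported for the Lucy phantom.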


Subject(s)
Cone-Beam Computed Tomography; Image Processing, Computer-Assisted; Phantoms, Imaging; Software; Calibration; Humans; Pelvis/diagnostic imaging
13.
Med Biol Eng Comput ; 62(6): 1795-1808, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38381202

ABSTRACT

Image registration is a primary task in various medical image analysis applications. However, cardiac image registration is difficult due to the large non-rigid deformation of the heart and its complex anatomical structure. This paper proposes a structure-aware, independently trained multi-scale registration network (SIMReg) to address this challenge. Using image pairs of different resolutions, each registration network is trained independently to extract features of large-deformation image pairs at its resolution. In the testing stage, the large-deformation registration is decomposed into a multi-scale registration process, and the deformation fields of different resolutions are fused by a step-by-step deformation method, thus avoiding the difficulty of directly processing large deformations. Meanwhile, the targeted introduction of MIND (modality independent neighborhood descriptor) structural features to guide network training enhances the registration of cardiac structural contours and improves the registration of local details. Experiments were carried out on the open cardiac dataset ACDC (automated cardiac diagnosis challenge), and the proposed method achieved an average Dice value of 0.833. Comparative experiments showed that SIMReg better solves the problem of cardiac image registration and achieves a better registration effect on cardiac images.
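The step-by-step fusion at test time can be sketched in 1-D: each coarser displacement field is upsampled (values scaled with the resolution) and the next finer field is added on top. The nearest-neighbor upsampling and additive composition are hypothetical simplifications of the fusion actually used.

```python
def fuse_multiscale(fields):
    # `fields` is ordered coarse -> fine, each twice the length of the last.
    # Upsampling doubles both the grid density and the displacement values
    # (a coarse-voxel displacement spans two fine voxels), then the finer
    # field adds its residual correction.
    def upsample(d):
        return [v * 2.0 for v in d for _ in range(2)]
    fused = fields[0]
    for finer in fields[1:]:
        fused = [u + f for u, f in zip(upsample(fused), finer)]
    return fused
```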


Subject(s)
Heart; Image Processing, Computer-Assisted; Humans; Heart/diagnostic imaging; Heart/anatomy & histology; Image Processing, Computer-Assisted/methods; Algorithms; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods; Databases, Factual
14.
Med Image Anal ; 91: 103035, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37992496

ABSTRACT

We introduce CartiMorph, a framework for automated knee articular cartilage morphometrics. It takes an image as input and generates quantitative metrics for cartilage subregions, including the percentage of full-thickness cartilage loss (FCL), mean thickness, surface area, and volume. CartiMorph leverages the power of deep learning models for hierarchical image feature representation. Deep learning models were trained and validated for tissue segmentation, template construction, and template-to-image registration. We established methods for surface-normal-based cartilage thickness mapping, FCL estimation, and rule-based cartilage parcellation. Our cartilage thickness map showed less error in thin and peripheral regions. We evaluated the effectiveness of the adopted segmentation model by comparing the quantitative metrics obtained from model segmentation and those from manual segmentation. The root-mean-squared deviation of the FCL measurements was less than 8%, and strong correlations were observed for the mean thickness (Pearson's correlation coefficient ρ∈[0.82,0.97]), surface area (ρ∈[0.82,0.98]) and volume (ρ∈[0.89,0.98]) measurements. We compared our FCL measurements with those from a previous study and found that our measurements deviated less from the ground truths. We observed superior performance of the proposed rule-based cartilage parcellation method compared with the atlas-based approach. CartiMorph has the potential to promote imaging biomarkers discovery for knee osteoarthritis.
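Among the reported metrics, the FCL definition lends itself to a one-line sketch: the percentage of the template-defined cartilage region whose measured thickness is zero. This is a hypothetical reduction of CartiMorph's surface-based computation.

```python
def full_thickness_loss_pct(thickness_map, eps=1e-6):
    # FCL as a percentage: the fraction of cartilage-region samples whose
    # thickness is (numerically) zero; eps guards against float noise.
    lost = sum(1 for t in thickness_map if t <= eps)
    return 100.0 * lost / len(thickness_map)
```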


Subject(s)
Cartilage, Articular; Osteoarthritis, Knee; Humans; Cartilage, Articular/diagnostic imaging; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Knee Joint/diagnostic imaging; Osteoarthritis, Knee/diagnostic imaging
15.
Med Image Anal ; 91: 103038, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38000258

ABSTRACT

Deformable image registration, the estimation of the spatial transformation between different images, is an important task in medical imaging. Deep learning techniques have been shown to perform 3D image registration efficiently. However, current registration strategies often focus only on deformation smoothness, which leads to complicated motion patterns (e.g., separate or sliding motions) being ignored, especially at the intersections of organs. Thus, performance when dealing with the discontinuous motions of multiple nearby objects is limited, causing undesired predictive outcomes in clinical usage, such as misidentification and mislocalization of lesions or other abnormalities. Consequently, we propose a novel registration method to address this issue: a new Motion Separable backbone is exploited to capture separate motions, with a theoretical analysis of the upper bound of the motions' discontinuity provided. In addition, a novel Residual Aligner module is used to disentangle and refine the predicted motions across multiple neighboring objects/organs. We evaluate our method, the Residual Aligner-based Network (RAN), on abdominal computed tomography (CT) scans, where it achieves some of the most accurate unsupervised inter-subject registrations for nine organs, with the highest-ranked registration of the veins (Dice similarity coefficient (%)/average surface distance (mm): 62%/4.9 mm for the vena cava and 34%/7.9 mm for the portal and splenic veins), with a smaller model and less computation compared to state-of-the-art methods. Furthermore, when applied to lung CT, the RAN achieves results comparable to the best-ranked networks (94%/3.0 mm), also with fewer parameters and less computation.
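The per-object refinement idea can be sketched in 1-D: a shared base motion is corrected by an object-specific residual selected via a label map, so neighboring organs can move discontinuously across their boundary. All names here are illustrative; this is a hypothetical reduction of the Residual Aligner module, which learns these residuals.

```python
def residual_align(base_motion, residuals, labels):
    # At each position i, add the residual field of the object labelled there
    # on top of the shared base motion; adjacent positions with different
    # labels may receive very different corrections (a sliding interface).
    return [base_motion[i] + residuals[labels[i]][i]
            for i in range(len(base_motion))]
```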


Subject(s)
Algorithms; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Motion; Lung/diagnostic imaging; Imaging, Three-Dimensional; Image Processing, Computer-Assisted/methods
16.
Front Digit Health ; 5: 1283726, 2023.
Article in English | MEDLINE | ID: mdl-38144260

ABSTRACT

This paper compares three finite element-based methods used in a physics-based non-rigid registration approach and reports on the progress made over the last 15 years. Large brain shifts caused by brain tumor removal affect registration accuracy by creating point and element outliers. A combination of approximation- and geometry-based point and element outlier rejection improves the rigid registration error by 2.5 mm and meets the real-time constraint (4 min). In addition, the paper raises several questions and presents two open problems for the robust estimation and improvement of registration error in the presence of outliers due to sparse, noisy, and incomplete data. It concludes with preliminary results on leveraging quantum computing, a promising new technology for computationally intensive problems such as feature detection and block matching in addition to the finite element solver; these three steps account for 75% of the computing time in deformable registration.

17.
Article in English | MEDLINE | ID: mdl-38501056

RESUMEN

Magnetic resonance imaging (MRI) has gained popularity in the field of prenatal imaging due to its ability to provide high-quality images of soft tissue. In this paper, we present a novel method for extracting different textural and morphological features of the placenta from MRI volumes using topographical mapping. We propose polar and planar topographical mapping methods to produce common placental features from a unique point of observation. The features extracted from the images include the entire placental surface, as well as thickness, intensity, and entropy maps displayed in a convenient two-dimensional format. The topography-based images may be useful for clinical placental assessment, computer-assisted diagnosis, and prediction of potential pregnancy complications.
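One of the feature maps named above, the entropy map, can be sketched as a sliding-window Shannon entropy over a 2D projection; the window size and bin count here are assumptions, not the paper's parameters:

```python
import numpy as np

def local_entropy_map(img, win=5, bins=32):
    """Shannon entropy of intensities in a sliding window."""
    img = np.asarray(img, float)
    lo, hi = img.min(), img.max()
    # Quantize intensities into `bins` levels for the histogram.
    q = np.clip(((img - lo) / (hi - lo + 1e-12) * bins).astype(int),
                0, bins - 1)
    pad = win // 2
    qp = np.pad(q, pad, mode="edge")
    out = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = qp[i:i + win, j:j + win].ravel()
            p = np.bincount(patch, minlength=bins) / patch.size
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()  # entropy in bits
    return out
```

Uniform regions yield zero entropy while textured regions score high, which is what makes such maps useful as a texture descriptor.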

18.
Proc IEEE Int Symp Biomed Imaging ; 2023: 899-903, 2021 Apr.
Article in English | MEDLINE | ID: mdl-38213549

RESUMEN

We introduce a strategy for learning image registration without acquired imaging data, producing powerful networks agnostic to magnetic resonance imaging (MRI) contrast. While classical methods accurately estimate the spatial correspondence between images, they solve an optimization problem for every new image pair. Learning methods are fast at test time but limited to images with contrasts and geometric content similar to those seen during training. We propose to remove this dependency using a generative strategy that exposes networks to a wide range of images synthesized from segmentations during training, forcing them to generalize across contrasts. We show that networks trained within this framework generalize to a broad array of unseen MRI contrasts and surpass classical state-of-the-art brain registration accuracy by up to 12.4 Dice points for a variety of tested contrast combinations. Critically, training on arbitrary shapes synthesized from noise distributions results in competitive performance, removing the dependency on acquired data of any kind. Additionally, since anatomical label maps are often available for the anatomy of interest, we show that synthesizing images from these dramatically boosts performance, while still avoiding the need for real intensity images during training.
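The core of the generative strategy described above is drawing a random intensity per anatomical label and corrupting the result, so that no two training images share a contrast. A minimal sketch of that idea follows; the noise level, blur, and sampling ranges are assumptions, and the actual method applies far richer augmentation:

```python
import numpy as np
from scipy import ndimage

def synth_image_from_labels(labels, sigma=1.0, rng=None):
    """Synthesize a random-contrast image from an integer label map."""
    rng = np.random.default_rng(rng)
    # One random mean intensity per label: a new "contrast" every call.
    means = rng.uniform(0.0, 1.0, labels.max() + 1)
    img = means[labels]
    img += rng.normal(0.0, 0.05, labels.shape)   # additive noise
    return ndimage.gaussian_filter(img, sigma)   # PSF-like smoothing
```

Training a registration network on pairs generated this way removes its dependence on any particular acquired MRI contrast.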

19.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-608323

RESUMEN

Objective: To investigate the feasibility of defining the radiotherapy target of primary liver cancer using deformable image registration of four-dimensional computed tomography (4DCT) and T2-weighted magnetic resonance (MR-T2) images. Methods: Ten patients with hepatocellular carcinoma (HCC) receiving radiotherapy for the first time were included in this study. 4DCT in free breathing and MR-T2 in deep breathing were acquired sequentially. The 4DCT was sorted into ten series of CT images according to respiratory phase. MIM software was used for deformable image registration. The accuracy of deformable image registration was assessed by the maximal displacements of the portal vein and the celiac trunk in three dimensions and the degree of liver overlap (P-LIVER). Gross tumor volume (GTV) was delineated on each series of CT images, and the internal GTV (IGTV) was merged from the ten GTVs on the 4DCT images in each phase. The MR-T2 image was deformably registered to the 4DCT images in each phase to acquire ten deformably registered GTVs (GTV-DR), which were merged to obtain the IGTV-DR. Differences between target volumes were compared by paired t-test. Results: The maximal displacements of the portal vein were 0.3±0.8 mm along the x-axis, 0.8±1.8 mm along the y-axis, and 0.5±1.5 mm along the z-axis; those of the celiac trunk were 0.1±1.0 mm, 0.7±1.2 mm, and 0.6±2.0 mm, respectively. The degree of overlap was 115.4±13.8%. The volumes of the GTVs obtained from the 4DCT images in each phase after deformable registration increased by an average of 8.18% (P<0.05) and were consistent with those delineated on the MR-T2 images. The IGTV after deformable registration increased by an average of 9.67% (P<0.05). Conclusions: MRI shows more information and higher soft-tissue contrast than CT, so MRI images should be combined with 4DCT images when delineating the GTV. This can better determine the extent and trajectory of the target and improve the delineation accuracy of the HCC target.
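The merging step described above, forming an internal GTV (IGTV) from the per-phase GTVs, is simply the voxel-wise union of the phase masks. A minimal sketch, assuming the masks are already resampled to a common grid:

```python
import numpy as np

def merge_igtv(gtv_masks):
    """Union of per-respiratory-phase GTV masks into an IGTV."""
    igtv = np.zeros_like(gtv_masks[0], dtype=bool)
    for mask in gtv_masks:
        igtv |= mask.astype(bool)  # voxel belongs to IGTV if in any phase
    return igtv
```

The same union applied to the ten deformably registered GTVs would yield the IGTV-DR compared in the study.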

20.
Biomedical Engineering Letters ; (4): 173-181, 2017.
Article in English | WPRIM (Western Pacific) | ID: wpr-656480

RESUMEN

In this paper, we extend our previous work on deformable image registration to inhomogeneous tissues. Inhomogeneous tissues include tissues with embedded tumors, which are common in clinical applications. This is a very challenging task, since a registration method that works for homogeneous tissues may not work well for inhomogeneous ones. The maximum error normally occurs in regions with tumors and often exceeds the acceptable error threshold. In this paper, we propose a new error-correction method with adaptive weighting to reduce the maximum registration error. Our previous fast deformable registration method is used in the inner loop. We have also proposed a new evaluation metric, the average error of the deformation field (AEDF), to evaluate registration accuracy in regions between vessels and bifurcation points. We validated the proposed method using liver MR images from human subjects. The AEDF results show that the proposed method greatly reduces the maximum registration error compared with the previous method without adaptive weighting. The proposed method has the potential to be used in clinical applications to reduce registration errors in regions with tumors.
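A metric in the spirit of AEDF can be sketched as the mean displacement error between an estimated and a reference deformation field, evaluated only at landmark voxels such as vessel and bifurcation points. The paper's exact definition may differ, and the availability of a reference field at those points is an assumption:

```python
import numpy as np

def aedf(dvf_est, dvf_ref, landmark_idx):
    """Mean displacement-vector error at landmark voxels.

    dvf_est, dvf_ref: arrays of shape (..., ndim) holding displacement
    vectors; landmark_idx: index tuple selecting the landmark voxels.
    """
    diff = dvf_est[landmark_idx] - dvf_ref[landmark_idx]
    return np.linalg.norm(diff, axis=-1).mean()
```

Restricting the average to vascular landmarks focuses the metric on exactly the regions where tumor-induced inhomogeneity degrades registration.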


Subject(s)
Humans , Liver , Methods