Results 1 - 15 of 15
1.
IEEE Trans Med Imaging; PP, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976465

ABSTRACT

Multi-modal medical images provide complementary soft-tissue characteristics that aid in the screening and diagnosis of diseases. However, limited scanning time, image corruption and various imaging protocols often result in incomplete multi-modal images, thus limiting the usage of multi-modal data for clinical purposes. To address this issue, in this paper, we propose a novel unified multi-modal image synthesis method for missing modality imputation. Our method adopts an overall generative adversarial architecture, which aims to synthesize missing modalities from any combination of available ones with a single model. To this end, we specifically design a Commonality- and Discrepancy-Sensitive Encoder for the generator to exploit both modality-invariant and modality-specific information contained in the input modalities. The incorporation of both types of information facilitates the generation of images with consistent anatomy and realistic details of the desired distribution. In addition, we propose a Dynamic Feature Unification Module to integrate information from a varying number of available modalities, which makes the network robust to randomly missing modalities. The module performs both hard integration and soft integration, ensuring the effectiveness of feature combination while avoiding information loss. Verified on two public multi-modal magnetic resonance datasets, the proposed method is effective in handling various synthesis tasks and shows superior performance compared to previous methods.
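A minimal sketch of what such a unification step could look like, assuming "hard" integration means an element-wise maximum over the available modality features and "soft" integration means an attention-weighted sum; the class name and both design choices are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class FeatureUnificationSketch(nn.Module):
    """Fuse feature maps from a variable number of available modalities."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution that scores each modality's feature map for the soft fusion
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats):
        # feats: list of (B, C, H, W) tensors, one per available modality
        stacked = torch.stack(feats, dim=1)                           # (B, M, C, H, W)
        hard = stacked.max(dim=1).values                              # "hard": element-wise max
        scores = torch.stack([self.score(f) for f in feats], dim=1)   # (B, M, 1, H, W)
        soft = (stacked * torch.softmax(scores, dim=1)).sum(dim=1)    # "soft": weighted sum
        return hard + soft

# usage: fused = FeatureUnificationSketch(64)([feat_t1, feat_t2, feat_flair])
```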

2.
Comput Biol Med; 166: 107467, 2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37725849

ABSTRACT

Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning motivated its wide adoption in multi-organ segmentation tasks. However, due to the expensive labor and expertise required, the availability of multi-organ annotations is usually limited and hence poses a challenge in obtaining sufficient training data for deep learning-based methods. In this paper, we aim to address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, which removes the dependence on annotated multi-organ data. To this end, we propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from multiple adapted single-organ segmentation models. Extensive experiments on four abdomen datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy.
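For intuition about why independent single-organ models can be combined at all, the naive fusion below merges per-organ probability maps into one multi-organ label map by taking the most confident organ per voxel; it is only an illustrative baseline, not the paper's Model Adaptation or Model Ensemble stage, and the threshold is an assumed parameter.

```python
import numpy as np

def merge_single_organ_probs(prob_maps, threshold=0.5):
    """prob_maps: list of (D, H, W) foreground-probability arrays, one per organ.
    Returns an integer label map: 0 = background, k = the k-th organ (1-based)."""
    probs = np.stack(prob_maps, axis=0)               # (K, D, H, W)
    best = probs.argmax(axis=0) + 1                   # most confident organ per voxel
    return np.where(probs.max(axis=0) >= threshold, best, 0).astype(np.int32)

# usage: labels = merge_single_organ_probs([liver_prob, spleen_prob, kidney_prob])
```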

3.
IEEE Trans Med Imaging; 42(10): 3091-3103, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37171932

ABSTRACT

Multi-modal tumor segmentation exploits complementary information from different modalities to help recognize tumor regions. Known multi-modal segmentation methods mainly have deficiencies in two aspects: First, the adopted multi-modal fusion strategies are built upon well-aligned input images, which are vulnerable to spatial misalignment between modalities (caused by respiratory motion, different scanning parameters, registration errors, etc.). Second, the performance of known methods remains subject to segmentation uncertainty, which is particularly acute in tumor boundary regions. To tackle these issues, in this paper, we propose a novel multi-modal tumor segmentation method with deformable feature fusion and uncertain region refinement. Concretely, we introduce a deformable aggregation module, which integrates feature alignment and feature aggregation in an ensemble, to reduce inter-modality misalignment and make full use of cross-modal information. Moreover, we devise an uncertain region inpainting module to refine uncertain pixels using neighboring discriminative features. Experiments on two clinical multi-modal tumor datasets demonstrate that our method achieves promising tumor segmentation results and outperforms state-of-the-art methods.


Subjects
Neoplasms, Humans, Uncertainty, Neoplasms/diagnostic imaging, Motion (Physics), Respiratory Rate
4.
IEEE J Biomed Health Inform; 27(1): 145-156, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35353708

ABSTRACT

Skin lesion segmentation is a fundamental procedure in computer-aided melanoma diagnosis. However, due to the diverse shapes, variable sizes, blurry boundaries, and noise interference of lesion regions, existing methods often struggle with intra-class inconsistency and poor inter-class discrimination. In view of this, we propose a novel method to learn and model inter-pixel correlations from both global and local aspects, which can increase inter-class variance and intra-class similarity. Specifically, under the encoder-decoder architecture, we first design a pyramid transformer inter-pixel correlations (PTIC) module, aiming at capturing non-local context information at different levels and further exploring global pixel-level relationships to deal with the large variance of shape and size. Further, we devise a local neighborhood metric learning (LNML) module to strengthen the learning of local semantic correlations and increase the separability between classes in the feature space. These two modules complementarily strengthen the feature representation capability by exploiting inter-pixel semantic correlations, thus further improving intra-class consistency and inter-class variance. Comprehensive experiments are performed on the public skin lesion segmentation datasets ISIC 2018, ISIC 2016, and PH2, and experimental results demonstrate that the proposed method achieves better segmentation performance than other state-of-the-art methods.


Subjects
Melanoma, Skin Diseases, Humans, Computer-Assisted Diagnosis, Electric Power Supplies, Semantics, Computer-Assisted Image Processing
5.
Comput Biol Med; 147: 105685, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35780602

ABSTRACT

Breast tumor segmentation plays a critical role in the diagnosis and treatment of breast diseases. Current breast tumor segmentation methods are mainly deep learning (DL) based methods, which extract the contrast information between tumors and backgrounds and produce tumor candidates. However, all these methods were developed based on traditional standard convolutions, which may not be able to model various tumor shapes and extract pure information of tumors (the extracted information usually contains non-tumor information). In addition, the loss functions used in these methods mainly aimed to minimize intra-class distances, while ignoring the influence of inter-class distances upon segmentation. In this paper, we propose a novel lesion morphology aware network to segment breast tumors in 2D magnetic resonance images (MRI). The proposed network employs a hierarchical structure that contains two stages: a breast segmentation stage and a tumor segmentation stage. In the tumor segmentation stage, we devise a tumor morphology aware network to incorporate pure tumor characteristics, which facilitates contrastive information extraction. Further, we propose a hybrid intra- and inter-class distance optimization loss to supervise the network, which minimizes intra-class distances while maximizing inter-class distances, hence reducing the potential false positive/negative pixels in segmentation results. Verified on a clinical 2D MRI breast tumor dataset, our proposed method achieves excellent segmentation results and outperforms state-of-the-art methods, implying that the proposed method has a good prospect for clinical use.
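A toy sketch of the kind of hybrid objective described here: pull each pixel embedding toward its own class centroid (intra-class term) while pushing the tumor and background centroids apart up to a margin (inter-class term). The abstract does not give the exact formulation, so the margin-based form and all parameters below are assumptions.

```python
import torch

def hybrid_distance_loss(embed: torch.Tensor, mask: torch.Tensor, margin: float = 2.0):
    """embed: (N, C) pixel embeddings; mask: (N,) binary labels, 1 = tumor, 0 = background.
    Assumes both classes are present in the batch."""
    fg, bg = embed[mask == 1], embed[mask == 0]
    c_fg, c_bg = fg.mean(dim=0), bg.mean(dim=0)                               # class centroids
    intra = (fg - c_fg).norm(dim=1).mean() + (bg - c_bg).norm(dim=1).mean()   # minimize intra-class distance
    inter = torch.relu(margin - (c_fg - c_bg).norm())                         # push centroids apart up to a margin
    return intra + inter
```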


Subjects
Breast Neoplasms, Computer-Assisted Image Processing, Breast, Breast Neoplasms/diagnostic imaging, Female, Humans, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging
6.
Comput Med Imaging Graph; 95: 102021, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34861622

ABSTRACT

Breast tumor segmentation is critical to the diagnosis and treatment of breast cancer. In clinical breast cancer analysis, experts often examine multi-modal images, since such images provide abundant complementary information on tumor morphology. Known multi-modal breast tumor segmentation methods extracted 2D tumor features and used information from one modality to assist another. However, these methods could not fuse multi-modal information efficiently, and might even fuse interfering information, due to the lack of effective information interaction management between modalities. In addition, these methods did not consider the effect of small tumor characteristics on the segmentation results. In this paper, we propose a new inter-modality information interaction network (IMIIN) to segment breast tumors in 3D multi-modal MRI. Our network employs a hierarchical structure to extract local information of small tumors, which facilitates precise segmentation of tumor boundaries. Under this structure, we present a 3D tiny object segmentation network based on DenseVoxNet to preserve the boundary details of the segmented tumors (especially for small tumors). Further, we introduce a bi-directional request-supply information interaction module between different modalities so that each modality can request helpful auxiliary information according to its own needs. Experiments on a clinical 3D multi-modal MRI breast tumor dataset show that our new 3D IMIIN is superior to state-of-the-art methods and attains better segmentation results, suggesting that our new method has a good clinical application prospect.


Subjects
Breast Neoplasms, Magnetic Resonance Imaging, Breast Neoplasms/diagnostic imaging, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging/methods
7.
IEEE J Biomed Health Inform; 26(2): 614-625, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34161249

ABSTRACT

Liver tumor segmentation (LiTS) is of primary importance in the diagnosis and treatment of hepatocellular carcinoma. Known automated LiTS methods could not yield satisfactory results for clinical use since they struggled to model variable tumor shapes and locations. In clinical practice, radiologists usually estimate tumor shape and size by a Response Evaluation Criteria in Solid Tumors (RECIST) mark. Inspired by this, in this paper, we explore a deep learning (DL) based interactive LiTS method, which incorporates guidance from user-provided RECIST marks. Our method takes a three-step framework to predict liver tumor boundaries. Under this architecture, we develop a RECIST mark propagation network (RMP-Net) to estimate RECIST-like marks in off-RECIST slices. We also devise a context-guided boundary-sensitive network (CGBS-Net) to distill tumors' contextual and boundary information from the corresponding RECIST(-like) marks, and then predict tumor maps. To further refine the segmentation results, we process the tumor maps using a 3D conditional random field (CRF) algorithm and a morphology hole-filling operation. Verified on two clinical contrast-enhanced abdomen computed tomography (CT) image datasets, our proposed approach produces promising segmentation results and outperforms state-of-the-art interactive segmentation methods.
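The morphology hole-filling step mentioned at the end can be illustrated with SciPy in a couple of lines; the 3D CRF refinement is omitted because its configuration is not given in the abstract, and the threshold below is an assumed parameter.

```python
import numpy as np
from scipy import ndimage

def fill_holes_3d(tumor_prob: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a predicted tumor map and fill enclosed holes (illustrative post-processing only)."""
    return ndimage.binary_fill_holes(tumor_prob > threshold)

# usage: refined_mask = fill_holes_3d(tumor_map)
```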


Subjects
Hepatocellular Carcinoma, Liver Neoplasms, Hepatocellular Carcinoma/diagnostic imaging, Hepatocellular Carcinoma/therapy, Humans, Computer-Assisted Image Processing/methods, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/therapy, Response Evaluation Criteria in Solid Tumors, X-Ray Computed Tomography/methods
8.
IEEE Trans Med Imaging; 39(12): 3831-3842, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32746126

ABSTRACT

Metal artifacts commonly appear in computed tomography (CT) images of patients with metal implants and can affect disease diagnosis. Known deep learning and traditional metal trace restoring methods did not effectively restore details and sinogram consistency information in X-ray CT sinograms, hence often causing considerable secondary artifacts in CT images. In this paper, we propose a new cross-domain metal trace restoring network that promotes sinogram consistency while reducing metal artifacts and recovering tissue details in CT images. Our new approach includes a cross-domain procedure that ensures information exchange between the image domain and the sinogram domain, so that they promote and complement each other. Under this cross-domain structure, we develop a hierarchical analytic network (HAN) to recover fine details of the metal trace, and utilize the perceptual loss to guide HAN to concentrate on absorbing sinogram consistency information of the metal trace. To allow our entire cross-domain network to be trained end-to-end efficiently and to reduce graphics memory usage and time cost, we propose effective and differentiable forward projection (FP) and filtered back-projection (FBP) layers based on the FP and FBP algorithms. We use both simulated and clinical datasets in three different clinical scenarios to evaluate our proposed network's practicality and universality. Both quantitative and qualitative evaluation results show that our new network outperforms state-of-the-art metal artifact reduction methods. In addition, the elapsed-time analysis shows that our proposed method meets the clinical time requirement.
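For intuition about the forward-projection (FP) and filtered back-projection (FBP) operators that the paper wraps into differentiable layers, a non-differentiable round trip can be sketched with scikit-image (recent versions); the paper's contribution is making these operators differentiable inside the network, which this sketch does not do.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (256, 256))
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)                         # FP: image -> sinogram
recon = iradon(sinogram, theta=theta, filter_name="ramp")    # FBP: sinogram -> image
print("round-trip mean abs error:", float(np.abs(recon - image).mean()))
```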


Subjects
Artifacts, Computer-Assisted Image Processing, Algorithms, Humans, Metals, Imaging Phantoms, X-Ray Computed Tomography, X-Rays
9.
Med Phys; 47(9): 4087-4100, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32463485

ABSTRACT

PURPOSE: Metal implants in the patient's body can generate severe metal artifacts in x-ray computed tomography (CT) images. These artifacts may cover the tissues around the metal implants in CT images and even corrupt the tissue regions, thus affecting disease diagnosis using these images. Previous deep learning metal trace inpainting methods used both valid pixels of uncorrupted areas and invalid pixels of corrupted areas to patch the metal trace (i.e., the holes of removed metal-corrupted regions). Such methods cannot recover fine details well and often suffer from information mismatch due to the interference of invalid pixels, thus incurring considerable secondary artifacts. In this paper, we develop a new irregular metal trace inpainting network for reducing metal artifacts. METHODS: We develop a new deep learning network to patch irregular metal trace in metal-corrupted sinograms to reduce metal artifacts for isometric fan-beam CT. Our new method patches irregular metal trace in CT sinograms using only valid pixels, avoiding interference from invalid pixels. Furthermore, to enable the inpainting network to recover as many details as possible, we design an auxiliary inpainting network to suppress probable secondary artifacts in CT images and assist fine-detail restoration. The image produced by the auxiliary network is then projected onto a sinogram via a forward projection (FP) algorithm and is fused with the sinogram predicted by the inpainting network in order to predict the final recovered sinogram. Our entire network is trained end-to-end to extract cross-domain information between the sinogram domain and the CT image domain. RESULTS: We compare our proposed method with two traditional and four deep learning-based metal trace inpainting methods, and with an iterative reconstruction method, on four datasets: dental fillings (panoramic and local perspectives), hip prostheses, and spine fixations. We use both quantitative and qualitative indices to evaluate our method, and the analyses suggest that our method reduces metal artifacts the most and produces the best-quality CT images. Additionally, our proposed method takes 0.1512 s on average to process a CT slice, which meets the clinical requirement. CONCLUSIONS: This paper proposes a new deep learning network to patch irregular metal trace in corrupted sinograms to reduce metal artifacts. Our method restores more fine details in the irregular metal trace and has a superior capability for metal artifact reduction compared with state-of-the-art methods.


Subjects
Artifacts, X-Ray Computed Tomography, Algorithms, Humans, Computer-Assisted Image Processing, Metals, Imaging Phantoms, X-Rays
10.
Med Phys; 46(5): 2064-2073, 2019 May.
Article in English | MEDLINE | ID: mdl-30927448

ABSTRACT

PURPOSE: Chest X-ray is one of the most common examinations for diagnosing heart and lung diseases. Because of the large number of clinical cases, many automated diagnosis algorithms based on chest X-ray images have been proposed. To our knowledge, almost none of the previous auto-diagnosis algorithms consider the effect of relative location information on disease incidence. In this study, we propose to use relative location information to assist the identification of thorax diseases. METHOD: In this work, U-Net is used to segment the lungs and heart from the chest image. The relative location maps are computed through Euclidean distance transformation from the segmented masks. By introducing the relative location information into the network, the typical location of a disease is combined with its incidence. The proposed network is a fusion of two branches: a mask branch and an image branch. The mask branch is designed as a bottom-up and top-down structure to extract relative location information. The structure has a large receptive field, which can extract more information for large lesions and contextual information for small lesions. The features learned from the mask branch are fused with the image branch, which is a 121-layer DenseNet. RESULTS: We compare our proposed method with four state-of-the-art methods on the largest public chest X-ray dataset, ChestX-ray14. The proposed method achieves an area under the curve (AUC) of 0.820, which outperforms all the existing models and algorithms. CONCLUSION: This paper proposes a dense network with relative location information to identify thorax diseases. The method combines the typical location of a disease with its incidence for the first time and performs well.
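The relative location maps described in the METHOD section can be reproduced in a few lines: a Euclidean distance transform of the complement of a segmented mask gives, for every pixel, its distance to the nearest lung or heart pixel. The normalization below is an assumption, since the abstract does not state how the maps are scaled.

```python
import numpy as np
from scipy import ndimage

def relative_location_map(organ_mask: np.ndarray) -> np.ndarray:
    """Distance of every pixel to the nearest pixel of the segmented organ (0 inside the organ)."""
    dist = ndimage.distance_transform_edt(~organ_mask.astype(bool))
    return dist / max(dist.max(), 1e-8)          # assumed normalization to [0, 1]

# usage: lung_map = relative_location_map(lung_mask); heart_map = relative_location_map(heart_mask)
```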


Subjects
Algorithms, Computer-Assisted Diagnosis/methods, Lung Diseases/diagnosis, Neural Networks (Computer), Computer-Assisted Radiographic Image Interpretation/methods, Thoracic Radiography/methods, Thoracic Diseases/diagnosis, Factual Databases, Humans, Lung Diseases/diagnostic imaging, Automated Pattern Recognition, Predictive Value of Tests, Thoracic Diseases/diagnostic imaging
12.
Biomed Eng Online; 16(1): 8, 2017 Jan 10.
Article in English | MEDLINE | ID: mdl-28086888

ABSTRACT

BACKGROUND: To improve the accuracy of ultrasound-guided biopsy of the prostate, the non-rigid registration of magnetic resonance (MR) images onto transrectal ultrasound (TRUS) images has gained increasing attention. Mutual information (MI) is a widely used similarity criterion in MR-TRUS image registration. However, the use of MI has been challenged because of intensity distortion, noise and down-sampling. Hence, the MI measure needs to be improved to obtain better registration results. METHODS: We present a novel two-dimensional non-rigid MR-TRUS registration algorithm that uses correlation ratio-based mutual information (CRMI) as the similarity criterion. CRMI includes a functional mapping of intensity values on the basis of a generalized version of intensity class correspondence. We also analytically derive the derivative of CRMI with respect to the deformation parameters. Furthermore, we propose an improved stochastic gradient descent (ISGD) optimization method based on the Metropolis acceptance criterion to improve the global optimization ability and decrease the registration time. RESULTS: The performance of the proposed method is tested on synthetic images and 12 pairs of clinical prostate TRUS and MR images. Compared with the label map registration framework (LMRF) and conditional mutual information (CMI), the proposed algorithm achieves a significant improvement in the average Hausdorff distance and target registration error. Although the average Dice similarity coefficient is not significantly better than that of CMI, it is still substantially higher than that of LMRF. The average computation time of the proposed method is similar to that of LMRF and about 16 times less than that of CMI. CONCLUSION: With more accurate matching performance and lower sensitivity to noise and down-sampling, the proposed algorithm of minimizing CRMI by ISGD is more robust and has the potential for use in aligning TRUS and MR images for needle biopsy.
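The Metropolis acceptance rule that drives the improved stochastic gradient descent (ISGD) can be sketched generically: a proposed step that worsens the cost is still accepted with probability exp(-Δ/T), which gives the optimizer a chance to escape local minima. The cost, gradient, and temperature below are placeholders, not the paper's registration-specific choices.

```python
import numpy as np

def metropolis_sgd_step(params, cost_fn, grad_fn, lr=0.01, temperature=1.0,
                        rng=np.random.default_rng()):
    """One gradient step accepted or rejected by the Metropolis criterion."""
    proposal = params - lr * grad_fn(params)       # ordinary (stochastic) gradient step
    delta = cost_fn(proposal) - cost_fn(params)    # change in the registration cost
    if delta <= 0 or rng.random() < np.exp(-delta / temperature):
        return proposal                            # accept improvements, or worse steps with prob. exp(-delta/T)
    return params                                  # otherwise keep the current parameters
```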


Subjects
Biopsy/methods, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Prostate/diagnostic imaging, Prostate/pathology, Rectum, Computer-Assisted Surgery/methods, Humans, Male, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Ultrasonography
13.
Biomed Eng Online; 16(1): 1, 2017 Jan 05.
Article in English | MEDLINE | ID: mdl-28086973

ABSTRACT

BACKGROUND: Metal objects implanted in the bodies of patients usually generate severe streaking artifacts in reconstructed X-ray computed tomography images, which degrade the image quality and affect the diagnosis of disease. Therefore, it is essential to reduce these artifacts to meet clinical demands. METHODS: In this work, we propose a Gaussian diffusion sinogram inpainting metal artifact reduction algorithm based on prior images to reduce these artifacts for fan-beam computed tomography reconstruction. In this algorithm, prior information that originated from a tissue-classified prior image is used for the inpainting of metal-corrupted projections, and it is incorporated into a Gaussian diffusion function. The prior knowledge is particularly designed to locate the diffusion position and improve the sparsity of the subtraction sinogram, which is obtained by subtracting the prior sinogram of the metal regions from the original sinogram. The sinogram inpainting algorithm is implemented through an approach of diffusing prior energy and is then solved by gradient descent. The performance of the proposed metal artifact reduction algorithm is compared with two conventional metal artifact reduction algorithms, namely the interpolation metal artifact reduction algorithm and the normalized metal artifact reduction algorithm. The experimental datasets included both simulated and clinical datasets. RESULTS: Evaluated subjectively, the proposed metal artifact reduction algorithm causes fewer secondary artifacts than the two conventional metal artifact reduction algorithms, which lead to severe secondary artifacts resulting from improper interpolation and normalization. Additionally, the objective evaluation shows that the proposed approach has the smallest normalized mean absolute deviation and the highest signal-to-noise ratio, indicating that the proposed method produces images with the best quality. CONCLUSIONS: For both the simulated and the clinical datasets, the proposed algorithm clearly reduces metal artifacts.
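For context, the interpolation-based baseline this algorithm is compared against can be written in a few lines: for each projection angle, the detector bins covered by the metal trace are replaced by linear interpolation from the neighboring uncorrupted bins. The proposed method replaces this simple interpolation with prior-guided Gaussian diffusion, which is not reproduced here.

```python
import numpy as np

def linear_interp_mar(sinogram: np.ndarray, metal_trace: np.ndarray) -> np.ndarray:
    """sinogram, metal_trace: (angles, detectors); metal_trace is a boolean corruption mask."""
    inpainted = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for a in range(sinogram.shape[0]):
        bad = metal_trace[a]
        if bad.any() and (~bad).any():
            # replace corrupted bins by linear interpolation from uncorrupted neighbors
            inpainted[a, bad] = np.interp(bins[bad], bins[~bad], sinogram[a, ~bad])
    return inpainted
```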


Subjects
Artifacts, Computer-Assisted Image Processing/methods, Metals, X-Ray Computed Tomography, Algorithms, Dental Prosthesis, Diffusion, Hip Prosthesis, Humans, Normal Distribution
14.
Biomed Res Int; 2016: 2180457, 2016.
Article in English | MEDLINE | ID: mdl-27725935

ABSTRACT

Low-dose computed tomography (CT) reconstruction is a challenging problem in medical imaging. To complement the standard filtered back-projection (FBP) reconstruction, sparse regularization reconstruction has gained increasing research attention, as it promises to reduce radiation dose, suppress artifacts, and improve noise properties. In this work, we present an iterative reconstruction approach using improved smoothed l0 (SL0) norm regularization, which approximates the l0 norm by a family of continuous functions to fully exploit the sparseness of the image gradient. Due to the excellent sparse representation of the reconstructed signal, the desired tissue details are preserved in the resulting images. To evaluate the performance of the proposed SL0 regularization method, we reconstruct simulated datasets acquired from the Shepp-Logan phantom and a clinical head slice image. Additional experimental verification is also performed with two real datasets from scanned animal experiments. Compared to the referenced FBP reconstruction and the total variation (TV) regularization reconstruction, the results clearly reveal the strengths of the presented method; in particular, it improves reconstruction quality by reducing noise while preserving anatomical features.
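The smoothed l0 (SL0) idea can be made concrete with the standard Gaussian family, ||x||_0 ≈ n - Σ_i exp(-x_i²/(2σ²)), where σ is gradually decreased so the approximation tightens toward the true l0 norm. This is the generic SL0 formulation; the paper applies an improved variant to the image gradient inside an iterative CT reconstruction, which is not shown.

```python
import numpy as np

def smoothed_l0(x: np.ndarray, sigma: float) -> float:
    """Continuous approximation of the l0 norm; exact in the limit sigma -> 0."""
    return x.size - np.exp(-(x ** 2) / (2.0 * sigma ** 2)).sum()

x = np.array([0.0, 0.5, 0.0, -2.0])          # two nonzero entries
for sigma in (1.0, 0.1, 0.01):
    print(sigma, smoothed_l0(x, sigma))      # approaches 2 as sigma shrinks
```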


Subjects
Computer-Assisted Radiographic Image Interpretation/methods, X-Ray Computed Tomography/methods, Algorithms, Animals, Artifacts, Computer Simulation, Head/diagnostic imaging, Humans, Mice, Statistical Models, Imaging Phantoms, Radiographic Image Enhancement/methods, Reproducibility of Results, Signal-to-Noise Ratio
15.
Biomed Eng Online; 15(1): 66, 2016 Jun 18.
Article in English | MEDLINE | ID: mdl-27316680

ABSTRACT

BACKGROUND: In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of high-quality recovery from sparsely sampled data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which causes reconstruction quality to deteriorate as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for greater dose reduction. METHODS: In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving the new weighted L2-minimization problem based on IRLS (iteratively reweighted least squares). RESULTS: Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. The results reveal that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is further reduced or the noise level increases. CONCLUSION: The proposed L1-DL algorithm can utilize more prior information of image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with the L1-norm one and solving the L1-minimization problem with an IRLS strategy, L1-DL can reconstruct the image more accurately.
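The weighting strategy described in METHODS, solving the L1 problem through a sequence of weighted L2 problems, is the classical IRLS recipe, sketched below for a plain L1-regularized least-squares problem; the paper applies the same idea inside a dictionary-learning reconstruction, which this sketch does not reproduce.

```python
import numpy as np

def irls_l1(A, b, lam=0.1, iters=30, eps=1e-6):
    """Approximately solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via IRLS."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]                   # least-squares initialization
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)                            # weights from the current estimate
        # each iteration solves the weighted L2 surrogate: (A^T A + lam*diag(w)) x = A^T b
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x
```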


Subjects
Computer-Assisted Image Processing/methods, Machine Learning, Radiation Doses, X-Ray Computed Tomography, Humans, Least-Squares Analysis, Signal-to-Noise Ratio