Results 1 - 20 of 44
1.
Med Phys ; 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39078069

ABSTRACT

BACKGROUND: Deep learning (DL) techniques have been extensively applied in medical image classification. The unique characteristics of medical imaging data present challenges, including small labeled datasets, severely imbalanced class distributions, and significant variations in imaging quality. Recently, generative adversarial network (GAN)-based classification methods have gained attention for their ability to enhance classification accuracy by incorporating realistic GAN-generated images as data augmentation. However, the performance of these GAN-based methods often relies on high-quality generated images, and large amounts of training data are required for GAN models to achieve optimal performance. PURPOSE: In this study, we propose an adversarial learning-based classification framework to achieve better classification performance. Innovatively, GAN models are employed as supplementary regularization terms to support classification, aiming to address the challenges described above. METHODS: The proposed classification framework, GAN-DL, consists of a feature extraction network (F-Net), a classifier, and two adversarial networks: a reconstruction network (R-Net) and a discriminator network (D-Net). The F-Net extracts features from input images, and the classifier uses these features for classification tasks. R-Net and D-Net follow the GAN architecture: R-Net employs the extracted features to reconstruct the original images, while D-Net discriminates between the reconstructed and original images. An iterative adversarial learning strategy guides model training by incorporating multiple network-specific loss functions. These loss functions, serving as supplementary regularization, are automatically derived during the reconstruction process and require no additional data annotation.
RESULTS: To verify the model's effectiveness, we performed experiments on two datasets: a COVID-19 dataset with 13 958 chest x-ray images and an oropharyngeal squamous cell carcinoma (OPSCC) dataset with 3255 positron emission tomography images. Thirteen classic DL-based classification methods were implemented on the same datasets for comparison. Performance metrics included precision, sensitivity, specificity, and F1-score. In addition, we conducted ablation studies to assess the effects of various factors on model performance, including the network depth of F-Net, training image size, training dataset size, and loss function design. Our method outperformed all comparative methods. On the COVID-19 dataset, it achieved 95.4% ± 0.6%, 95.3% ± 0.9%, 97.7% ± 0.4%, and 95.3% ± 0.9% in terms of precision, sensitivity, specificity, and F1-score, respectively. It achieved 96.2% ± 0.7% across all these metrics on the OPSCC dataset. The study of the two adversarial networks highlights the crucial role of D-Net in improving model performance. Ablation studies further provide an in-depth understanding of our methodology. CONCLUSION: Our adversarial classification framework leverages GAN-based adversarial networks and an iterative adversarial learning strategy to harness supplementary regularization during training. This design significantly enhances classification accuracy and mitigates overfitting in medical image datasets. Moreover, its modular design not only demonstrates flexibility but also indicates its potential applicability to various clinical contexts and medical imaging applications.
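The abstract describes using GAN-derived losses as supplementary regularization rather than for data augmentation. A minimal numpy sketch of how such a combined training objective could be assembled is shown below; the function names, loss forms, and weights `lam_rec`/`lam_adv` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cross_entropy(probs, labels, eps=1e-12):
    """Mean categorical cross-entropy over one-hot labels."""
    return float(-np.mean(np.sum(labels * np.log(probs + eps), axis=1)))

def total_loss(cls_probs, labels, images, recon, d_fake,
               lam_rec=0.1, lam_adv=0.01):
    """Classification loss plus two GAN-derived regularization terms:
    an L2 reconstruction term (R-Net side) and a non-saturating
    generator-style adversarial term on D-Net's scores for the
    reconstructed images. Weights are illustrative hyperparameters."""
    l_cls = cross_entropy(cls_probs, labels)
    l_rec = float(np.mean((images - recon) ** 2))
    l_adv = float(-np.mean(np.log(d_fake + 1e-12)))
    return l_cls + lam_rec * l_rec + lam_adv * l_adv
```

Note how the regularization terms require only the input images themselves, which matches the abstract's point that no additional annotation is needed.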

2.
Int J Comput Assist Radiol Surg ; 19(2): 273-281, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37796413

ABSTRACT

PURPOSE: Fully convolutional neural network architectures have proven useful for brain tumor segmentation tasks. However, their ability to learn long-range dependencies is limited by their localized receptive fields. On the other hand, vision transformers (ViTs), based on a multi-head self-attention mechanism that generates attention maps to aggregate spatial information dynamically, have outperformed convolutional neural networks (CNNs). Inspired by the recent success of ViT models for medical image segmentation, we propose in this paper a new network based on the Swin transformer for semantic brain tumor segmentation. METHODS: The proposed method combines Transformer and CNN modules in an encoder-decoder structure. The encoder incorporates ELSA (enhanced local self-attention) transformer blocks to enhance local detailed feature extraction. The extracted feature representations are fed to the decoder part via skip connections. The encoder also includes channel squeeze and spatial excitation blocks, which make the extracted features more informative both spatially and channel-wise. RESULTS: The method is evaluated on the public BraTS 2021 dataset containing 1251 cases of brain images, each with four 3D MRI modalities. Our proposed approach achieved excellent segmentation results, with an average Dice score of 89.77% and an average Hausdorff distance of 8.90 mm. CONCLUSION: We developed an automated framework for brain tumor segmentation using the Swin transformer and enhanced local self-attention. Experimental results show that our method outperforms state-of-the-art 3D algorithms for brain tumor segmentation.
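The "channel squeeze and spatial excitation" operation mentioned in the methods can be sketched in a few lines: the channel dimension is squeezed to a single spatial attention map, which then rescales the feature map. This numpy sketch assumes a 1x1 convolution reduced to a per-channel weight vector `w`; it illustrates the mechanism, not the paper's trained block.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_excitation(feat, w):
    """Channel squeeze & spatial excitation (sSE), illustrative form.

    feat : (C, H, W) feature map
    w    : (C,) weights of a 1x1 convolution squeezing the channel
           dimension into one spatial attention map
    Returns the feature map recalibrated at each spatial location.
    """
    attn = sigmoid(np.tensordot(w, feat, axes=([0], [0])))  # (H, W)
    return feat * attn[None, :, :]
```

The attention map lies in (0, 1), so informative spatial locations are preserved while others are suppressed.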


Subjects
Brain Neoplasms; Humans; Brain Neoplasms/diagnostic imaging; Brain; Algorithms; Learning; Neural Networks, Computer; Image Processing, Computer-Assisted
3.
Head Neck Tumor Chall (2022) ; 13626: 1-30, 2023.
Article in English | MEDLINE | ID: mdl-37195050

ABSTRACT

This paper presents an overview of the third edition of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge, organized as a satellite event of the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2022. The challenge comprises two tasks related to the automatic analysis of FDG-PET/CT images for patients with Head and Neck cancer (H&N), focusing on the oropharynx region. Task 1 is the fully automatic segmentation of H&N primary Gross Tumor Volume (GTVp) and metastatic lymph nodes (GTVn) from FDG-PET/CT images. Task 2 is the fully automatic prediction of Recurrence-Free Survival (RFS) from the same FDG-PET/CT and clinical data. The data were collected from nine centers for a total of 883 cases consisting of FDG-PET/CT images and clinical information, split into 524 training and 359 test cases. The best methods obtained an aggregated Dice Similarity Coefficient (DSCagg) of 0.788 in Task 1, and a Concordance index (C-index) of 0.682 in Task 2.

4.
Comput Med Imaging Graph ; 106: 102218, 2023 06.
Article in English | MEDLINE | ID: mdl-36947921

ABSTRACT

Brain tumors are among the leading causes of cancer death, and high-grade brain tumors are likely to recur even after standard treatment. Therefore, developing a method to predict brain tumor recurrence location plays an important role in treatment planning and can potentially prolong patients' survival time. Little work has addressed this issue so far. In this paper, we present a deep learning-based brain tumor recurrence location prediction network. Since the available dataset is usually small, we propose to use transfer learning to improve the prediction. We first train a multi-modal brain tumor segmentation network on the public BraTS 2021 dataset. Then, the pre-trained encoder is transferred to our private dataset to extract rich semantic features. Following that, a multi-scale multi-channel feature fusion model and a nonlinear correlation learning module are developed to learn effective features. The correlation between multi-channel features is modeled by a nonlinear equation. To measure the similarity between the distributions of the original features of one modality and the estimated correlated features of another modality, we propose to use the Kullback-Leibler divergence. Based on this divergence, a correlation loss function is designed to maximize the similarity between the two feature distributions. Finally, two decoders are constructed to jointly segment the present brain tumor and predict its future recurrence location. To the best of our knowledge, this is the first work that can segment the present tumor and at the same time predict the future recurrence location, making treatment planning more efficient and precise. The experimental results demonstrate the effectiveness of our proposed method in predicting brain tumor recurrence location from a limited dataset.
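The abstract's correlation loss is built on the Kullback-Leibler divergence between two feature distributions. For reference, a minimal discrete-KL sketch in numpy (the paper applies it to learned feature distributions; here plain probability vectors stand in):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D_KL(P || Q) = sum_i p_i * log(p_i / q_i).

    p, q : probability vectors (non-negative, summing to 1).
    A small eps guards against log(0)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```

A correlation loss in the spirit of the abstract would minimize this quantity between one modality's feature distribution and the estimated correlated features of another, driving the two distributions together.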


Subjects
Brain Neoplasms; Neoplasm Recurrence, Local; Humans; Brain Neoplasms/diagnostic imaging; Brain; Image Processing, Computer-Assisted
5.
Comput Med Imaging Graph ; 104: 102167, 2023 03.
Article in English | MEDLINE | ID: mdl-36584536

ABSTRACT

Multimodal MR brain tumor segmentation is one of the most active topics in the medical image processing community. However, acquiring the complete set of MR modalities is not always possible in clinical practice, due to acquisition protocols, image corruption, scanner availability, scanning cost, or allergies to certain contrast materials. The missing information can constrain brain tumor diagnosis, monitoring, treatment planning, and prognosis. Thus, it is highly desirable to develop brain tumor segmentation methods that address the missing-modality problem. Based on recent advancements, in this review we provide a detailed analysis of the missing-modality issue in MR-based brain tumor segmentation. First, we briefly introduce the biomedical background concerning brain tumors, MR imaging techniques, and the current challenges in brain tumor segmentation. Then, we provide a taxonomy of state-of-the-art methods in five categories: image synthesis-based, latent feature space-based, multi-source correlation-based, knowledge distillation-based, and domain adaptation-based methods. The principles, architectures, benefits, and limitations of each category are elaborated. Following that, the corresponding datasets and widely used evaluation metrics are described. Finally, we analyze the current challenges and provide a prospect for future development trends. This review aims to give readers a thorough knowledge of the recent contributions to brain tumor segmentation with missing modalities and to suggest potential future directions.


Assuntos
Neoplasias Encefálicas , Imageamento por Ressonância Magnética , Humanos , Imageamento por Ressonância Magnética/métodos , Neoplasias Encefálicas/diagnóstico por imagem , Processamento de Imagem Assistida por Computador/métodos , Encéfalo , Imagem Multimodal/métodos
6.
Comput Biol Med ; 151(Pt A): 106230, 2022 12.
Article in English | MEDLINE | ID: mdl-36306574

ABSTRACT

Accurate lymphoma segmentation in PET/CT images is important for evaluating Diffuse Large B-Cell Lymphoma (DLBCL) prognosis. Because DLBCL is a systemic disease with multiple lesions, lesion number and size vary across patients, which makes DLBCL labeling labor-intensive and time-consuming. To reduce the reliance on accurately labeled datasets, a weakly supervised deep learning method based on multi-scale feature similarity is proposed for automatic lymphoma segmentation. Weak labeling was performed by randomly drawing a small, salient lymphoma volume for each patient without accurate labels. A 3D V-Net is used as the backbone of the segmentation network, and image features extracted in different convolutional layers are fused with the Atrous Spatial Pyramid Pooling (ASPP) module to generate multi-scale feature representations of input images. By imposing multi-scale feature consistency constraints on the predicted tumor regions as well as the labeled tumor regions, weakly labeled data can also be used effectively for network training. Cosine similarity, which generalizes well, is used to measure feature distances. The proposed method is evaluated on a PET/CT dataset of 147 lymphoma patients. Experimental results show that, when using data of which half have accurate labels and half have weak labels, the proposed method performed similarly to a fully supervised segmentation network and achieved an average Dice Similarity Coefficient (DSC) of 71.47%. The proposed method is thus able to reduce the requirement for expert annotations in deep learning-based lymphoma segmentation.
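The consistency constraint described above compares multi-scale features with cosine similarity. A minimal numpy sketch of such a loss follows; the per-scale pooling that would produce `pred_feats` and `labeled_feats` is assumed to have happened upstream.

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-12):
    """Cosine similarity between two flattened feature vectors."""
    a, b = np.ravel(a), np.ravel(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def consistency_loss(pred_feats, labeled_feats):
    """Multi-scale feature consistency: one (1 - cosine) term per scale,
    comparing features pooled from predicted tumor regions with those
    from the (weakly) labeled regions. Zero when features align."""
    return float(np.mean([1.0 - cosine_similarity(p, l)
                          for p, l in zip(pred_feats, labeled_feats)]))
```

Because the loss depends only on feature agreement, not on voxel-accurate masks, it can be driven by weak labels as the abstract describes.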


Subjects
Lymphoma; Neoplasms; Humans; Positron Emission Tomography Computed Tomography/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Lymphoma/diagnostic imaging
7.
Comput Biol Med ; 151(Pt A): 106208, 2022 12.
Article in English | MEDLINE | ID: mdl-36306580

ABSTRACT

BACKGROUND AND OBJECTIVES: Predicting patient response to treatment and survival in oncology is a prominent path towards precision medicine. To this end, radiomics has been proposed as a field of study in which images are used instead of invasive methods. The first step in radiomic analysis in oncology is lesion segmentation. However, this task is time-consuming and can be subjective across physicians. Automated tools based on supervised deep learning have made great progress in helping physicians. However, they are data-hungry, and annotated data remain a major issue in the medical field, where only a small subset of images is annotated. METHODS: In this work, we propose a multi-task, multi-scale learning framework to predict patients' survival and treatment response. We show that the encoder can leverage multiple tasks to extract meaningful and powerful features that improve radiomic performance. We also show that subsidiary tasks serve as an inductive bias that helps the model generalize better. RESULTS: Our model was tested and validated for treatment response and survival in esophageal and lung cancers, with areas under the ROC curve of 77% and 71%, respectively, outperforming single-task learning methods. CONCLUSIONS: Multi-task multi-scale learning enables higher radiomic performance by extracting rich information from intratumoral and peritumoral regions.


Subjects
Lung Neoplasms; Humans; Lung Neoplasms/pathology; Imaging, Three-Dimensional; ROC Curve; Positron-Emission Tomography/methods
8.
J Imaging ; 8(5)2022 May 09.
Article in English | MEDLINE | ID: mdl-35621894

ABSTRACT

Radiomic characteristics extracted from the tumor region have been shown to be predictive. The first step in radiomic analysis is the segmentation of the lesion. However, this task is time-consuming and requires a highly trained physician. The process could be automated using computer-aided detection (CAD) tools. Current state-of-the-art methods are trained in a supervised learning setting, which requires a lot of data that are usually not available in the medical imaging field. The challenge is to train one model to segment different types of tumors with only a weak segmentation ground truth. In this work, we propose a prediction framework including a 3D tumor segmentation in positron emission tomography (PET) images, based on a weakly supervised deep learning method, and an outcome prediction based on a 3D-CNN classifier applied to the segmented tumor regions. The key step is to locate the tumor in 3D. We propose to (1) compute two maximum intensity projection (MIP) images from the 3D PET images in two directions, (2) classify the MIP images into different types of cancers, (3) generate class activation maps through a multitask learning approach with weak prior knowledge, and (4) segment the 3D tumor region from the two 2D activation maps with a new loss function proposed for the multitask setting. The proposed approach achieves state-of-the-art prediction results with a small dataset and a weak segmentation ground truth. Our model was tested and validated for treatment response and survival in lung and esophageal cancers on 195 patients, with areas under the receiver operating characteristic curve (AUC) of 67% and 59%, respectively, and Dice coefficients of 73% and 77% for tumor segmentation.
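Step (1) above, computing maximum intensity projections of a 3D PET volume in two directions, is a one-liner per direction in numpy. The particular axes below (collapsing y, then x) are illustrative assumptions; the abstract only specifies that two directions are used.

```python
import numpy as np

def mips_two_directions(volume):
    """Maximum intensity projections of a 3D volume indexed (z, y, x),
    collapsing one spatial axis per projection."""
    mip_a = volume.max(axis=1)  # collapse y -> (z, x) projection
    mip_b = volume.max(axis=2)  # collapse x -> (z, y) projection
    return mip_a, mip_b
```

Both 2D projections retain the brightest voxel along each ray, which is why hot PET lesions survive the reduction and can be localized from the two views.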

9.
IEEE Trans Radiat Plasma Med Sci ; 6(2): 231-244, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35520102

ABSTRACT

Predicting early in treatment whether a tumor is likely to be responsive is a difficult yet important task to support clinical decision-making. Studies have shown that multimodal biomarkers can provide complementary information and lead to more accurate treatment outcome prognosis than unimodal biomarkers. However, prognosis accuracy can be affected by multimodal data heterogeneity and incompleteness. Small and imbalanced datasets bring additional challenges for training a prognosis model. In this study, a modular framework employing multimodal biomarkers for cancer treatment outcome prediction is proposed. It includes four modules, for synthetic data generation, deep feature extraction, multimodal feature fusion, and classification, to address the challenges described above. The feasibility and advantages of the designed framework were demonstrated through an example study whose goal was to stratify oropharyngeal squamous cell carcinoma (OPSCC) patients with low and high risks of treatment failure by use of positron emission tomography (PET) image data and microRNA (miRNA) biomarkers. The superior prognosis performance and the comparison with other methods demonstrate the efficiency of the proposed framework and its ability to enable seamless integration, validation, and comparison of various algorithms in each module. Limitations and future work are discussed as well.

10.
Entropy (Basel) ; 24(5)2022 May 13.
Article in English | MEDLINE | ID: mdl-35626628

ABSTRACT

Alexandre Huat, Sébastien Thureau, David Pasquier, Isabelle Gardin, Romain Modzelewski, David Gibon, Juliette Thariat and Vincent Grégoire were not included as authors in the original publication [...].

11.
Entropy (Basel) ; 24(4)2022 03 22.
Article in English | MEDLINE | ID: mdl-35455101

ABSTRACT

In this paper, we propose to quantitatively compare loss functions based on parameterized Tsallis-Havrda-Charvat entropy and classical Shannon entropy for the training of a deep network in the case of small datasets which are usually encountered in medical applications. Shannon cross-entropy is widely used as a loss function for most neural networks applied to the segmentation, classification and detection of images. Shannon entropy is a particular case of Tsallis-Havrda-Charvat entropy. In this work, we compare these two entropies through a medical application for predicting recurrence in patients with head-neck and lung cancers after treatment. Based on both CT images and patient information, a multitask deep neural network is proposed to perform a recurrence prediction task using cross-entropy as a loss function and an image reconstruction task. Tsallis-Havrda-Charvat cross-entropy is a parameterized cross-entropy with the parameter α. Shannon entropy is a particular case of Tsallis-Havrda-Charvat entropy for α=1. The influence of this parameter on the final prediction results is studied. In this paper, the experiments are conducted on two datasets including in total 580 patients, of whom 434 suffered from head-neck cancers and 146 from lung cancers. The results show that Tsallis-Havrda-Charvat entropy can achieve better performance in terms of prediction accuracy with some values of α.
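The α-parameterized loss described above can be made concrete. The sketch below uses one common parameterization of the Tsallis-Havrda-Charvat cross-entropy whose α→1 limit recovers Shannon cross-entropy, since (1 − p^(α−1))/(α − 1) → −log p as α → 1; the exact form used in the paper may differ.

```python
import numpy as np

def thc_cross_entropy(probs, labels, alpha, eps=1e-12):
    """Tsallis-Havrda-Charvat cross-entropy (one common form):

        L_alpha = (1 / (alpha - 1)) * sum_i y_i * (1 - p_i^(alpha - 1))

    As alpha -> 1 this converges to the Shannon cross-entropy
    sum_i -y_i * log(p_i). probs, labels: (N, K) arrays, labels one-hot."""
    p = np.clip(probs, eps, 1.0)
    if abs(alpha - 1.0) < 1e-8:  # Shannon limit
        return float(-np.mean(np.sum(labels * np.log(p), axis=1)))
    per_sample = np.sum(labels * (1.0 - p ** (alpha - 1.0)), axis=1) / (alpha - 1.0)
    return float(np.mean(per_sample))
```

Sweeping α and comparing validation accuracy, as the paper does, then amounts to training the same network with this loss for several α values.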

12.
J Med Imaging (Bellingham) ; 9(1): 014001, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35024379

ABSTRACT

Purpose: Multisource images are of great interest in medical imaging. Indeed, multisource images enable the use of complementary information from different sources, such as the T1 and T2 modalities in MRI. However, such multisource data can also be subject to redundancy and correlation. The question is how to efficiently fuse the multisource information without reinforcing the redundancy. We propose a method for segmenting multisource images that are statistically correlated. Approach: The proposed method is the continuation of prior work in which we introduced the copula model in hidden Markov fields (HMF). To achieve the multisource segmentations, we use a functional measure of dependency called a "copula," which is incorporated into conditional random fields (CRF). Contrary to HMF, where prior knowledge of the hidden states is modeled by an HMF, in CRF there is no prior information, and only the distribution of the hidden states conditioned on the observations can be known. This conditional distribution depends on the data and can be modeled by an energy function composed of two terms. The first groups voxels with similar intensities into the same class. The second encourages a pair of voxels to be in the same class if the difference between their intensities is small. Results: A comparison between HMF and CRF is performed theoretically and experimentally, using both simulated and real data from BRATS 2013. Moreover, our method is compared with different state-of-the-art methods, including supervised (convolutional neural networks) and unsupervised (hierarchical MRF) ones. Our unsupervised method gives results similar to those of decision trees on synthetic images and of convolutional neural networks on real images; both of those methods are supervised. Conclusions: We compare two statistical methods using the copula, HMF and CRF, to deal with multicorrelated images, and demonstrate the interest of using the copula. In both models, the copula considerably improves the results compared with individual segmentations.
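The two-term energy function described in the Approach can be illustrated on a 1D toy signal. This is an assumption-laden sketch, not the paper's actual energy: the unary term groups similar intensities via within-class variance, and the pairwise term is a contrast-sensitive Potts-style penalty that discourages label changes between voxels of similar intensity.

```python
import numpy as np

def crf_energy(labels, intensities, beta=1.0, sigma=10.0):
    """Energy of a 1D labeling under a two-term model (illustrative).

    Unary: squared distance of each voxel intensity to its class mean,
    so voxels with similar intensities prefer the same class.
    Pairwise: for neighboring voxels with different labels, a penalty
    exp(-(dI)^2 / (2 sigma^2)) that is large when intensities are close,
    encouraging similar neighbors to share a class."""
    labels = np.asarray(labels)
    intensities = np.asarray(intensities, dtype=float)
    unary = 0.0
    for c in np.unique(labels):
        vals = intensities[labels == c]
        unary += float(np.sum((vals - vals.mean()) ** 2))
    pairwise = 0.0
    for i in range(len(labels) - 1):
        if labels[i] != labels[i + 1]:
            diff = intensities[i] - intensities[i + 1]
            pairwise += beta * np.exp(-(diff ** 2) / (2 * sigma ** 2))
    return unary + pairwise
```

A labeling that splits the signal at its true intensity boundary gets a low energy; a labeling that cuts through homogeneous regions is penalized by both terms.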

13.
PET Clin ; 17(1): 183-212, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34809866

ABSTRACT

Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping including the identification of subtle patterns. AI-based detection searches the image space to find the regions of interest based on patterns and features. There is a spectrum of tumor histologies from benign to malignant that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives way to the field of "radiomics" and can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to be used as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for different detection, classification, and prediction/prognosis tasks. We also discuss needed efforts to enable the translation of AI techniques to routine clinical workflows, and potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.


Subjects
Artificial Intelligence; Neoplasms; Diagnostic Imaging; Humans; Neoplasms/diagnostic imaging; Positron Emission Tomography Computed Tomography; Prognosis
14.
Bioinformatics ; 37(19): 3106-3114, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34237137

ABSTRACT

MOTIVATION: Predicting early in treatment whether a tumor is likely to respond is one of the most difficult yet important tasks in providing personalized cancer care. Most oropharyngeal squamous cell carcinoma (OPSCC) patients receive standard cancer therapy. However, treatment outcomes vary significantly and are difficult to predict. Multiple studies indicate that microRNAs (miRNAs) are promising cancer biomarkers for the prognosis of oropharyngeal cancer. The reliable and efficient use of miRNAs for patient stratification and treatment outcome prognosis is still very challenging, mainly due to the relatively high dimensionality of miRNAs compared to the small number of observation sets; the redundancy, irrelevancy, and uncertainty in the large amount of miRNAs; and the imbalanced patient samples. RESULTS: In this study, a new machine learning-based prognosis model is proposed to stratify subsets of OPSCC patients with low and high risks of treatment failure. The model cascades a two-stage prognostic biomarker selection method and an evidential K-nearest neighbors classifier to address these challenges and improve the accuracy of patient stratification. It has been evaluated on miRNA expression profiling of 150 oropharyngeal tumors, using overall survival and disease-specific survival as the end points of disease treatment outcomes, respectively. The proposed method showed superior performance compared to other advanced machine-learning methods in terms of common performance quantification metrics. The proposed prognosis model can be employed as a supporting tool to identify patients who are likely to fail standard therapy and potentially benefit from alternative targeted treatments. Availability and implementation: Code is available at https://github.com/shenghh2015/mRMR-BFT-outcome-prediction.

15.
Zhongguo Shi Yan Xue Ye Xue Za Zhi ; 29(3): 931-936, 2021 Jun.
Article in Chinese | MEDLINE | ID: mdl-34105496

ABSTRACT

OBJECTIVE: To explore the kinetics of infiltrating T cells in murine acute graft-versus-host disease (aGVHD) target organs after allogeneic hematopoietic stem cell transplantation (allo-HSCT) and their relationship with tissue pathological damage and aGVHD progression. METHODS: Male C57BL/6 (H-2Kb) mice aged 8-10 weeks served as donors, from which splenic cells and bone marrow cells were isolated. Male BALB/c (H-2Kd) mice aged 10-12 weeks that received 7.5 Gy total body irradiation (TBI) served as recipients. Recipients were randomly divided into an allogeneic bone marrow transplantation (BMT) group and a BMT+T group, which were transplanted with bone marrow cells alone or together with splenic cells, respectively. All recipients were monitored daily for dynamic changes in body weight and aGVHD clinical scores. HE staining was used to investigate the pathological damage and score of aGVHD target organs. The number of infiltrating CD3+ T cells in target organs was counted and statistically analyzed after immunohistochemical staining on days 7, 14, 28, 40, and 47 after transplantation. RESULTS: Compared with the BMT group, the number of infiltrating T cells in aGVHD target organs, including liver, lung, and gut, increased from day 7 in the BMT+T group (P<0.05). On days 14, 28, 40, and 47 after transplantation, more infiltrating CD3+ T cells were detected in target tissues of mice in the BMT+T group than in the BMT group (P<0.05), along with higher clinical and histopathological scores of target organs in aGVHD mice (P<0.05). A positive correlation was found between the number of liver-infiltrating T cells and pathological damage, and the number of infiltrating CD3+ T cells in the gut was positively related to aGVHD clinical scores. CONCLUSION: Pathological damage of aGVHD target organs is induced by CD3+ T cell infiltration, and the number of infiltrating T cells may be an important index for evaluating aGVHD severity.


Subjects
Graft vs Host Disease; Animals; Bone Marrow Transplantation; Kinetics; Male; Mice; Mice, Inbred BALB C; Mice, Inbred C57BL; T-Lymphocytes; Transplantation, Homologous
16.
IEEE Trans Image Process ; 30: 4263-4274, 2021.
Article in English | MEDLINE | ID: mdl-33830924

ABSTRACT

Magnetic Resonance Imaging (MRI) is a widely used imaging technique for assessing brain tumors. Accurately segmenting brain tumors from MR images is key to clinical diagnostics and treatment planning, and multi-modal MR images can provide complementary information for accurate segmentation. However, some imaging modalities are often missing in clinical practice. In this paper, we present a novel brain tumor segmentation algorithm that handles missing modalities. Since a strong correlation exists between modalities, a correlation model is proposed to represent this latent multi-source correlation. Thanks to the obtained correlation representation, the segmentation becomes more robust when a modality is missing. First, the individual representation produced by each encoder is used to estimate the modality-independent parameter. Then, the correlation model transforms all the individual representations into latent multi-source correlation representations. Finally, the correlation representations across modalities are fused via an attention mechanism into a shared representation that emphasizes the most important features for segmentation. We evaluate our model on the BraTS 2018 and BraTS 2019 datasets; it outperforms the current state-of-the-art methods and produces robust results when one or more modalities are missing.
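The final fusion step, combining per-modality correlation representations into one shared representation via attention, reduces to a softmax-weighted sum. In this sketch the importance scores are passed in as plain inputs; in the paper they would be learned, so treat the signature as a hypothetical simplification.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def attention_fuse(reprs, scores):
    """Fuse per-modality representations into a shared representation.

    reprs  : (M, D) one D-dim representation per available modality
    scores : (M,) unnormalized importance scores (learned in practice)
    Returns a (D,) weighted sum with weights summing to 1."""
    weights = softmax(scores)            # (M,) attention weights
    return weights @ np.asarray(reprs)   # (D,) shared representation
```

Because the weights renormalize over whichever modalities are present, the same fusion rule applies unchanged when a modality is dropped, which is the point of the design.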


Subjects
Brain Neoplasms/diagnostic imaging; Deep Learning; Image Interpretation, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Multimodal Imaging/methods; Algorithms; Humans
17.
Comput Med Imaging Graph ; 86: 101811, 2020 12.
Article in English | MEDLINE | ID: mdl-33232843

ABSTRACT

This paper presents a 3D brain tumor segmentation network for multi-sequence MRI datasets based on deep learning. We propose a three-stage network: generating constraints, fusion under constraints, and final segmentation. In the first stage, an initial 3D U-Net segmentation network produces an additional context constraint for each tumor region. Under the obtained constraints, the multi-sequence MRI are then fused using an attention mechanism to achieve three single-tumor-region segmentations. Considering the location relationship of the tumor regions, a new loss function is introduced to deal with the multi-class segmentation problem. Finally, a second 3D U-Net network combines and refines the three single prediction results. In each stage, only 8 initial filters are used, significantly decreasing the number of parameters to estimate. We evaluated our method on the BraTS 2017 dataset. The results are promising in terms of Dice score, Hausdorff distance, and the amount of memory required for training.


Subjects
Brain Neoplasms; Image Processing, Computer-Assisted; Brain Neoplasms/diagnostic imaging; Humans; Magnetic Resonance Imaging
18.
Comput Biol Med ; 126: 104037, 2020 11.
Article in English | MEDLINE | ID: mdl-33065387

ABSTRACT

This paper presents an automatic classification and segmentation tool for helping screen for COVID-19 pneumonia using chest CT imaging. The segmented lesions can help to assess the severity of pneumonia and to follow up patients. In this work, we propose a new multitask deep learning model to jointly identify COVID-19 patients and segment COVID-19 lesions from chest CT images. Three learning tasks (segmentation, classification, and reconstruction) are jointly performed with different datasets. Our motivation is, on the one hand, to leverage the useful information contained in multiple related tasks to improve both segmentation and classification performance, and, on the other hand, to deal with the problem of small data, because each task can have a relatively small dataset. Our architecture is composed of a common encoder for disentangled feature representation across the three tasks, plus two decoders and a multi-layer perceptron for reconstruction, segmentation, and classification, respectively. The proposed model is evaluated and compared with other image segmentation techniques using a dataset of 1369 patients, including 449 patients with COVID-19, 425 normal cases, 98 with lung cancer, and 397 with other kinds of pathology. The obtained results show very encouraging performance, with a Dice coefficient higher than 0.88 for segmentation and an area under the ROC curve higher than 97% for classification.


Subjects
Betacoronavirus , Coronavirus Infections/diagnostic imaging , Lung/diagnostic imaging , Viral Pneumonia/diagnostic imaging , X-Ray Computed Tomography , COVID-19 , Deep Learning , Female , Humans , Male , Pandemics , SARS-CoV-2
19.
Int J Comput Assist Radiol Surg ; 14(10): 1715-1724, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31401714

ABSTRACT

PURPOSE: Lymphoma detection and segmentation from PET images are critical tasks for cancer staging and treatment monitoring. They remain challenging owing to the complexity of lymphoma PET data and the huge computational and memory burden of 3D volume data. In this work, an entropy-based optimization strategy for clustering is proposed to detect and segment lymphomas in 3D PET images. METHODS: To reduce computational complexity and add feature information, the billions of voxels in a 3D volume are first aggregated into supervoxels. These supervoxels then serve as the basic data units for clustering with the DBSCAN algorithm, for which new feature attributes based on physical spatial information and prior knowledge are proposed. More importantly, an entropy-based objective function is constructed, and a genetic algorithm searches for the DBSCAN parameters that yield the optimal clustering; this step automatically adapts the parameters to each patient. Finally, a series of comparison experiments among the various feature attributes is performed. RESULTS: Experiments on data from 48 patients show that the combination of three features (supervoxel intensity, geographic coordinates, and organ distributions) achieves good performance, and that the proposed entropy-based optimization scheme outperforms existing methods. CONCLUSION: The proposed entropy-based clustering optimization strategy, which integrates physical spatial attributes and prior knowledge, achieves better performance than traditional methods.
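One way to picture an entropy-based clustering objective is Shannon entropy over the cluster-size distribution, with DBSCAN noise points (label -1) excluded. The paper's actual objective and features are more elaborate, so this is only an assumed illustrative form:

```python
import math
from collections import Counter


def cluster_entropy(labels):
    """Shannon entropy (in bits) of the cluster-size distribution.

    DBSCAN marks noise points with label -1; they are excluded here.
    An optimizer (the paper uses a genetic algorithm over DBSCAN's
    eps and min_samples) can score each candidate clustering with an
    objective of this kind and keep the best-scoring parameters.
    """
    counts = Counter(lbl for lbl in labels if lbl != -1)
    n = sum(counts.values())
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


# Two equally sized clusters carry exactly one bit of entropy.
print(cluster_entropy([0, 0, 1, 1, -1]))  # 1.0
```

Because the objective depends only on the clustering output, it can be re-evaluated per patient, which is what lets the parameter search adapt to each case.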


Subjects
Three-Dimensional Imaging/methods , Lymphoma/diagnostic imaging , Positron-Emission Tomography/methods , Algorithms , Cluster Analysis , Entropy , Humans , Neoplasm Staging/methods
20.
J Med Imaging (Bellingham) ; 6(1): 014001, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30662925

ABSTRACT

Segmentation of organs at risk (OARs) in computed tomography (CT) is of vital importance in radiotherapy treatment. This task is time consuming and, for some organs, very challenging due to the low intensity contrast in CT. We propose a framework for the automatic segmentation of multiple OARs: esophagus, heart, trachea, and aorta. Unlike previous deep learning approaches, we make use of global localization information based on an original distance map that yields not only the localization of each organ but also the spatial relationships between them. Instead of segmenting the organs directly, we first generate the localization map by minimizing a reconstruction error within an adversarial framework. This map, which encodes the localization of all organs, is then used to guide the segmentation task in a fully convolutional setting. Experimental results show encouraging performance on CT scans of 60 patients (11,084 slices in total) in comparison with other state-of-the-art methods.
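The localization idea can be illustrated with a toy distance map over a 2D grid, storing each pixel's distance to the nearest organ (represented here by a single centroid per organ). The paper's actual map is learned adversarially, so this is only a hand-built stand-in:

```python
def distance_map(shape, centroids):
    """Euclidean distance from each pixel to its nearest centroid.

    `centroids` is a list of (row, col) organ positions; the resulting
    map is small near every organ and implicitly encodes their spatial
    layout relative to one another, mimicking the role of the
    localization map described above.
    """
    h, w = shape
    return [[min(((r - cr) ** 2 + (c - cc) ** 2) ** 0.5
                 for cr, cc in centroids)
             for c in range(w)]
            for r in range(h)]


m = distance_map((3, 3), [(0, 0), (2, 2)])
print(m[1][1])  # centre pixel is equidistant from both: sqrt(2) ~ 1.414
```

A map like this can be fed to a segmentation network as an extra input channel, which is one common way to inject spatial priors into a fully convolutional model.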
