Results 1 - 9 of 9
1.
Entropy (Basel); 24(5), 2022 May 13.
Article in English | MEDLINE | ID: mdl-35626628

ABSTRACT

Alexandre Huat, Sébastien Thureau, David Pasquier, Isabelle Gardin, Romain Modzelewski, David Gibon, Juliette Thariat and Vincent Grégoire were not included as authors in the original publication [...].

2.
Entropy (Basel); 24(4), 2022 Mar 22.
Article in English | MEDLINE | ID: mdl-35455101

ABSTRACT

In this paper, we propose to quantitatively compare loss functions based on the parameterized Tsallis-Havrda-Charvat entropy and the classical Shannon entropy for training a deep network on the small datasets usually encountered in medical applications. Shannon cross-entropy is widely used as a loss function for most neural networks applied to image segmentation, classification, and detection; Shannon entropy is a particular case of Tsallis-Havrda-Charvat entropy. In this work, we compare these two entropies through a medical application: predicting recurrence in patients with head and neck or lung cancer after treatment. Based on both CT images and patient information, a multitask deep neural network is proposed to perform a recurrence-prediction task, using cross-entropy as the loss function, alongside an image-reconstruction task. Tsallis-Havrda-Charvat cross-entropy is a cross-entropy parameterized by α, and the Shannon case is recovered for α = 1. The influence of this parameter on the final prediction results is studied. The experiments are conducted on two datasets comprising 580 patients in total, of whom 434 suffered from head and neck cancers and 146 from lung cancers. The results show that Tsallis-Havrda-Charvat entropy can achieve better prediction accuracy for some values of α.
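To make the parameterization concrete, the following is a minimal NumPy sketch of a Tsallis-Havrda-Charvat cross-entropy loss, assuming the common form L_α = (1/(α-1)) Σ_i y_i (1 - p_i^(α-1)); the exact loss used in the paper may differ, and the function name and defaults are illustrative only.

import numpy as np

def thc_cross_entropy(y_true, y_pred, alpha=1.5, eps=1e-12):
    # Hypothetical sketch of a Tsallis-Havrda-Charvat cross-entropy.
    # y_true: one-hot targets, shape (batch, classes)
    # y_pred: predicted probabilities, shape (batch, classes)
    # alpha:  entropy parameter; Shannon cross-entropy is recovered as alpha -> 1
    p = np.clip(y_pred, eps, 1.0)
    if abs(alpha - 1.0) < 1e-8:
        return -np.mean(np.sum(y_true * np.log(p), axis=1))  # Shannon limit
    return np.mean(np.sum(y_true * (1.0 - p ** (alpha - 1.0)), axis=1)) / (alpha - 1.0)

In practice this scalar loss would replace the Shannon cross-entropy term during training, with α treated as a hyperparameter to tune.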

3.
J Imaging; 9(4), 2023 Apr 13.
Article in English | MEDLINE | ID: mdl-37103232

ABSTRACT

Deep learning has become a popular tool for medical image analysis, but the limited availability of training data remains a major challenge, particularly in the medical field where data acquisition can be costly and subject to privacy regulations. Data augmentation techniques offer a solution by artificially increasing the number of training samples, but these techniques often produce limited and unconvincing results. To address this issue, a growing number of studies have proposed the use of deep generative models to generate more realistic and diverse data that conform to the true data distribution. In this review, we focus on three types of deep generative models for medical image augmentation: variational autoencoders, generative adversarial networks, and diffusion models. We provide an overview of the current state of the art in each of these models and discuss their potential for use in different downstream tasks in medical imaging, including classification, segmentation, and cross-modal translation. We also evaluate the strengths and limitations of each model and suggest directions for future research in this field. Our goal is to provide a comprehensive review of the use of deep generative models for medical image augmentation and to highlight the potential of these models for improving the performance of deep learning algorithms in medical image analysis.
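As a rough, self-contained illustration of the augmentation principle described above, the sketch below draws latent codes from a standard normal prior and maps them through a decoder to produce synthetic samples; decode is a hypothetical stand-in for a trained VAE decoder or GAN generator, replaced here by a fixed random projection purely to keep the sketch runnable.

import numpy as np

def decode(z):
    # Hypothetical stand-in for a trained generator; a real model would be a
    # neural network mapping latent vectors to realistic medical images.
    w = np.random.default_rng(0).normal(size=(z.shape[1], 64 * 64))
    return np.tanh(z @ w).reshape(-1, 64, 64)

def augment(train_images, n_synthetic=100, latent_dim=32, seed=1):
    # Append generator samples to a (n, 64, 64) training stack.
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(n_synthetic, latent_dim))  # z ~ N(0, I), the model prior
    return np.concatenate([np.asarray(train_images), decode(z)], axis=0)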

4.
J Med Imaging (Bellingham); 9(1): 014001, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35024379

ABSTRACT

Purpose: Multisource images are of great interest in medical imaging because they allow complementary information from different sources, such as T1- and T2-weighted sequences in MRI, to be exploited. However, multisource data can also be redundant and correlated, and the question is how to fuse the multisource information efficiently without reinforcing the redundancy. We propose a method for segmenting multisource images that are statistically correlated. Approach: The proposed method continues prior work in which we introduced the copula model into hidden Markov fields (HMF). To achieve the multisource segmentation, we use a functional measure of dependency called a "copula," which is incorporated into a conditional random field (CRF). Contrary to HMF, where prior knowledge of the hidden states is modeled by the Markov field, in CRF there is no prior information: only the distribution of the hidden states conditional on the observations is modeled. This conditional distribution depends on the data and can be modeled by an energy function composed of two terms. The first groups voxels with similar intensities into the same class; the second encourages a pair of neighboring voxels to share a class when the difference between their intensities is small. Results: HMF and CRF are compared theoretically and experimentally using both simulated and real data from BRATS 2013. Moreover, our method is compared with several state-of-the-art methods, both supervised (convolutional neural networks) and unsupervised (hierarchical MRF). Our unsupervised method gives results similar to those of decision trees on synthetic images and of convolutional neural networks on real images, both of which are supervised methods. Conclusions: We compare two statistical methods using the copula, HMF and CRF, to deal with multicorrelated images and demonstrate the value of the copula. In both models, the copula considerably improves the results compared with individual segmentations.
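For reference, one common form of such a two-term energy (an illustrative contrast-sensitive Potts model, not necessarily the exact formulation of the paper) is, in LaTeX notation,

E(x \mid y) = \sum_{s} \psi(x_s, y_s) + \lambda \sum_{(s,t) \in \mathcal{N}} \exp\!\left(-\frac{(y_s - y_t)^2}{2\sigma^2}\right) \mathbf{1}[x_s \neq x_t],

where x_s is the class label of voxel s, y_s its intensity, \mathcal{N} the set of neighboring voxel pairs, and \lambda, \sigma free parameters: the first term groups voxels by intensity, while the second penalizes label changes between voxels whose intensities differ little.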

5.
Comput Med Imaging Graph; 70: 1-7, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30253305

ABSTRACT

The detection and delineation of the lymphoma volume are critical steps for treatment and outcome prediction. Positron emission tomography (PET) is widely used for lymphoma detection. Two common types of approaches can be distinguished for lymphoma detection and segmentation in PET. The first is ROI-dependent and requires an ROI defined by physicians; the second is based on machine learning methods, which need a large learning database. However, such a large standard database is quite rare in the medical field. Considering these problems, we propose a new approach that combines PET (metabolic information) with CT (anatomical information). Our approach is semi-automatic and consists of three steps. First, an anatomical multi-atlas segmentation is applied to the CT to locate and remove the organs exhibiting physiologic hypermetabolism in PET. Then, conditional random fields (CRFs) detect and segment a set of possible lymphoma volumes in PET; the conditional probabilities used in the CRFs are usually estimated by a learning step, but here we propose to estimate them in an unsupervised way. The final step is to visualize the detected lymphoma volumes and select the real ones by simply clicking on them. The false-detection rate is low thanks to the first step. Our method is tested on 11 patients; the lymphoma detection rate is 100%, and the average Dice index of the segmentations against manual lymphoma segmentation is 84.4%. A comparison with other methods in terms of the Dice index shows that our method performs best.
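For reference, the Dice index quoted above measures the overlap between a computed mask and the manual reference; a minimal implementation for binary masks (variable names assumed) is:

import numpy as np

def dice_index(seg, ref):
    # Dice similarity: 2|A ∩ B| / (|A| + |B|) for binary masks A and B.
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0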


Subjects
Image Processing, Computer-Assisted/methods; Lymphoma/diagnostic imaging; Algorithms; Anatomy, Artistic; Atlases as Topic; Humans; Positron-Emission Tomography/methods
6.
IEEE Trans Image Process; 26(7): 3187-3195, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28333631

ABSTRACT

Multi-source image acquisition is attracting increasing interest in many fields, such as multi-modal medical image segmentation. Such acquisitions provide complementary information for image segmentation, since the same scene is observed by several types of images. However, strong dependence often exists between multi-source images, and this dependence should be taken into account when extracting joint information to make a precise decision. To statistically model the dependence between multiple sources, we propose a novel multi-source fusion method based on the Gaussian copula. The proposed fusion model is integrated into a statistical framework with hidden Markov field inference in order to delineate a target volume from multi-source images. Parameter estimation and image segmentation are performed jointly by an iterative algorithm based on Gibbs sampling. Experiments are performed on multi-sequence MRI to segment tumors. The results show that the proposed Gaussian-copula method is effective for multi-source image segmentation.
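For background, the density of the Gaussian copula with correlation matrix R (the standard form; the paper's exact parameterization may differ) is, in LaTeX notation,

c_R(u) = |R|^{-1/2} \exp\!\left(-\tfrac{1}{2}\, \zeta^\top (R^{-1} - I)\, \zeta\right), \qquad \zeta_i = \Phi^{-1}(u_i),

where \Phi^{-1} is the standard normal quantile function and u_i is the marginal CDF value of source i. Multiplying this copula density by the per-source marginal densities yields the joint likelihood used inside the hidden Markov field.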

7.
Med Phys; 44(11): 5835-5848, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28837224

ABSTRACT

PURPOSE: The purpose of this study was to investigate the use of a probabilistic quad-tree graph (hidden Markov tree, HMT) to provide fast computation, robustness, and an interpretational framework for multimodality image processing, and to evaluate this framework for single gross tumor volume (GTV) delineation from both positron emission tomography (PET) and computed tomography (CT) images. METHODS: We exploited joint statistical dependencies between hidden states to handle the data stack, using the multi-observation, multi-resolution structure of the HMT and Bayesian inference. This framework was applied to the segmentation of lung tumors in PET/CT datasets, taking the CT and PET image information into account simultaneously. PET and CT images were considered using either the original voxel intensities or intensities after wavelet/contourlet enhancement. The Dice similarity coefficient (DSC), sensitivity (SE), and positive predictive value (PPV) were used to assess the performance of the proposed approach on one simulated and 15 clinical PET/CT datasets of non-small cell lung cancer (NSCLC) cases. The surrogate of truth was a statistical consensus (obtained with the Simultaneous Truth and Performance Level Estimation algorithm) of three manual delineations performed by experts on fused PET/CT images. The proposed framework was applied to PET-only, CT-only, and PET/CT datasets and was compared to standard and improved fuzzy c-means (FCM) multimodal implementations. RESULTS: A high agreement with the consensus of manual delineations was observed when using both PET and CT images. Contourlet-based HMT led to the best results, with a DSC of 0.92 ± 0.11, compared with 0.89 ± 0.13 and 0.90 ± 0.12 for intensity-based and wavelet-based HMT, respectively. Considering PET or CT only in the HMT led to much lower accuracy. Standard and improved FCM led to comparatively lower accuracy than HMT, even in multimodal implementations. CONCLUSIONS: We evaluated the accuracy of the proposed HMT-based framework for PET/CT image segmentation. The proposed method reached good accuracy, especially with pre-processing in the contourlet domain.
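For clarity, the three reported metrics reduce to counts of true positives (TP), false positives (FP), and false negatives (FN) between a computed mask and the consensus, in LaTeX notation:

\mathrm{DSC} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}, \qquad \mathrm{SE} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \qquad \mathrm{PPV} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}.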


Subjects
Image Processing, Computer-Assisted/methods; Markov Chains; Positron Emission Tomography Computed Tomography; Humans; Wavelet Analysis
8.
Med Phys; 42(10): 5720-34, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26429246

ABSTRACT

PURPOSE: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods have achieved good results, there is still room for improvement for tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. METHODS: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor-automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible scheme for estimating the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM), and to fuzzy locally adaptive Bayesian (FLAB). RESULTS: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. The improvement was significant for the more challenging cases, with a CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). CONCLUSIONS: SPEQTACLE benefited from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments.
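To illustrate the underlying principle (and only the principle: this is standard FCM with a fixed, user-chosen p-norm, not SPEQTACLE, which estimates the norm automatically per image), here is a minimal NumPy sketch:

import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, p=2.0, n_iter=100, seed=0):
    # Standard FCM with a parameterized Lp distance; X has shape (n_samples, n_features).
    # Note: the weighted-mean center update is exact for p = 2 and a heuristic otherwise.
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0])   # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        W = U ** m                                   # fuzzified memberships
        V = (W.T @ X) / W.sum(axis=0)[:, None]       # cluster centers
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], ord=p, axis=2)
        D = np.maximum(D, 1e-12)                     # guard against zero distances
        U = D ** (-2.0 / (m - 1.0))                  # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, V

Hard labels follow from U.argmax(axis=1); varying p changes how strongly outlying intensities pull the cluster boundaries, which is the degree of freedom SPEQTACLE tunes automatically.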


Subjects
Algorithms; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Fuzzy Logic; Image Processing, Computer-Assisted/methods; Lung Neoplasms/diagnostic imaging; Positron-Emission Tomography; Automation; Humans; Phantoms, Imaging; Sensitivity and Specificity