Results 1 - 11 of 11
1.
Med Image Anal ; 84: 102680, 2023 02.
Article in English | MEDLINE | ID: mdl-36481607

ABSTRACT

In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset, created in collaboration with seven hospitals and research institutions, is diverse and contains primary and secondary tumors of varied sizes and appearances and various lesion-to-background contrast levels (hyper-/hypo-dense). Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that no single algorithm performed best for both liver and liver tumors across the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing liver-related segmentation tasks to http://medicaldecathlon.com/. In addition, both the data and online evaluation are accessible via https://competitions.codalab.org/competitions/17094.
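The Dice scores quoted throughout these benchmarks measure overlap between a predicted and a reference binary mask, 2|A∩B| / (|A| + |B|). A minimal sketch of the metric (the function name and the empty-mask convention are illustrative choices, not the LiTS evaluation code):

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient, 2|A∩B| / (|A| + |B|), for binary masks.

    Returns 1.0 when both masks are empty (a common convention)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, target).sum() / total

# Intersection of 2 voxels between masks of size 3 and 3 -> 2*2/(3+3)
print(round(dice_score([1, 1, 1, 0, 0], [0, 1, 1, 1, 0]), 3))  # -> 0.667
```

In practice the metric is computed per volume over the 3-D masks and averaged over cases, which is why liver (large, regular) scores far exceed tumor (small, variable) scores.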


Subject(s)
Benchmarking; Liver Neoplasms; Humans; Retrospective Studies; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/pathology; Liver/diagnostic imaging; Liver/pathology; Algorithms; Image Processing, Computer-Assisted/methods
2.
Med Image Anal ; 82: 102624, 2022 11.
Article in English | MEDLINE | ID: mdl-36208571

ABSTRACT

An important challenge and limiting factor in deep learning methods for medical image segmentation is the lack of available annotated data to properly train models. For the specific task of tumor segmentation, the process entails clinicians labeling every slice of volumetric scans for every patient, which becomes prohibitive at the scale of datasets required to train neural networks to optimal performance. To address this, we propose a novel semi-supervised framework that allows training any segmentation (encoder-decoder) model using only information readily available in radiological data, namely the presence of a tumor in the image, in addition to a few annotated images. Specifically, we conjecture that a generative model performing domain translation on this weak label - healthy vs. diseased scans - helps achieve tumor segmentation. The proposed GenSeg method first disentangles tumoral tissue from healthy "background" tissue. The latent representation is separated into (1) the common background information across both domains, and (2) the unique tumoral information. GenSeg then achieves diseased-to-healthy image translation by decoding a healthy version of the image from just the common representation, as well as a residual image that allows adding back the tumors. The same decoder that produces this residual tumor image also outputs a tumor segmentation. Implicit data augmentation is achieved by re-using the same framework for healthy-to-diseased image translation, where a residual tumor image is produced from a prior distribution. By performing both image translation and segmentation simultaneously, GenSeg allows training on only partially annotated datasets. To test the framework, we trained U-Net-like architectures using GenSeg and evaluated their performance on 3 variants of a synthetic task, as well as on 2 benchmark datasets: brain tumor segmentation in MRI (derived from BraTS) and liver metastasis segmentation in CT (derived from LiTS).
Our method outperforms the baseline semi-supervised (autoencoder and mean teacher) and supervised segmentation methods, with improvements of 8-14% in Dice score on the brain task and 5-8% on the liver task when only 1% of the training images were annotated. These results show the proposed framework is well suited to training deep segmentation models when a large portion of the available data is unlabeled and unpaired, a common issue in tumor segmentation.


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Neoplasm, Residual; Neural Networks, Computer; Magnetic Resonance Imaging
3.
Nat Commun ; 13(1): 4128, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35840566

ABSTRACT

International challenges have become the de facto standard for comparative assessment of image analysis algorithms. Although segmentation is the most widely investigated medical image processing task, the various challenges have been organized to focus only on specific clinical tasks. We organized the Medical Segmentation Decathlon (MSD) - a biomedical image analysis challenge in which algorithms compete across a multitude of both tasks and modalities - to investigate the hypothesis that a method capable of performing well on multiple tasks will generalize well to a previously unseen task and potentially outperform a custom-designed solution. MSD results confirmed this hypothesis; moreover, the MSD winner continued to generalize well to a wide range of other clinical problems for the next two years. Three main conclusions can be drawn from this study: (1) state-of-the-art image segmentation algorithms generalize well when retrained on unseen tasks; (2) consistent algorithmic performance across multiple tasks is a strong surrogate for algorithmic generalizability; (3) the training of accurate AI segmentation models is now commoditized and accessible to scientists who are not versed in AI model training.


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods
4.
J Appl Clin Med Phys ; 23(8): e13655, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35661390

ABSTRACT

PURPOSE: External radiation therapy planning is a highly complex and tedious process, as it involves treating large target volumes, prescribing several levels of doses, and avoiding irradiating critical structures such as organs at risk close to the tumor target. This requires highly trained dosimetrists and physicists to generate a personalized plan and adapt it as treatment evolves, thus affecting the overall tumor control and patient outcomes. Our aim is to achieve accurate dose predictions for head and neck (H&N) cancer patients on a challenging in-house dataset that reflects realistic variability, and to further compare and validate the method on a public dataset. METHODS: We propose a three-dimensional (3D) deep neural network that combines a hierarchically dense architecture with an attention U-net (HDA U-net). We investigate a domain knowledge objective, incorporating a weighted mean squared error (MSE) with a dose-volume histogram (DVH) loss function. The proposed HDA U-net using the MSE-DVH loss function is compared with two state-of-the-art U-net variants on two radiotherapy datasets of H&N cases. These include reference dose plans, computed tomography (CT) information, organs at risk (OARs), and planning target volume (PTV) delineations. All models were evaluated using coverage, homogeneity, and conformity metrics as well as mean dose error and DVH curves. RESULTS: Overall, the proposed architecture outperformed the comparative state-of-the-art methods, reaching 0.95 (0.98) on D95 coverage, 1.06 (1.07) on the maximum dose value, 0.10 (0.08) on homogeneity, 0.53 (0.79) on conformity index, and attaining the lowest mean dose error on PTVs of 1.7% (1.4%) for the in-house (public) dataset. The improvements are statistically significant (p < 0.05) for the homogeneity and maximum dose value compared with the closest baseline. All models offer a near real-time prediction, measured between 0.43 and 0.88 s per volume.
CONCLUSION: The proposed method achieved similar performance on both realistic in-house data and public data compared to the attention U-net with a DVH loss, and outperformed other methods such as HD U-net and HDA U-net with standard MSE losses. The use of the DVH objective for training showed consistent improvements to the baselines on most metrics, supporting its added benefit in H&N cancer cases. The quick prediction time of the proposed method allows for real-time applications, providing physicians a method to generate an objective end goal for the dosimetrist to use as reference for planning. This could considerably reduce the number of iterations between the two expert physicians thus reducing the overall treatment planning time.
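The MSE-DVH objective described above combines a voxel-wise squared error with a penalty on the discrepancy between predicted and reference dose-volume histograms. A minimal numpy sketch of such a combined loss (the function names, threshold grid, and equal weighting across structures are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def dvh(dose, mask, thresholds):
    """Cumulative DVH: fraction of a structure's voxels receiving >= each dose level."""
    d = dose[mask]
    if d.size == 0:
        return np.zeros(len(thresholds))
    return np.array([(d >= t).mean() for t in thresholds])

def mse_dvh_loss(pred_dose, ref_dose, masks, thresholds, dvh_weight=1.0):
    """Voxel-wise MSE plus the mean squared discrepancy between predicted
    and reference DVH curves, averaged over the given structures."""
    mse = np.mean((pred_dose - ref_dose) ** 2)
    dvh_err = np.mean([
        np.mean((dvh(pred_dose, m, thresholds) - dvh(ref_dose, m, thresholds)) ** 2)
        for m in masks
    ])
    return mse + dvh_weight * dvh_err

# Toy 2-D "dose" grids with one hypothetical PTV mask
ref = np.linspace(0.0, 1.0, 16).reshape(4, 4)
pred = ref * 0.9                       # systematic 10% underdose
target = ref > 0.5                     # illustrative structure mask
levels = np.linspace(0.0, 1.0, 11)
loss = mse_dvh_loss(pred, ref, [target], levels)
```

In a training setting the DVH would need a differentiable (e.g. sigmoid-smoothed) threshold rather than the hard comparison used here, which is written for clarity.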


Subject(s)
Head and Neck Neoplasms; Radiotherapy, Intensity-Modulated; Head and Neck Neoplasms/radiotherapy; Humans; Organs at Risk; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods
5.
Sci Rep ; 12(1): 3183, 2022 02 24.
Article in English | MEDLINE | ID: mdl-35210482

ABSTRACT

In radiation oncology, predicting patient risk stratification allows specialization of therapy intensification as well as selecting between systemic and regional treatments, all of which helps to improve patient outcome and quality of life. Deep learning offers an advantage over traditional radiomics for medical image processing by learning salient features from training data originating from multiple datasets. However, while their large capacity allows them to combine high-level medical imaging data for outcome prediction, such models often fail to generalize across institutions. In this work, a pseudo-volumetric convolutional neural network with a deep preprocessor module and self-attention (PreSANet) is proposed for predicting the occurrence probabilities of distant metastasis, locoregional recurrence, and overall survival within a 10-year follow-up time frame for head and neck cancer patients with squamous cell carcinoma. The model is capable of processing multi-modal inputs of variable scan length, as well as integrating patient data in the prediction model. These proposed architectural features and additional modalities all serve to extract additional information from the available data when the availability of additional samples is limited. The model was trained on the public Cancer Imaging Archive Head-Neck-PET-CT dataset consisting of 298 patients undergoing curative radio/chemo-radiotherapy, acquired from 4 different institutions. The model was further validated on an internal retrospective dataset with 371 patients acquired from one of the institutions in the training dataset. An extensive set of ablation experiments was performed to test the utility of the proposed model characteristics, achieving an AUROC of [Formula: see text], [Formula: see text] and [Formula: see text] for DM, LR and OS respectively on the public TCIA Head-Neck-PET-CT dataset.
External validation was performed on a retrospective dataset with 371 patients, achieving [Formula: see text] AUROC in all outcomes. To test for model generalization across sites, a validation scheme consisting of single-site holdout and cross-validation combining both datasets was used. The mean accuracy across the 4 institutions was [Formula: see text], [Formula: see text] and [Formula: see text] for DM, LR and OS respectively. The proposed model demonstrates an effective method for tumor outcome prediction in multi-site, multi-modal settings, combining both volumetric data and structured patient clinical data.


Subject(s)
Carcinoma, Squamous Cell/diagnostic imaging; Diagnosis, Computer-Assisted/methods; Head and Neck Neoplasms/diagnostic imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Adult; Aged; Aged, 80 and over; Attention; Biomarkers, Tumor; Carcinoma, Squamous Cell/therapy; Deep Learning; Female; Head and Neck Neoplasms/therapy; Humans; Male; Middle Aged; Neoplasm Recurrence, Local/diagnostic imaging; Positron Emission Tomography Computed Tomography; Prognosis; Quality of Life; Retrospective Studies
6.
Radiol Artif Intell ; 1(2): 180014, 2019 Mar.
Article in English | MEDLINE | ID: mdl-33937787

ABSTRACT

PURPOSE: To evaluate the performance, agreement, and efficiency of a fully convolutional network (FCN) for liver lesion detection and segmentation at CT examinations in patients with colorectal liver metastases (CLMs). MATERIALS AND METHODS: This retrospective study evaluated an automated method using an FCN that was trained, validated, and tested with 115, 15, and 26 contrast material-enhanced CT examinations containing 261, 22, and 105 lesions, respectively. Manual detection and segmentation by a radiologist was the reference standard. Performance of fully automated and user-corrected segmentations was compared with that of manual segmentations. The interuser agreement and interaction time of manual and user-corrected segmentations were assessed. Analyses included sensitivity and positive predictive value of detection, segmentation accuracy, Cohen κ, Bland-Altman analyses, and analysis of variance. RESULTS: In the test cohort, for lesion size smaller than 10 mm (n = 30), 10-20 mm (n = 35), and larger than 20 mm (n = 40), the detection sensitivity of the automated method was 10%, 71%, and 85%; positive predictive value was 25%, 83%, and 94%; Dice similarity coefficient was 0.14, 0.53, and 0.68; maximum symmetric surface distance was 5.2, 6.0, and 10.4 mm; and average symmetric surface distance was 2.7, 1.7, and 2.8 mm, respectively. For manual and user-corrected segmentation, κ values were 0.42 (95% confidence interval: 0.24, 0.63) and 0.52 (95% confidence interval: 0.36, 0.72); normalized interreader agreement for lesion volume was -0.10 ± 0.07 (95% confidence interval) and -0.10 ± 0.08; and mean interaction time was 7.7 minutes ± 2.4 (standard deviation) and 4.8 minutes ± 2.1 (P < .001), respectively. 
CONCLUSION: Automated detection and segmentation of CLM by using deep learning with convolutional neural networks, when manually corrected, improved efficiency but did not substantially change agreement on volumetric measurements. © RSNA, 2019. Supplemental material is available for this article.
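The lesion-wise sensitivity and positive predictive value reported above reduce to ratios of matched-lesion counts once detections have been paired with reference lesions. A minimal sketch (the function name is illustrative, and the detection counts are hypothetical values chosen to reproduce the reported >20 mm figures, not taken from the study data):

```python
def detection_metrics(n_true, n_pred, n_matched):
    """Lesion-wise detection sensitivity and positive predictive value.

    n_true:    number of reference-standard lesions
    n_pred:    number of automated detections
    n_matched: detections matched one-to-one to a reference lesion
    """
    sensitivity = n_matched / n_true if n_true else 0.0
    ppv = n_matched / n_pred if n_pred else 0.0
    return sensitivity, ppv

# Hypothetical counts consistent with the >20 mm stratum:
# 40 reference lesions, 36 detections, 34 true positives.
sens, ppv = detection_metrics(40, 36, 34)
print(f"{sens:.0%} sensitivity, {ppv:.0%} PPV")  # -> 85% sensitivity, 94% PPV
```

The hard part in practice is the matching step itself (typically an overlap or centroid-distance criterion), which determines `n_matched` and therefore both metrics.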

7.
Med Image Anal ; 44: 1-13, 2018 02.
Article in English | MEDLINE | ID: mdl-29169029

ABSTRACT

In this paper, we introduce a simple yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks and ResNets. Our approach focuses on the importance of a trainable pre-processing step when using FC-ResNets, and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by means of an FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. We show that this pipeline achieves state-of-the-art performance on the challenging Electron Microscopy benchmark when compared to other 2D methods. We improve segmentation results on CT images of liver lesions when contrasting with standard FCN methods. Moreover, when applying our 2D pipeline to a challenging 3D MRI prostate segmentation challenge, we reach results that are competitive even with 3D methods. The obtained results illustrate the strong potential and versatility of the pipeline, achieving accurate segmentations on a variety of image modalities and anatomical regions.


Subject(s)
Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Algorithms; Humans; Imaging, Three-Dimensional; Liver Neoplasms/diagnostic imaging; Lumbar Vertebrae/diagnostic imaging; Magnetic Resonance Imaging; Male; Prostatic Diseases/diagnostic imaging; Tomography, X-Ray Computed
8.
Radiographics ; 37(7): 2113-2131, 2017.
Article in English | MEDLINE | ID: mdl-29131760

ABSTRACT

Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. ©RSNA, 2017.
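The weight-update rule described above can be illustrated with a single linear neuron trained by gradient descent on squared error (a toy sketch; the learning rate, data, and target function are arbitrary choices for illustration):

```python
import numpy as np

# A single linear neuron y_hat = w * x, trained by back-propagating the
# squared-error gradient -- the iterative weight adjustment described above.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 100)   # toy inputs
y = 2.0 * x                       # target outputs: the neuron should learn w = 2
w, lr = 0.0, 0.5
for _ in range(100):
    grad = np.mean(2.0 * (w * x - y) * x)   # d(mean squared error)/dw
    w -= lr * grad                          # corrective weight update
print(round(w, 3))  # -> 2.0
```

Deep networks apply the same idea layer by layer, with the chain rule propagating the error signal backward through millions of weighted connections.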


Subject(s)
Image Processing, Computer-Assisted/methods; Learning; Neural Networks, Computer; Radiology Information Systems; Radiology/education; Algorithms; Humans; Machine Learning
9.
Med Biol Eng Comput ; 55(1): 127-139, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27106756

ABSTRACT

The segmentation of liver tumours in CT images is useful for the diagnosis and treatment of liver cancer. Furthermore, an accurate assessment of tumour volume aids in the diagnosis and evaluation of treatment response. Currently, segmentation is performed manually by an expert, and because of the time required, a rough estimate of tumour volume is often done instead. We propose a semi-automatic segmentation method that makes use of machine learning within a deformable surface model. Specifically, we propose a deformable model that uses a voxel classifier based on a multilayer perceptron (MLP) to interpret the CT image. The new deformable model considers vertex displacement towards apparent tumour boundaries and regularization that promotes surface smoothness. During operation, a user identifies the target tumour and the mesh then automatically delineates the tumour from the MLP-processed image. The method was tested on a dataset of 40 abdominal CT scans with a total of 95 colorectal metastases collected from a variety of scanners with variable spatial resolution. The segmentation results are encouraging, with a Dice similarity metric of [Formula: see text], and demonstrate that the proposed method can deal with highly variable data. This work motivates further research into tumour segmentation using machine learning with more data and deeper neural networks.


Subject(s)
Colorectal Neoplasms/secondary; Imaging, Three-Dimensional; Liver Neoplasms/pathology; Models, Biological; Neural Networks, Computer; Colorectal Neoplasms/diagnostic imaging; Databases as Topic; Humans; Reproducibility of Results; Sensitivity and Specificity; Tomography, X-Ray Computed
10.
Phys Med Biol ; 60(16): 6459-78, 2015 Aug 21.
Article in English | MEDLINE | ID: mdl-26247117

ABSTRACT

The early detection, diagnosis and monitoring of liver cancer progression can be achieved with the precise delineation of metastatic tumours. However, accurate automated segmentation remains challenging due to the presence of noise, inhomogeneity and the high appearance variability of malignant tissue. In this paper, we propose an unsupervised metastatic liver tumour segmentation framework using a machine learning approach based on discriminant Grassmannian manifolds, which learns the appearance of tumours with respect to normal tissue. First, the framework learns within-class and between-class similarity distributions from a training set of images to discover the optimal manifold discrimination between normal and pathological tissue in the liver. Second, a conditional optimisation scheme computes non-local pairwise as well as pattern-based clique potentials from the manifold subspace to recognise regions with similar labellings and to incorporate global consistency in the segmentation process. The proposed framework was validated on a clinical database of 43 CT images from patients with metastatic liver cancer. Compared to state-of-the-art methods, our method achieves better performance on two separate datasets of metastatic liver tumours from different clinical sites, yielding an overall mean Dice similarity coefficient of [Formula: see text] over 50 tumours with an average volume of 27.3 mm³.


Subject(s)
Algorithms; Four-Dimensional Computed Tomography/methods; Liver Neoplasms/diagnostic imaging; Humans; Liver Neoplasms/secondary; Machine Learning
11.
J Biol Chem ; 287(24): 19997-20006, 2012 Jun 08.
Article in English | MEDLINE | ID: mdl-22523080

ABSTRACT

FGF21 stimulates FGFR1c activity in cells that co-express Klothoβ (KLB); however, relatively little is known about the interaction of these receptors at the plasma membrane. We measured the dynamics and distribution of fluorescent protein-tagged KLB and FGFR1c in living cells using fluorescence recovery after photobleaching and number and brightness analysis. We confirmed that fluorescent protein-tagged KLB translocates to the plasma membrane and is active when co-expressed with FGFR1c. FGF21-induced signaling was enhanced in cells treated with lactose, a competitive inhibitor of the galectin lattice, suggesting that lattice binding modulates KLB and/or FGFR1c activity. Fluorescence recovery after photobleaching analysis consistently revealed that lactose treatment increased KLB mobility at the plasma membrane but did not affect the mobility of FGFR1c. The association of endogenous KLB with the galectin lattice was also confirmed by co-immunoprecipitation with galectin-3. KLB mobility increased when co-expressed with FGFR1c, suggesting that the two receptors form a heterocomplex independent of the galectin lattice. Number and brightness analysis revealed that KLB and FGFR1c behave as monomers and dimers at the plasma membrane, respectively. Co-expression resulted in monomeric expression of KLB and FGFR1c, consistent with formation of a 1:1 heterocomplex. Subsequent addition of FGF21 induced FGFR1 dimerization without changing KLB aggregate size, suggesting formation of a 1:2 KLB-FGFR1c signaling complex. Overall, these data suggest that KLB and FGFR1 form a 1:1 heterocomplex independent of the galectin lattice that transitions to a 1:2 complex upon the addition of FGF21.


Subject(s)
Cell Membrane/metabolism; Fibroblast Growth Factors/metabolism; Membrane Proteins/metabolism; Multiprotein Complexes/metabolism; Receptor, Fibroblast Growth Factor, Type 1/metabolism; Signal Transduction/physiology; Animals; Cell Membrane/genetics; Fibroblast Growth Factors/genetics; Galectin 3/genetics; Galectin 3/metabolism; HEK293 Cells; HeLa Cells; Humans; Klotho Proteins; Membrane Proteins/genetics; Mice; Multiprotein Complexes/genetics; Protein Multimerization/physiology; Protein Transport/physiology; Receptor, Fibroblast Growth Factor, Type 1/genetics