Results 1 - 11 of 11
1.
Hepatol Commun; 6(10): 2901-2913, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35852311

ABSTRACT

Hepatocellular carcinoma (HCC) can be potentially discovered from abdominal computed tomography (CT) studies under varied clinical scenarios (e.g., fully dynamic contrast-enhanced [DCE] studies, noncontrast [NC] plus venous phase [VP] abdominal studies, or NC-only studies). Each scenario presents its own clinical challenges that could benefit from computer-aided detection (CADe) tools. We investigate whether a single CADe model can be made flexible enough to handle different contrast protocols and whether this flexibility imparts performance gains. We developed a flexible three-dimensional deep-learning algorithm, called heterophase volumetric detection (HPVD), that can accept any combination of contrast-phase inputs with adjustable sensitivity depending on the clinical purpose. We trained HPVD on 771 DCE CT scans to detect HCCs and evaluated it on 164 positive cases and 206 controls. We compared performance against six clinical readers: two radiologists, two hepatopancreaticobiliary surgeons, and two hepatologists. The area under the curve of the localization receiver operating characteristic for NC-only, NC plus VP, and full DCE CT yielded 0.71 (95% confidence interval [CI], 0.64-0.77), 0.81 (95% CI, 0.75-0.87), and 0.89 (95% CI, 0.84-0.93), respectively. At a high-sensitivity operating point of 80% on DCE CT, HPVD achieved 97% specificity, comparable to measured physician performance. We also demonstrated performance improvements over more typical, less flexible nonheterophase detectors. Conclusion: A single deep-learning algorithm can be effectively applied to diverse HCC detection clinical scenarios, indicating that HPVD could serve as a useful clinical aid for at-risk and opportunistic HCC surveillance.
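A minimal sketch of the flexible-input idea described above, not the authors' released code: each available contrast phase is encoded by a shared 3D encoder and the per-phase features are mean-pooled, so one set of weights serves NC-only, NC+VP, or full DCE inputs. Module and variable names are illustrative, and the real HPVD architecture is certainly deeper.

```python
import torch
import torch.nn as nn

class HeterophaseDetector(nn.Module):
    """Toy detector that accepts any subset of contrast phases."""
    def __init__(self, feat=16):
        super().__init__()
        self.encode = nn.Sequential(               # shared across phases
            nn.Conv3d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv3d(feat, 1, 1)          # voxelwise lesion logit

    def forward(self, phases):                     # list of (B, 1, D, H, W)
        feats = torch.stack([self.encode(p) for p in phases])
        fused = feats.mean(dim=0)                  # permutation-invariant fusion
        return self.head(fused)

model = HeterophaseDetector()
nc = torch.randn(1, 1, 8, 32, 32)                  # noncontrast volume
vp = torch.randn(1, 1, 8, 32, 32)                  # venous-phase volume
print(model([nc]).shape)                           # NC-only study
print(model([nc, vp]).shape)                       # NC + VP study
```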


Subject(s)
Carcinoma, Hepatocellular; Liver Neoplasms; Algorithms; Carcinoma, Hepatocellular/diagnosis; Contrast Media; Humans; Liver Neoplasms/diagnosis; Tomography, X-Ray Computed/methods
2.
IEEE Trans Med Imaging; 41(10): 2658-2669, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35442886

ABSTRACT

Radiological images such as computed tomography (CT) and X-rays render anatomy with intrinsic structure. Being able to reliably locate the same anatomical structure across varying images is a fundamental task in medical image analysis. In principle, landmark detection or semantic segmentation could be used for this task, but to work well these require large amounts of labeled data for each anatomical structure and sub-structure of interest. A more universal approach would learn the intrinsic structure from unlabeled images. We introduce such an approach, called Self-supervised Anatomical eMbedding (SAM). SAM generates a semantic embedding for each image pixel that describes its anatomical location or body part. To produce such embeddings, we propose a pixel-level contrastive learning framework. A coarse-to-fine strategy ensures that both global and local anatomical information are encoded, and negative-sample selection strategies are designed to enhance the embeddings' discriminability. Using SAM, one can label any point of interest on a template image and then locate the same body part in other images by a simple nearest-neighbor search. We demonstrate the effectiveness of SAM in multiple tasks with 2D and 3D image modalities. On a chest CT dataset with 19 landmarks, SAM outperforms widely used registration algorithms while taking only 0.23 seconds for inference. On two X-ray datasets, SAM, with only one labeled template image, surpasses supervised methods trained on 50 labeled images. We also apply SAM to whole-body follow-up lesion matching in CT and obtain an accuracy of 91%. SAM can further be applied to improve image registration and to initialize CNN weights.
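The lookup step the abstract describes, labeling one point on a template and finding it elsewhere, reduces to a nearest-neighbor search over embedding maps. A small sketch under the assumption that the embeddings are already computed and L2-normalized; it is not the authors' implementation:

```python
import numpy as np

def locate(template_emb, query_emb, point):
    # template_emb / query_emb: (H, W, C) L2-normalized embedding maps.
    # Returns the (row, col) in the query whose embedding best matches
    # the labeled template point (cosine similarity = dot product here).
    q = template_emb[point]
    sims = query_emb.reshape(-1, query_emb.shape[-1]) @ q
    return np.unravel_index(sims.argmax(), query_emb.shape[:2])

rng = np.random.default_rng(0)
emb = rng.normal(size=(64, 64, 128))
emb /= np.linalg.norm(emb, axis=-1, keepdims=True)
print(locate(emb, emb, (10, 20)))   # identical maps -> recovers (10, 20)
```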


Subject(s)
Imaging, Three-Dimensional; Tomography, X-Ray Computed; Algorithms; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Radiography; Supervised Machine Learning; Tomography, X-Ray Computed/methods
3.
Clin Cancer Res; 27(14): 3948-3959, 2021 Jul 15.
Article in English | MEDLINE | ID: mdl-33947697

ABSTRACT

PURPOSE: Accurate prognostic stratification of patients with oropharyngeal squamous cell carcinoma (OPSCC) is crucial. We developed an objective, robust, fully automated deep learning-based tool, the DeepPET-OPSCC biomarker, for predicting overall survival (OS) in OPSCC from [18F]fluorodeoxyglucose (FDG)-PET imaging. EXPERIMENTAL DESIGN: The DeepPET-OPSCC prediction model was built and tested internally on a discovery cohort (n = 268) by integrating five convolutional neural network models for volumetric segmentation and ten models for OS prognostication. Two external test cohorts were enrolled to assess the DeepPET-OPSCC performance and goodness of fit: the first was drawn from The Cancer Imaging Archive (TCIA) database (n = 353), and the second was a clinical deployment cohort (n = 31). RESULTS: After adjustment for potential confounders, DeepPET-OPSCC was found to be an independent predictor of OS in both the discovery and TCIA test cohorts [HR = 2.07; 95% confidence interval (CI), 1.31-3.28 and HR = 2.39; 95% CI, 1.38-4.16; both P = 0.002]. The tool also showed good predictive performance, with a c-index of 0.707 (95% CI, 0.658-0.757) in the discovery cohort, 0.689 (95% CI, 0.621-0.757) in the TCIA test cohort, and 0.787 (95% CI, 0.675-0.899) in the clinical deployment test cohort; the average calculation time was 2 minutes per exam. The integrated nomogram of DeepPET-OPSCC and clinical risk factors significantly outperformed the clinical model [AUC at 5 years: 0.801 (95% CI, 0.727-0.874) vs. 0.749 (95% CI, 0.649-0.842); P = 0.031] in the TCIA test cohort. CONCLUSIONS: DeepPET-OPSCC achieved accurate OS prediction in patients with OPSCC and enabled an objective, unbiased, and rapid assessment for OPSCC prognostication.
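For reference, the c-index reported above is Harrell's concordance index: the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed survival. A plain O(n²) sketch with made-up numbers, not the study's evaluation code:

```python
import numpy as np

def concordance_index(time, event, risk):
    # A pair (i, j) is comparable when patient i had an observed event
    # before patient j's time; it is concordant when risk[i] > risk[j].
    num = den = 0.0
    for i in range(len(time)):
        for j in range(len(time)):
            if event[i] and time[i] < time[j]:
                den += 1
                num += 1.0 if risk[i] > risk[j] else 0.5 if risk[i] == risk[j] else 0.0
    return num / den

time = np.array([5.0, 8.0, 12.0, 20.0])      # months to event or censoring
event = np.array([1, 1, 0, 1])               # 1 = event observed
risk = np.array([0.9, 0.7, 0.2, 0.1])        # model risk scores
print(concordance_index(time, event, risk))  # 1.0, perfectly concordant
```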


Subject(s)
Deep Learning; Fluorodeoxyglucose F18; Head and Neck Neoplasms/diagnostic imaging; Head and Neck Neoplasms/mortality; Oropharyngeal Neoplasms/diagnostic imaging; Oropharyngeal Neoplasms/mortality; Positron-Emission Tomography; Radiopharmaceuticals; Squamous Cell Carcinoma of Head and Neck/diagnostic imaging; Squamous Cell Carcinoma of Head and Neck/mortality; Cohort Studies; Female; Humans; Male; Middle Aged; Predictive Value of Tests; Prognosis; Survival Rate
4.
IEEE Trans Med Imaging; 40(10): 2759-2770, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33370236

ABSTRACT

Large-scale datasets with high-quality labels are desired for training accurate deep learning models. However, due to annotation cost, datasets in medical imaging are often either partially labeled or small. For example, DeepLesion is a large-scale CT image dataset containing lesions of various types, but it also has many unlabeled lesions (missing annotations). When a lesion detector is trained on a partially labeled dataset, the missing annotations generate incorrect negative signals and degrade performance. Besides DeepLesion, there are several small single-type datasets, such as LUNA for lung nodules and LiTS for liver tumors. These datasets have heterogeneous label scopes, i.e., different lesion types are labeled in different datasets while other types are ignored. In this work, we aim to develop a universal lesion detection algorithm that detects a variety of lesions while tackling the problem of heterogeneous and partial labels. First, we build a simple yet effective lesion detection framework named Lesion ENSemble (LENS). LENS can efficiently learn from multiple heterogeneous lesion datasets in a multi-task fashion and leverage their synergy by proposal fusion. Next, we propose strategies to mine missing annotations from partially labeled datasets by exploiting clinical prior knowledge and cross-dataset knowledge transfer. Finally, we train our framework on four public lesion datasets and evaluate it on 800 manually labeled sub-volumes in DeepLesion. Our method achieves a 49% relative improvement in average sensitivity over the current state-of-the-art approach. We have publicly released our manual 3D annotations of DeepLesion at https://github.com/viggin/DeepLesion_manual_test_set.
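The abstract does not spell out the proposal-fusion rule, but a common baseline for merging detections from several dataset-specific heads is to pool them and apply greedy non-maximum suppression. A generic sketch of that baseline, with illustrative names, rather than LENS's exact fusion:

```python
import numpy as np

def iou(box, boxes):
    # box: (4,) [x1, y1, x2, y2]; boxes: (N, 4). Intersection over union.
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def fuse_proposals(proposal_sets, iou_thr=0.5):
    # Pool [x1, y1, x2, y2, score] rows from every head, then greedily
    # keep the highest-scoring box and suppress overlapping ones.
    p = np.concatenate(proposal_sets)
    order = p[:, 4].argsort()[::-1]
    keep = []
    while order.size:
        i, order = order[0], order[1:]
        keep.append(i)
        order = order[iou(p[i, :4], p[order, :4]) < iou_thr]
    return p[keep]
```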


Subject(s)
Algorithms; Tomography, X-Ray Computed; Radiography
5.
IEEE Trans Med Imaging; 40(1): 59-70, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32894709

ABSTRACT

The acquisition of large-scale medical image data, necessary for training machine learning algorithms, is hampered by the associated expert-driven annotation costs. Mining hospital archives can address this problem, but labels are often incomplete or noisy; e.g., 50% of the lesions in DeepLesion are left unlabeled. Effective label-harvesting methods are therefore critical. This is the goal of our work, where we introduce Lesion-Harvester, a powerful system for harvesting missing annotations from lesion datasets at high precision. Accepting the need for some degree of expert labor, we use a small fully labeled image subset to intelligently mine annotations from the remainder. To do this, we chain together a highly sensitive lesion proposal generator (LPG) and a very selective lesion proposal classifier (LPC). Using a new hard-negative suppression loss, the resulting harvested and hard-negative proposals are then employed to iteratively fine-tune the LPG. While our framework is generic, we optimize performance by proposing a new 3D contextual LPG and by using a global-local multi-view LPC. Experiments on DeepLesion demonstrate that Lesion-Harvester can discover an additional 9,805 lesions at a precision of 90%. We publicly release the harvested lesions, along with a new test set of completely annotated DeepLesion volumes. We also present a pseudo-3D IoU evaluation metric that corresponds much better to the real 3D IoU than current DeepLesion evaluation metrics. To quantify the downstream benefits of Lesion-Harvester, we show that augmenting the DeepLesion annotations with our harvested lesions allows state-of-the-art detectors to boost their average precision by 7 to 10%.
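The harvesting loop itself can be summarized in a few lines. In this sketch, `lpg` and `lpc` are placeholders for the trained proposal generator and classifier (hypothetical callables, not a released API), and the thresholds are illustrative:

```python
def harvest(volumes, lpg, lpc, keep_thr=0.95, hard_neg_thr=0.05):
    # Chain a high-sensitivity generator with a selective classifier:
    # confident positives are harvested as new annotations, confident
    # negatives become hard negatives for the next LPG fine-tuning round.
    mined, hard_negatives = [], []
    for vol in volumes:
        for box in lpg(vol):          # candidate lesion boxes
            score = lpc(vol, box)     # probability the box is a lesion
            if score >= keep_thr:
                mined.append((vol, box))
            elif score <= hard_neg_thr:
                hard_negatives.append((vol, box))
    return mined, hard_negatives
```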


Subject(s)
Algorithms; Machine Learning
6.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 1637-1640, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018309

ABSTRACT

Karyotyping, consisting of single-chromosome segmentation and classification, is widely used in cytogenetic analysis for chromosome abnormality detection. Many studies have reported automatic chromosome classification with high accuracy; nevertheless, they usually require manual chromosome segmentation beforehand. There are two critical issues in automatic chromosome segmentation: 1) scarce annotated images for model training, and 2) multiple region combinations that can form single chromosomes. In this study, two simulation strategies are proposed for training data augmentation to alleviate data scarcity. In addition, we present an optimization-based shape learning method to evaluate the shape of candidate single chromosomes, which achieves its global minimum loss when segmented regions are correctly combined. Experiments on a public dataset demonstrate the effectiveness of the proposed method. The data simulation strategy significantly improved segmentation, increasing the Dice coefficient by 15.8% on non-overlapped regions and 46.3% on overlapped regions. Moreover, the proposed optimization-based method separates overlapped chromosomes with an accuracy of 96.2%.
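One way to read the optimization step: enumerate small combinations of segmented regions and keep the grouping whose merged shape scores best under the learned shape model. A brute-force sketch, where `shape_loss` stands in for the learned shape evaluator (an assumption, since the abstract does not give its form):

```python
from itertools import combinations

def best_grouping(regions, shape_loss, max_parts=3):
    # Try every combination of up to max_parts regions; the correct
    # grouping should attain the global minimum of the shape loss.
    best, best_cost = None, float("inf")
    for k in range(1, max_parts + 1):
        for combo in combinations(regions, k):
            cost = shape_loss(combo)
            if cost < best_cost:
                best, best_cost = combo, cost
    return best, best_cost
```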


Subject(s)
Algorithms; Chromosomes; Karyotyping
7.
Med Image Anal; 65: 101766, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32623276

ABSTRACT

Although deep learning-based approaches have achieved great success in medical image segmentation, they usually require large amounts of well-annotated data, which can be extremely expensive to obtain in the field of medical image analysis. Unlabeled data, on the other hand, is much easier to acquire. Semi-supervised learning and unsupervised domain adaptation both take advantage of unlabeled data, and they are closely related to each other. In this paper, we propose uncertainty-aware multi-view co-training (UMCT), a unified framework that addresses both tasks for volumetric medical image segmentation. Our framework efficiently utilizes unlabeled data for better performance. We first rotate and permute the 3D volumes into multiple views and train a 3D deep network on each view. We then apply co-training by enforcing multi-view consistency on unlabeled data, where an uncertainty estimate for each view is utilized to achieve accurate labeling. Experiments on the NIH pancreas segmentation dataset and a multi-organ segmentation dataset show state-of-the-art performance of the proposed framework on semi-supervised medical image segmentation. Under unsupervised domain adaptation settings, we validate the effectiveness of this work by adapting our multi-organ segmentation model to two pathological organs from the Medical Segmentation Decathlon datasets. Additionally, we show that our UMCT-DA model can effectively handle the challenging situation where labeled source data is inaccessible, demonstrating strong potential for real-world applications.
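The co-training signal can be sketched as uncertainty-weighted fusion of the other views' predictions into a pseudo-label. This illustrates the weighting idea only and leaves out how UMCT actually estimates the uncertainty:

```python
import torch

def fused_pseudo_label(view_probs, view_uncertainty, eps=1e-6):
    # view_probs: list of (B, C, D, H, W) softmax outputs, one per view
    # (already rotated back to a common frame); view_uncertainty: list of
    # per-voxel uncertainty maps (B, 1, D, H, W). Confident views receive
    # larger weights in the fused training target.
    weights = torch.stack([1.0 / (u + eps) for u in view_uncertainty])
    weights = weights / weights.sum(dim=0, keepdim=True)
    return (torch.stack(view_probs) * weights).sum(dim=0)
```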


Subject(s)
Supervised Machine Learning; Humans; Uncertainty
8.
Pattern Recognit; 86: 368-375, 2019 Feb.
Article in English | MEDLINE | ID: mdl-31105339

ABSTRACT

The muscular dystrophies (MDs) are a diverse group of rare genetic diseases characterized by progressive loss of muscle strength and muscle damage. Since there is no cure for muscular dystrophy and clinical outcome measures are limited, it is critical to assess disease progression objectively. Muscle replacement by fibrofatty tissue, measured on imaging, has been shown to be a robust biomarker for monitoring disease progression in Duchenne muscular dystrophy (DMD). In magnetic resonance imaging (MRI) data, specific texture patterns are found to correlate with certain MD subtypes and thus offer a potential route to automatic assessment. In this paper, we first apply state-of-the-art convolutional neural networks (CNNs) to perform accurate MD image classification and then propose an effective visualization method to highlight the important image textures. On a dystrophic MRI dataset, the best CNN model delivers a 91.7% classification accuracy, which significantly outperforms non-deep learning methods; e.g., an improvement of more than 40% over the traditional mean fat fraction (MFF) criterion for classifying DMD versus congenital muscular dystrophy (CMD). After investigating every single neuron in the top layer of the CNN model, we found that the CNN's superior classification ability can be explained by the fact that 91 and 118 of its neurons performed better than the MFF criterion under Euclidean and Chi-square distance measurements, respectively. To further interpret the CNN's predictions, we tested an improved class activation mapping (ICAM) method to visualize the important regions in the MRI images. With this ICAM, the CNN is able to locate the most discriminative texture patterns of DMD in the soleus, lateral gastrocnemius, and medial gastrocnemius; for CMD, the critical texture patterns are highlighted in the soleus, tibialis posterior, and peroneus.
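ICAM itself is not specified in the abstract, but it builds on standard class activation mapping (CAM), which weights the final convolutional feature maps by the classifier weights of one class. A minimal CAM sketch with toy tensors:

```python
import torch

def class_activation_map(conv_features, fc_weight, target_class):
    # conv_features: (C, H, W) activations of the last conv layer;
    # fc_weight: (num_classes, C) weights of the final linear layer.
    cam = torch.einsum("c,chw->hw", fc_weight[target_class], conv_features)
    cam = torch.relu(cam)
    return cam / (cam.max() + 1e-8)              # normalize to [0, 1]

feats = torch.rand(64, 7, 7)                     # toy feature maps
w = torch.randn(2, 64)                           # toy DMD-vs-CMD classifier
print(class_activation_map(feats, w, 0).shape)   # torch.Size([7, 7])
```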

9.
Med Image Anal; 52: 174-184, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30594770

ABSTRACT

Synthesized medical images have several important applications. For instance, they can be used as an intermedium in cross-modality image registration or as augmented training samples to boost the generalization capability of a classifier. In this work, we propose a generic cross-modality synthesis approach with the following targets: 1) synthesizing realistic-looking 2D/3D images without needing paired training data, 2) ensuring consistent anatomical structures, which could otherwise be changed by geometric distortion in cross-modality synthesis, and 3) more importantly, improving volume segmentation by using synthetic data for modalities with limited training samples. We show that these goals can be achieved with an end-to-end 2D/3D convolutional neural network (CNN) composed of mutually beneficial generators and segmentors for the image synthesis and segmentation tasks. The generators are trained with an adversarial loss, a cycle-consistency loss, and a shape-consistency loss (supervised by the segmentors) to reduce geometric distortion. From the segmentation view, the segmentors are boosted by synthetic data from the generators in an online manner. Generators and segmentors promote each other alternately in an end-to-end training fashion. We validate the proposed method on three datasets, including cardiovascular CT and magnetic resonance imaging (MRI), abdominal CT and MRI, and mammography X-rays from different data domains, showing that the two tasks are beneficial to each other and that coupling them yields better performance than solving each in isolation.
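A sketch of the three-part generator objective described above: an adversarial term, a cycle-consistency term, and a shape-consistency term supervised by a segmentor. The loss forms (LSGAN-style adversarial term) and weights are assumptions, not the paper's exact settings:

```python
import torch
import torch.nn.functional as F

def generator_loss(d_fake, real, cycled, seg_logits_fake, seg_labels_real,
                   lam_cycle=10.0, lam_shape=1.0):
    # d_fake: discriminator scores on synthesized images; cycled: result
    # of mapping the synthesized image back to the source modality;
    # seg_logits_fake: segmentor output on the synthesized image;
    # seg_labels_real: integer label map of the source anatomy.
    adv = F.mse_loss(d_fake, torch.ones_like(d_fake))  # fool discriminator
    cycle = F.l1_loss(cycled, real)                    # round-trip fidelity
    shape = F.cross_entropy(seg_logits_fake, seg_labels_real)
    return adv + lam_cycle * cycle + lam_shape * shape
```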


Subject(s)
Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Mammography; Tomography, X-Ray Computed
10.
Med Image Comput Comput Assist Interv; 9902: 183-190, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27924318

ABSTRACT

To deal with ambiguous image appearances in cell segmentation, high-level shape modeling has been introduced to delineate cell boundaries. However, shape modeling usually requires sufficient annotated training shapes, which are often labor-intensive to obtain or simply unavailable. Meanwhile, when applying the model to a different dataset, the tedious annotation process must be repeated to generate enough training data, which significantly limits the model's applicability. In this paper, we propose to transfer shape modeling learned from an existing but different dataset (e.g., lung cancer) to assist cell segmentation in a new target dataset (e.g., skeletal muscle) without expensive manual annotations. Considering the intrinsic geometric structure of cell shapes, we incorporate the shape transfer model into a sparse representation framework with a manifold embedding constraint, and provide an efficient algorithm to solve the optimization problem. The proposed algorithm is tested on multiple microscopy image datasets with different tissue and staining preparations, and the experiments demonstrate its effectiveness.
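The core of the transfer step can be illustrated as sparse coding: reconstruct a target-domain shape from a dictionary of source-domain shapes. The sketch below omits the paper's manifold-embedding constraint and uses an off-the-shelf Lasso solver, so it is a simplification rather than the proposed algorithm:

```python
import numpy as np
from sklearn.linear_model import Lasso

def transfer_shape(target_shape, source_dict, alpha=0.01):
    # source_dict: (K, d) matrix whose rows are vectorized source shapes
    # (e.g., from lung cancer images); target_shape: (d,) shape from the
    # new dataset. The sparse reconstruction serves as a shape prior.
    coder = Lasso(alpha=alpha, positive=True, max_iter=10000)
    coder.fit(source_dict.T, target_shape)    # columns are dictionary atoms
    return source_dict.T @ coder.coef_

rng = np.random.default_rng(1)
source_dict = rng.random((50, 200))           # 50 source shapes, 200-dim
target = 0.6 * source_dict[3] + 0.4 * source_dict[17]
print(np.round(transfer_shape(target, source_dict)[:5], 3))
```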


Subject(s)
Cell Shape; Microscopy/methods; Muscle, Skeletal/cytology; Algorithms; Datasets as Topic; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Machine Learning; Muscle, Skeletal/diagnostic imaging; Neuroendocrine Tumors/diagnostic imaging; Neuroendocrine Tumors/pathology; Pancreatic Neoplasms/diagnostic imaging; Pancreatic Neoplasms/pathology; Reproducibility of Results; Sensitivity and Specificity
11.
Med Image Comput Comput Assist Interv; 9901: 442-450, 2016 Oct.
Article in English | MEDLINE | ID: mdl-28083570

ABSTRACT

Automated pancreas segmentation in medical images is a prerequisite for many clinical applications, such as diabetes inspection, pancreatic cancer diagnosis, and surgical planning. In this paper, we formulate pancreas segmentation in magnetic resonance imaging (MRI) scans as a graph-based decision fusion process combined with deep convolutional neural networks (CNNs). Our approach conducts pancreatic detection and boundary segmentation with two types of CNN models: 1) a tissue detection step to differentiate pancreas and non-pancreas tissue using spatial intensity context, and 2) a boundary detection step to locate the semantic boundaries of the pancreas. The detection results of the two networks are fused together as the initialization of a conditional random field (CRF) framework to obtain the final segmentation output. Our approach achieves a mean Dice similarity coefficient (DSC) of 76.1% with a standard deviation of 8.7% on a dataset containing 78 abdominal MRI scans, the best result compared with other state-of-the-art methods.
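The fusion-then-CRF initialization can be sketched as combining the two networks' probability maps into a unary potential. The linear fusion rule and its weight are assumptions; the pairwise term and CRF inference are omitted:

```python
import numpy as np

def fused_unary(p_tissue, p_boundary, w=0.7):
    # p_tissue / p_boundary: per-pixel pancreas probabilities from the
    # tissue-detection and boundary-detection CNNs. Returns the negative
    # log-likelihood used as the CRF's unary potential.
    p = np.clip(w * p_tissue + (1.0 - w) * p_boundary, 1e-6, 1 - 1e-6)
    return -np.log(p)
```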


Subject(s)
Algorithms; Magnetic Resonance Imaging/methods; Neural Networks, Computer; Pancreas/diagnostic imaging; Humans; Pancreas/anatomy & histology; Reproducibility of Results; Sensitivity and Specificity