Results 1 - 20 of 56
1.
PLoS Comput Biol ; 19(9): e1011432, 2023 09.
Article in English | MEDLINE | ID: mdl-37733781

ABSTRACT

Multiplex imaging is a powerful tool to analyze the structural and functional states of cells in their morphological and pathological contexts. However, hypothesis testing with multiplex imaging data is a challenging task due to the extent and complexity of the information obtained. Various computational pipelines have been developed and validated to extract knowledge from specific imaging platforms. A common problem with customized pipelines is their reduced applicability across different imaging platforms: Every multiplex imaging technique exhibits platform-specific characteristics in terms of signal-to-noise ratio and acquisition artifacts that need to be accounted for to yield reliable and reproducible results. We propose a pixel classifier-based image preprocessing step that aims to minimize platform-dependency for all multiplex image analysis pipelines. Signal detection and noise reduction as well as artifact removal can be posed as a pixel classification problem in which all pixels in multiplex images can be assigned to two general classes of either I) signal of interest or II) artifacts and noise. The resulting feature representation maps contain pixel-scale representations of the input data, but exhibit significantly increased signal-to-noise ratios with normalized pixel values as output data. We demonstrate the validity of our proposed image preprocessing approach by comparing the results of two well-accepted and widely-used image analysis pipelines.
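For illustration, the two-class pixel-classification idea can be prototyped with an off-the-shelf classifier; the feature set and classifier below are assumptions for a minimal sketch, not the authors' published pipeline.

```python
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Stack simple per-pixel features: raw intensity, local mean, local std."""
    local_mean = ndimage.uniform_filter(img, size=5)
    local_sq_mean = ndimage.uniform_filter(img ** 2, size=5)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0))
    return np.stack([img, local_mean, local_std], axis=-1).reshape(-1, 3)

# img: one channel of a multiplex image; mask: 1 = signal of interest,
# 0 = artifacts/noise (sparse hand annotations would be used in practice)
rng = np.random.default_rng(0)
img = rng.random((128, 128)).astype(np.float32)
mask = (img > 0.8).astype(int)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(pixel_features(img), mask.ravel())

# The class-1 probability map plays the role of the normalized, denoised
# feature representation handed to downstream analysis pipelines.
prob_map = clf.predict_proba(pixel_features(img))[:, 1].reshape(img.shape)
```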


Subjects
Image Processing, Computer-Assisted ; Tomography, X-Ray Computed ; Image Processing, Computer-Assisted/methods ; Tomography, X-Ray Computed/methods ; Artifacts ; Signal-To-Noise Ratio ; Algorithms
2.
BMC Bioinformatics ; 20(1): 509, 2019 Oct 22.
Article in English | MEDLINE | ID: mdl-31640559

ABSTRACT

Following publication of the original article [1], we have been notified of a few errors in the HTML version.

3.
BMC Bioinformatics ; 20(1): 472, 2019 Sep 14.
Article in English | MEDLINE | ID: mdl-31521104

ABSTRACT

BACKGROUND: Nucleus detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored to specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but several critical, open questions remain to be addressed. RESULTS: We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that for a specific target dataset, training with images from the same types of organs is usually necessary for nucleus detection. Although images can be visually similar due to the same staining technique and imaging protocol, deep models learned with images from different organs may not deliver desirable results and can require fine-tuning to be on a par with those trained with target data. We also observe that training with a mixture of target and non-target data does not always yield higher nucleus detection accuracy; proper data manipulation during model training may be required to achieve good performance. CONCLUSIONS: We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report several significant findings, some of which have not been reported in previous studies. The performance analysis and observations should be helpful for nucleus detection in microscopy images.
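A minimal PyTorch sketch of a pixel-to-pixel fully convolutional regression network of the kind described; the layer sizes and the Gaussian-dot proximity target are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FCNRegressor(nn.Module):
    """Toy pixel-to-pixel regression net: image -> nucleus proximity map.
    Local maxima of the predicted map are taken as nucleus positions."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one regression output per pixel
        )

    def forward(self, x):
        return self.decode(self.encode(x))

net = FCNRegressor()
x = torch.randn(1, 3, 64, 64)          # RGB microscopy patch
target = torch.rand(1, 1, 64, 64)      # blurred dot annotations (assumed)
loss = nn.MSELoss()(net(x), target)    # regression loss on the proximity map
loss.backward()
```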


Subjects
Image Interpretation, Computer-Assisted/methods ; Microscopy/methods ; Neural Networks, Computer ; Humans
4.
Pattern Recognit ; 86: 368-375, 2019 Feb.
Article in English | MEDLINE | ID: mdl-31105339

ABSTRACT

The muscular dystrophies (MDs) are a diverse group of rare genetic diseases characterized by progressive loss of muscle strength and muscle damage. Since there is no cure for muscular dystrophy and clinical outcome measures are limited, it is critical to assess MD progression objectively. Imaging of muscle replacement by fibrofatty tissue has been shown to be a robust biomarker for monitoring disease progression in Duchenne muscular dystrophy (DMD). In magnetic resonance imaging (MRI) data, specific texture patterns have been found to correlate with certain MD subtypes and thus offer a potential route to automatic assessment. In this paper, we first apply state-of-the-art convolutional neural networks (CNNs) to perform accurate MD image classification and then propose an effective visualization method to highlight the important image textures. On a dystrophic MRI dataset, the best CNN model delivers 91.7% classification accuracy, significantly outperforming non-deep-learning methods; for example, an improvement of more than 40% over the traditional mean fat fraction (MFF) criterion for DMD versus congenital muscular dystrophy (CMD) classification. After investigating every single neuron at the top layer of the CNN model, we found that the CNN's superior classification ability can be explained by 91 and 118 of its neurons performing better than the MFF criterion under Euclidean and Chi-square distance measures, respectively. To further interpret the CNN's predictions, we tested an improved class activation mapping (ICAM) method to visualize the important regions in the MRI images. With ICAM, CNNs are able to locate the most discriminative texture patterns of DMD in the soleus, lateral gastrocnemius, and medial gastrocnemius; for CMD, the critical texture patterns are highlighted in the soleus, tibialis posterior, and peroneus.
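For reference, plain class activation mapping, which ICAM builds on, can be sketched as follows; the paper's ICAM modifications are not reproduced here, and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weights, class_idx, out_size):
    """Plain CAM: weight the final conv feature maps by the classifier
    weights of the target class, sum across channels, and upsample onto
    the image. (ICAM refines this idea; details differ.)"""
    # features: (C, h, w) from the last conv layer; fc_weights: (n_classes, C)
    w = fc_weights[class_idx]                        # (C,)
    cam = torch.einsum("c,chw->hw", w, features)     # weighted channel sum
    cam = torch.relu(cam)
    cam = cam / (cam.max() + 1e-8)                   # normalize to [0, 1]
    return F.interpolate(cam[None, None], size=out_size,
                         mode="bilinear", align_corners=False)[0, 0]

cam = class_activation_map(torch.randn(32, 8, 8), torch.randn(2, 32),
                           class_idx=1, out_size=(256, 256))
```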

5.
Bioinformatics ; 30(7): 996-1002, 2014 Apr 01.
Article in English | MEDLINE | ID: mdl-24215030

ABSTRACT

MOTIVATION: The capacity to systematically search through large image collections and ensembles and detect regions exhibiting similar morphological characteristics is central to pathology diagnosis. Unfortunately, the primary methods used to search digitized, whole-slide histopathology specimens are slow and prone to inter- and intra-observer variability. The central objective of this research was to design, develop, and evaluate a content-based image retrieval system to assist doctors in quick and reliable content-based comparative search of similar prostate image patches. METHOD: Given a representative image patch (sub-image), the algorithm returns a ranked ensemble of image patches from throughout the whole-slide histology section that exhibit the most similar morphologic characteristics. This is accomplished by first performing a hierarchical search based on a newly developed hierarchical annular histogram (HAH). The set of candidates is then refined in a second processing stage by computing a color histogram from eight equally divided segments within each square annular bin of the original HAH. A demand-driven master-worker parallelization approach is employed to speed up the search. Using this strategy, the query patch is broadcast to all worker processes, and each worker process is dynamically assigned an image by the master process to search for and return a ranked list of similar patches. RESULTS: The algorithm was tested using digitized hematoxylin and eosin (H&E) stained prostate cancer specimens and achieved excellent image retrieval performance: the recall rate within the top 40 ranked retrieved image patches is ∼90%. AVAILABILITY AND IMPLEMENTATION: Both the testing data and source code can be downloaded from http://pleiad.umdnj.edu/CBII/Bioinformatics/.
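A simplified, grayscale take on the annular-histogram idea; the published HAH is hierarchical and color-based, and the ring and bin counts below are assumptions.

```python
import numpy as np

def annular_histogram(patch, n_rings=4, n_bins=16):
    """Concatenate intensity histograms over concentric square rings, a
    simplified sketch of the hierarchical annular histogram (HAH). The
    ring structure keeps the descriptor tolerant to rotation."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    yy, xx = np.mgrid[0:h, 0:w]
    # Chebyshev distance gives square (not circular) rings
    ring = np.maximum(np.abs(yy - cy) / (h / 2),
                      np.abs(xx - cx) / (w / 2)) * n_rings
    ring = np.minimum(ring.astype(int), n_rings - 1)
    feats = [np.histogram(patch[ring == r], bins=n_bins, range=(0, 1))[0]
             for r in range(n_rings)]
    return np.concatenate(feats)

desc = annular_histogram(np.random.rand(64, 64))   # query-patch descriptor
```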


Subjects
Algorithms ; Cluster Analysis ; Color ; Image Processing, Computer-Assisted
6.
BMC Bioinformatics ; 15: 310, 2014 Sep 19.
Article in English | MEDLINE | ID: mdl-25240495

ABSTRACT

BACKGROUND: Non-small cell lung cancer (NSCLC), the most common type of lung cancer, is a serious disease causing death in both men and women. Computer-aided diagnosis and survival prediction of NSCLC are of great importance in assisting diagnosis and personalizing therapy planning for lung cancer patients. RESULTS: In this paper we propose an integrated framework for NSCLC computer-aided diagnosis and survival analysis using novel image markers. The biomedical imaging informatics framework consists of cell detection, segmentation, classification, discovery of image markers, and survival analysis. A robust seed detection-guided cell segmentation algorithm is proposed to accurately segment each individual cell in digital images. Based on the cell segmentation results, an extensive set of cellular morphological features is extracted using efficient feature descriptors. Next, eight different classification techniques that can handle high-dimensional data are evaluated and compared for computer-aided diagnosis. The results show that random forest and AdaBoost offer the best classification performance for NSCLC. Finally, a Cox proportional hazards model is fitted by component-wise likelihood-based boosting. Significant image markers are discovered using bootstrap analysis, and the survival prediction performance of the model is evaluated. CONCLUSIONS: The proposed model has been applied to a lung cancer dataset containing 122 cases with complete clinical information. The classification performance exhibits high correlations between the discovered image markers and the subtypes of NSCLC. The survival analysis demonstrates the strong predictive power of the statistical model built from the discovered image markers.
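As an illustration of the survival-modeling step, here is a Cox proportional hazards fit on toy image-marker data. Note that the paper fits the model by component-wise likelihood-based boosting; this sketch substitutes lifelines' penalized maximum likelihood, and all column names and values are invented.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy stand-in for discovered image markers: one row per patient, with
# survival time (months) and an event (death) indicator.
df = pd.DataFrame({
    "marker_area":  [0.8, 1.2, 0.5, 1.9, 1.1, 0.7, 1.4, 0.6],
    "marker_shape": [0.3, 0.9, 0.2, 1.5, 0.8, 0.4, 1.0, 0.5],
    "time":         [24, 10, 36, 5, 18, 30, 8, 28],
    "event":        [0, 1, 0, 1, 1, 0, 1, 0],
})

# Ridge penalty stands in for the paper's component-wise boosting.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios for each candidate image marker
```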


Subjects
Carcinoma, Non-Small-Cell Lung/diagnosis ; Carcinoma, Non-Small-Cell Lung/pathology ; Diagnosis, Computer-Assisted/methods ; Lung Neoplasms/diagnosis ; Lung Neoplasms/pathology ; Adenocarcinoma/diagnosis ; Adenocarcinoma/pathology ; Adenocarcinoma of Lung ; Algorithms ; Carcinoma, Squamous Cell/diagnosis ; Carcinoma, Squamous Cell/pathology ; Female ; Humans ; Likelihood Functions ; Male ; Models, Statistical ; Survival Analysis
7.
BMC Bioinformatics ; 15: 287, 2014 Aug 26.
Article in English | MEDLINE | ID: mdl-25155691

ABSTRACT

BACKGROUND: The development of digital imaging technology is creating extraordinary levels of accuracy that support improved reliability in different aspects of image analysis, such as content-based image retrieval, image segmentation, and classification. This has dramatically increased the volume and rate at which data are generated. Together these facts make querying and sharing non-trivial and render centralized solutions infeasible. Moreover, in many cases these data are distributed and must be shared across multiple institutions, requiring decentralized solutions. In this context, a new generation of data/information-driven applications must be developed to take advantage of the national advanced cyber-infrastructure (ACI), which enables investigators to seamlessly and securely interact with information/data distributed across geographically disparate resources. This paper presents the development and evaluation of a novel content-based image retrieval (CBIR) framework. The methods were tested extensively using both peripheral blood smears and renal glomeruli specimens, and the datasets and performance were evaluated by two pathologists to determine concordance. RESULTS: The CBIR algorithms that were developed can reliably retrieve candidate image patches exhibiting intensity and morphological characteristics most similar to a given query image, and can reliably discriminate among subtle staining differences and spatial pattern distributions. By integrating a newly developed dual-similarity relevance feedback module into the CBIR framework, the retrieval results were improved substantially. By aggregating the computational power of high-performance computing (HPC) and cloud resources, we demonstrated that the method can be executed in minutes on the Cloud compared to weeks using standard computers. CONCLUSIONS: In this paper, we present a set of newly developed CBIR algorithms and validate them using two different pathology applications, which are regularly evaluated in the practice of pathology. Comparative experimental results demonstrate excellent performance throughout a set of systematic studies. Additionally, we present and evaluate a framework to enable the execution of these algorithms across distributed resources, and show how parallel searching of content-wise similar images significantly reduces the overall computational time to ensure the practical utility of the proposed CBIR algorithms.


Subjects
Algorithms ; Diagnostic Imaging ; Information Storage and Retrieval/methods ; Pathology ; Feedback ; Pattern Recognition, Automated ; Reproducibility of Results
8.
IEEE Trans Biomed Eng ; 71(1): 247-257, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37471190

ABSTRACT

OBJECTIVE: Lesion detection with positron emission tomography (PET) imaging is critical for tumor staging, treatment planning, and advancing novel therapies to improve patient outcomes, especially for neuroendocrine tumors (NETs). Current lesion detection methods often require manual cropping of regions/volumes of interest (ROIs/VOIs) a priori, rely on multi-stage cascaded models, or use multi-modality imaging to detect lesions in PET images. This leads to significant inefficiency, high variability, and/or potential cumulative errors in lesion quantification. To tackle this issue, we propose a novel single-stage lesion detection method using only PET images. METHODS: We design and incorporate a new, plug-and-play codebook learning module into a U-Net-like neural network to promote lesion location-specific feature learning at multiple scales. We explicitly regularize the codebook learning with direct supervision at the network's multi-level hidden layers, forcing the network to learn multi-scale discriminative features for predicting lesion positions. The network automatically combines the predictions from the codebook learning module and other layers via a learnable fusion layer. RESULTS: We evaluate the proposed method on a real-world clinical 68Ga-DOTATATE PET image dataset, and our method produces significantly better lesion detection performance than recent state-of-the-art approaches. CONCLUSION: We present a novel deep learning method for single-stage lesion detection in PET imaging data, with no a priori ROI/VOI cropping, no multi-stage modeling, and no multi-modality data. SIGNIFICANCE: This study provides a new perspective for effective and efficient lesion identification in PET, potentially accelerating novel therapeutic regimen development for NETs and ultimately improving patient outcomes, including survival.
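A minimal sketch of a learnable fusion layer that combines prediction maps from multiple sources; the 1x1-convolution choice and shapes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LearnableFusion(nn.Module):
    """Fuse per-source lesion-probability maps with learned weights,
    a sketch of combining codebook-module and decoder predictions."""
    def __init__(self, n_sources):
        super().__init__()
        # 1x1 conv learns a weighted combination across prediction sources
        self.fuse = nn.Conv2d(n_sources, 1, kernel_size=1, bias=True)

    def forward(self, preds):            # preds: list of (B, 1, H, W) maps
        stacked = torch.cat(preds, dim=1)
        return torch.sigmoid(self.fuse(stacked))

fusion = LearnableFusion(n_sources=3)
maps = [torch.rand(2, 1, 64, 64) for _ in range(3)]  # multi-scale predictions
out = fusion(maps)                                   # fused lesion map
```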


Subjects
Neuroendocrine Tumors ; Organometallic Compounds ; Humans ; Gallium Radioisotopes ; Positron-Emission Tomography/methods ; Neuroendocrine Tumors/pathology
9.
IEEE Trans Biomed Eng ; 71(2): 679-688, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37708016

ABSTRACT

OBJECTIVE: Deep neural networks have recently been applied to lesion identification in fluorodeoxyglucose (FDG) positron emission tomography (PET) images, but they typically rely on a large amount of well-annotated data for model training. This is extremely difficult to achieve for neuroendocrine tumors (NETs) because of the low incidence of NETs and the expense of lesion annotation in PET images. The objective of this study is to design a novel, adaptable deep learning method that uses no real lesion annotations, but instead low-cost, list mode-simulated data, for hepatic lesion detection in real-world clinical NET PET images. METHODS: We first propose a region-guided generative adversarial network (RG-GAN) for lesion-preserved image-to-image translation. Then, we design a data augmentation module specific to our list mode-simulated data and incorporate it into the RG-GAN to improve model training. Finally, we combine the RG-GAN, the data augmentation module, and a lesion detection neural network into a unified framework for joint-task learning to adaptively identify lesions in real-world PET data. RESULTS: The proposed method outperforms recent state-of-the-art lesion detection methods in real clinical 68Ga-DOTATATE PET images and produces very competitive performance with the target model trained on real lesion annotations. CONCLUSION: With RG-GAN modeling and specific data augmentation, we can obtain good lesion detection performance without using any real data annotations. SIGNIFICANCE: This study introduces an adaptable deep learning method for hepatic lesion identification in NETs, which can significantly reduce human effort for data annotation and improve model generalizability for lesion detection with PET imaging.


Subjects
Data Curation ; Neuroendocrine Tumors ; Humans ; Positron-Emission Tomography/methods ; Neural Networks, Computer ; Image Processing, Computer-Assisted/methods
10.
Bioengineering (Basel) ; 11(3)2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38534501

ABSTRACT

Deep learning (DL) algorithms used for DOTATATE PET lesion detection typically require large, well-annotated training datasets. These are difficult to obtain due to the low incidence of gastroenteropancreatic neuroendocrine tumors (GEP-NETs) and the high cost of manual annotation. Furthermore, networks trained and tested with data acquired from site-specific PET/CT instrumentation, acquisition, and processing protocols show reduced performance when tested with offsite data. This lack of generalizability requires even larger, more diverse training datasets. The objective of this study is to investigate the feasibility of improving DL algorithm performance by better matching the background noise in training datasets to higher-noise, out-of-domain testing datasets. 68Ga-DOTATATE PET/CT datasets were obtained from two scanners: Scanner1, a state-of-the-art digital PET/CT (GE DMI PET/CT; n = 83 subjects), and Scanner2, an older-generation analog PET/CT (GE STE; n = 123 subjects). Set1, the dataset from Scanner1, was reconstructed with standard clinical parameters (5 min; Q.Clear) and list-mode reconstructions (VPFXS; 2, 3, 4, and 5 min). Set2, the data from Scanner2 representing out-of-domain clinical scans, used standard iterative reconstruction (5 min; OSEM). A deep neural network was trained with each dataset: Network1 for Scanner1 and Network2 for Scanner2. DL performance (Network1) was tested with out-of-domain test data (Set2). To evaluate the effect of training sample size, we tested DL model performance using fractions (25%, 50%, and 75%) of Set1 for training. The Scanner1 list-mode 2-min reconstructed data demonstrated the noise level most similar to that of Set2, resulting in the best performance (F1 = 0.713). This was not significantly different from the upper-bound performance obtained with in-domain training for Network2 (F1 = 0.755; p-value = 0.103). Regarding sample size, the F1 score significantly increased from 25% training data (F1 = 0.478) to 100% training data (F1 = 0.713; p < 0.001). List-mode data from modern PET scanners can thus be reconstructed to better match the noise properties of older scanners. Reusing existing data and their associated annotations dramatically reduces the cost and effort of generating these datasets and significantly improves the performance of existing DL algorithms. List-mode reconstructions can provide an efficient, low-cost method to improve DL algorithm generalizability.

11.
Med Image Anal ; 90: 102969, 2023 12.
Article in English | MEDLINE | ID: mdl-37802010

ABSTRACT

Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation in which (unlabeled) target training data is limited, a setting previous work has seldom explored for cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets acquired with different imaging modalities, staining protocols, and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance compared with the reference baseline and is superior to, or on par with, fully supervised models trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.


Subjects
Histological Techniques ; Learning ; Humans ; Microscopy ; Neural Networks, Computer ; Staining and Labeling ; Image Processing, Computer-Assisted
12.
Am J Nucl Med Mol Imaging ; 13(1): 33-42, 2023.
Article in English | MEDLINE | ID: mdl-36923602

ABSTRACT

BACKGROUND: Deep learning (DL) algorithms have shown promise in identifying and quantifying lesions in PET/CT. However, the accuracy and generalizability of these algorithms rely on large, diverse datasets, which are time- and labor-intensive to curate. Modern PET/CT scanners may acquire data in list mode, allowing multiple reconstructions of the same datasets with different parameters and imaging times. These reconstructions can provide a wide range of image characteristics to increase the size and diversity of datasets. Training algorithms with shorter imaging times and higher noise requires that lesions remain detectable. The purpose of this study is to model and predict the contrast-to-noise ratio (CNR) for shorter imaging times based on the CNR from longer-duration, lower-noise images of 68Ga-DOTATATE PET hepatic lesions, and to identify a threshold above which lesions remain detectable. METHODS: 68Ga-DOTATATE subjects (n=20) with hepatic lesions were divided into two subgroups. The "Model" group (n=4 subjects; n=9 lesions; n=36 datapoints) was used to identify the relationship between CNR and imaging time. The "Test" group (n=16 subjects; n=44 lesions; n=176 datapoints) was used to evaluate the model's predictions. RESULTS: CNR plotted as a function of imaging time for the Model subjects was very well fit by a quadratic model. For the remaining subjects, the measured CNR showed a very high linear correlation with the predicted CNR (R2 > 0.97) for all imaging durations. From the model, a threshold of CNR = 6.9 at 5 minutes predicted CNR > 5 at 2 minutes. Lesions in 2-minute images whose 5-minute CNR exceeded this threshold were visually assessed and rated 4 or 5 (probably positive or definitely positive), confirming 100% lesion detectability in the shorter 2-minute PET images. CONCLUSIONS: The CNR for shorter DOTATATE PET imaging times can be accurately predicted using list-mode reconstructions of longer acquisitions. A threshold CNR may be applied to longer-duration images to ensure lesion detectability in shorter-duration reconstructions. This method can aid in selecting lesions to include in novel data augmentation techniques for deep learning.
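The CNR definition and the quadratic CNR-vs-time model can be sketched as follows; the CNR formula is the standard one, while the ROI conventions and the numbers (chosen to echo the abstract's 6.9-at-5-min example) are illustrative.

```python
import numpy as np

def cnr(lesion, background):
    """Contrast-to-noise ratio: (mean lesion - mean background) / background SD.
    A standard definition; the paper's exact ROI conventions may differ."""
    return (lesion.mean() - background.mean()) / background.std()

# CNR measured for one lesion at several reconstruction durations (minutes)
t = np.array([2.0, 3.0, 4.0, 5.0])
measured = np.array([5.1, 5.9, 6.5, 6.9])

coeffs = np.polyfit(t, measured, deg=2)   # quadratic CNR-vs-time model
predict = np.poly1d(coeffs)
print(predict(2.0))   # predicted short-duration CNR from the fitted model
```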

13.
IEEE Trans Med Imaging ; 42(10): 3117-3126, 2023 10.
Article in English | MEDLINE | ID: mdl-37216247

ABSTRACT

Image segmentation, labeling, and landmark detection are essential tasks for pediatric craniofacial evaluation. Although deep neural networks have been recently adopted to segment cranial bones and locate cranial landmarks from computed tomography (CT) or magnetic resonance (MR) images, they may be hard to train and provide suboptimal results in some applications. First, they seldom leverage global contextual information that can improve object detection performance. Second, most methods rely on multi-stage algorithm designs that are inefficient and prone to error accumulation. Third, existing methods often target simple segmentation tasks and have shown low reliability in more challenging scenarios such as multiple cranial bone labeling in highly variable pediatric datasets. In this paper, we present a novel end-to-end neural network architecture based on DenseNet that incorporates context regularization to jointly label cranial bone plates and detect cranial base landmarks from CT images. Specifically, we designed a context-encoding module that encodes global context information as landmark displacement vector maps and uses it to guide feature learning for both bone labeling and landmark identification. We evaluated our model on a highly diverse pediatric CT image dataset of 274 normative subjects and 239 patients with craniosynostosis (age 0.63 ± 0.54 years, range 0-2 years). Our experiments demonstrate improved performance compared to state-of-the-art approaches.
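The idea of encoding global context as landmark displacement vector maps can be sketched as follows; the exact encoding and map layout are assumptions, not the paper's design.

```python
import numpy as np

def landmark_displacement_maps(shape, landmarks):
    """Encode global context: each pixel stores its (dy, dx) offset to
    every landmark, yielding dense displacement vector maps that can
    guide feature learning for labeling and landmark detection."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    maps = [np.stack([ly - yy, lx - xx], axis=0) for (ly, lx) in landmarks]
    return np.concatenate(maps, axis=0)   # (2 * n_landmarks, H, W)

# Two hypothetical cranial base landmarks on a 128x128 slice
vec = landmark_displacement_maps((128, 128), [(40, 50), (90, 70)])
```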


Subjects
Image Processing, Computer-Assisted ; Tomography, X-Ray Computed ; Humans ; Child ; Infant, Newborn ; Infant ; Child, Preschool ; Image Processing, Computer-Assisted/methods ; Reproducibility of Results ; Tomography, X-Ray Computed/methods ; Neural Networks, Computer ; Algorithms
14.
PLoS One ; 18(4): e0284563, 2023.
Article in English | MEDLINE | ID: mdl-37083575

ABSTRACT

Network approaches have successfully been used to help reveal complex mechanisms of diseases, including Chronic Obstructive Pulmonary Disease (COPD). However, despite recent advances, we remain limited in our ability to incorporate protein-protein interaction (PPI) network information with omics data for disease prediction. New deep learning methods, including the convolutional graph neural network (ConvGNN), have shown great potential for disease classification using transcriptomics data and known PPI networks from existing databases. In this study, we first reconstructed the COPD-associated PPI network with the AhGlasso (Augmented High-Dimensional Graphical Lasso Method) algorithm based on an independent transcriptomics dataset including COPD cases and controls. We then extended existing ConvGNN methods to integrate the COPD-associated PPI, proteomics, and transcriptomics data and developed a prediction model for COPD classification. This approach improves accuracy over several conventional classification methods and over neural networks that do not incorporate network information. We also demonstrated that the updated COPD-associated network developed using AhGlasso further improves prediction accuracy. Although deep neural networks often achieve superior statistical power in classification compared to other methods, it can be very difficult to explain how such a model, especially a graph neural network, makes decisions on the given features and to identify the features that contribute most to predictions, both overall and for individual samples. To better explain how the spectral-based graph neural network works, we applied a unified explainable machine learning method, SHapley Additive exPlanations (SHAP), and identified CXCL11, IL-2, CD48, KIR3DL2, TLR2, BMP10, and several other relevant COPD genes in subnetworks of the ConvGNN model for COPD prediction. Finally, Gene Ontology (GO) enrichment analysis identified glycosaminoglycan, heparin signaling, and carbohydrate derivative signaling pathways as significantly enriched in the top important genes/proteins for COPD classification.
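A model-agnostic sketch of the SHAP attribution step. The study explains a ConvGNN; here a random forest on synthetic features stands in, since shap's KernelExplainer only needs a prediction function, and all feature names and labels are invented.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the trained ConvGNN: any f(X) -> class probabilities works
# with the model-agnostic KernelExplainer used here.
X = np.random.rand(100, 8)                    # 8 toy gene/protein features
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)     # synthetic case/control labels
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.KernelExplainer(model.predict_proba, X[:20])
shap_values = explainer.shap_values(X[:5])    # per-feature attributions
# Ranking mean |SHAP| across samples surfaces the most influential
# features, analogous to how CXCL11, IL-2, etc. were identified.
```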


Subjects
Deep Learning ; Pulmonary Disease, Chronic Obstructive ; Humans ; Multiomics ; Neural Networks, Computer ; Algorithms ; Pulmonary Disease, Chronic Obstructive/genetics ; Bone Morphogenetic Proteins
15.
Med Image Comput Comput Assist Interv ; 13437: 639-649, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36383499

ABSTRACT

Due to domain shifts, deep cell/nucleus detection models trained on one microscopy image dataset might not be applicable to other datasets acquired with different imaging modalities. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently been exploited to close domain gaps and has achieved excellent nucleus detection performance. However, current GAN-based UDA model training often requires a large amount of unannotated target data, which may be prohibitively expensive to obtain in real practice. Additionally, these methods have significant performance degradation when using limited target training data. In this paper, we study a more realistic yet challenging UDA scenario, where (unannotated) target training data is very scarce, a low-resource case rarely explored for nucleus detection in previous work. Specifically, we augment a dual GAN network by leveraging a task-specific model to supplement the target-domain discriminator and facilitate generator learning with limited data. The task model is constrained by cross-domain prediction consistency to encourage semantic content preservation for image-to-image translation. Next, we incorporate a stochastic, differentiable data augmentation module into the task-augmented GAN network to further improve model training by alleviating discriminator overfitting. This data augmentation module is a plug-and-play component, requiring no modification of network architectures or loss functions. We evaluate the proposed low-resource UDA method for nucleus detection on multiple public cross-modality microscopy image datasets. With a single training image in the target domain, our method significantly outperforms recent state-of-the-art UDA approaches and delivers very competitive or superior performance over fully supervised models trained with real labeled target data.
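A DiffAugment-style sketch of a stochastic, differentiable augmentation module applied before the discriminator; the paper's exact transform set is not reproduced here, and the brightness/contrast jitter below is an assumption.

```python
import torch

def diff_augment(x):
    """Stochastic, differentiable augmentation applied to both real and
    generated images before the discriminator, so the generator still
    receives gradients through the augmented samples."""
    # random per-sample brightness shift, differentiable w.r.t. x
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)
    # random contrast jitter around each sample's mean
    mean = x.mean(dim=[1, 2, 3], keepdim=True)
    scale = torch.rand(x.size(0), 1, 1, 1, device=x.device) * 0.5 + 0.75
    return (x - mean) * scale + mean

fake = torch.randn(4, 3, 64, 64, requires_grad=True)
d_in = diff_augment(fake)      # gradients still flow back to the generator
d_in.sum().backward()          # works: the augmentation is differentiable
```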

16.
J Med Imaging (Bellingham) ; 9(2): 026001, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35274026

ABSTRACT

Purpose: An open question in deep clustering is how to explain what in the image is driving the cluster assignments. This is especially important for applications in medical imaging, where the derived cluster assignments may inform decision-making or create new disease subtypes. We develop cluster activation mapping (CLAM), a methodology for creating localization maps that highlight the image regions important for cluster assignment. Approach: Our approach uses a linear combination of the activation channels from the last layer of the encoder within a pretrained autoencoder. The activation channels are weighted by a channel-wise confidence measure, which is a modification of score-CAM. Results: Our approach performs well in medical imaging-based simulation experiments in which the image clusters differ by size, location, and intensity of abnormalities. Under simulation, cluster assignments were predicted with 100% accuracy when the number of clusters was set at the true value. In addition, applied to computed tomography scans from a sarcoidosis population, CLAM identified two subtypes of sarcoidosis based purely on CT presentation, which were significantly associated with pulmonary function tests and visual assessment scores such as ground-glass, fibrosis, and honeycombing. Conclusions: CLAM is a transparent methodology for identifying explainable groupings of medical imaging data. As deep learning networks are often criticized and distrusted for their lack of interpretability, our contribution of CLAM to deep clustering architectures is critical to understanding cluster assignments, which can ultimately lead to new subtypes of diseases.
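A simplified sketch of the CLAM weighting: each last-layer activation channel receives a score-CAM-style confidence, and the channels are linearly combined. CLAM uses a modified confidence measure; this stand-in scores each channel by its masked-input response, and the toy encoder is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.MaxPool2d(4))      # toy "pretrained" encoder

def clam_map(x):
    """Weight each activation channel by how strongly the channel-masked
    input re-activates that channel, then sum the weighted channels."""
    acts = encoder(x)[0]                      # (C, h, w)
    conf = torch.zeros(acts.shape[0])
    for c in range(acts.shape[0]):
        m = F.interpolate(acts[c][None, None], size=x.shape[-2:],
                          mode="bilinear", align_corners=False)
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)   # [0, 1] mask
        conf[c] = encoder(x * m)[0, c].mean()            # channel confidence
    cam = torch.einsum("c,chw->hw", torch.softmax(conf, 0), acts)
    return torch.relu(cam)

heat = clam_map(torch.randn(1, 1, 64, 64))    # cluster-evidence map
```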

17.
Cancers (Basel) ; 14(10)2022 May 13.
Article in English | MEDLINE | ID: mdl-35626003

ABSTRACT

Identifying the progression of chronic lymphocytic leukemia (CLL) to accelerated CLL (aCLL) or transformation to diffuse large B-cell lymphoma (Richter transformation; RT) has significant clinical implications, as it prompts a major change in patient management. However, differentiating between these disease phases may be challenging in routine practice. Unsupervised learning has gained increased attention because of its substantial potential for discovering intrinsic patterns in data. Here, we demonstrate that cellular feature engineering, identifying cellular phenotypes via unsupervised clustering, provides the most robust analytic performance in analyzing digitized pathology slides (accuracy = 0.925, AUC = 0.978) when compared to alternative approaches such as mixed features, supervised features, unsupervised/mixed/supervised feature fusion and selection, and patch-based convolutional neural network (CNN) feature extraction. We further validate the reproducibility and robustness of unsupervised feature extraction via stability and repeated-splitting analyses, supporting its utility as a diagnostic aid for identifying CLL patients with histologic evidence of disease progression. This study serves as a proof of principle that an unsupervised machine learning scheme can enhance diagnostic accuracy for heterogeneous histology patterns that pathologists might not easily see.
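The phenotype-discovery step can be sketched with off-the-shelf clustering; the cluster count, feature dimensions, and patch sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy per-cell morphology/intensity features extracted from slide patches
rng = np.random.default_rng(0)
cell_features = rng.random((5000, 12))

# Unsupervised phenotype discovery: cluster the cells, then describe each
# slide by its phenotype composition (a histogram over the clusters).
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(cell_features)
phenotypes = kmeans.labels_

slide_profile = np.bincount(phenotypes[:400], minlength=8) / 400
# slide_profile becomes the engineered feature vector fed to the
# downstream CLL / aCLL / RT classifier.
```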

18.
JACC Adv ; 1(5): 100153, 2022 Dec.
Article in English | MEDLINE | ID: mdl-38939457

ABSTRACT

The current era of big data offers a wealth of new opportunities for clinicians to leverage artificial intelligence to optimize care for pediatric and adult patients with congenital heart disease. At present, artificial intelligence is significantly underutilized in the clinical setting for the diagnosis, prognosis, and management of congenital heart disease patients. This document is a call to action: it describes the current state of artificial intelligence in congenital heart disease, reviews challenges, discusses opportunities, and focuses on the top priorities for artificial intelligence-based deployment in congenital heart disease.

19.
EJNMMI Res ; 11(1): 98, 2021 Oct 02.
Article in English | MEDLINE | ID: mdl-34601660

ABSTRACT

BACKGROUND: Gastroenteropancreatic neuroendocrine tumors most commonly metastasize to the liver; however, high normal background 68Ga-DOTATATE activity and high image noise make metastatic lesions difficult to detect. The purpose of this study is to develop a rapid, automated, and highly specific method to identify 68Ga-DOTATATE PET/CT hepatic lesions using a 2D U-Net convolutional neural network. METHODS: A retrospective set of 68Ga-DOTATATE PET/CT patient studies (n = 125; 57 with 68Ga-DOTATATE hepatic lesions and 68 without) was evaluated. The dataset was randomly divided into 75 studies for the training set (36 abnormal, 39 normal), 25 for the validation set (11 abnormal, 14 normal), and 25 for the testing set (11 abnormal, 14 normal). Hepatic lesions were physician-annotated using a modified PERCIST threshold, with boundary definition by gradient edge detection. The 2D U-Net was trained independently five times for 100,000 iterations using a linear combination of binary cross-entropy and dice losses with a stochastic gradient descent algorithm. Performance metrics included positive predictive value (PPV), sensitivity, F1 score, and area under the precision-recall curve (PR-AUC). Five different pixel-area thresholds were used to filter noisy predictions. RESULTS: A total of 233 lesions were annotated, with each abnormal study containing a mean of 4 ± 2.75 lesions. A pixel filter of 20 produced the highest mean PPV (0.94 ± 0.01). A pixel filter of 5 produced the highest mean sensitivity (0.74 ± 0.02). The highest mean F1 score (0.79 ± 0.01) was produced with a 20-pixel filter. The highest mean PR-AUC (0.73 ± 0.03) was produced with a 15-pixel filter. CONCLUSION: Deep neural networks can automatically detect hepatic lesions in 68Ga-DOTATATE PET. Ongoing improvements in data annotation methods, increasing sample sizes, and training methods are anticipated to further improve detection performance.
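A sketch of the linear binary cross-entropy plus dice training objective described; the mixing weight alpha is an assumption.

```python
import torch

def bce_dice_loss(logits, target, alpha=0.5, eps=1e-6):
    """Linear combination of binary cross-entropy and dice losses of the
    kind used to train the 2D U-Net (alpha is an assumed mixing weight)."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    dice = 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)
    return alpha * bce + (1 - alpha) * dice

logits = torch.randn(2, 1, 128, 128, requires_grad=True)
target = (torch.rand(2, 1, 128, 128) > 0.9).float()  # sparse lesion mask
loss = bce_dice_loss(logits, target)
loss.backward()   # usable directly with an SGD optimizer step
```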

20.
IEEE Trans Med Imaging ; 40(10): 2880-2896, 2021 10.
Article in English | MEDLINE | ID: mdl-33284750

ABSTRACT

Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance by using deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model re-learning, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning situation and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. In addition, our semi-supervised method is very competitive with recent fully supervised learning models trained with all real target training labels.


Subjects
Microscopy ; Neural Networks, Computer ; Image Processing, Computer-Assisted ; Supervised Machine Learning