Results 1 - 8 of 8
1.
Mod Pathol; 34(2): 478-489, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32884130

ABSTRACT

Phosphatase and tensin homolog (PTEN) loss is associated with adverse outcomes in prostate cancer and has clinical potential as a prognostic biomarker. The objective of this work was to develop an artificial intelligence (AI) system for automated detection and localization of PTEN loss on immunohistochemically (IHC) stained sections. PTEN loss was assessed using IHC in two prostate tissue microarrays (TMA) (internal cohort, n = 272 and external cohort, n = 129 patients). TMA cores were visually scored for PTEN loss by pathologists and, if present, spatially annotated. Cores from each patient within the internal TMA cohort were split into 90% cross-validation (N = 2048) and 10% hold-out testing (N = 224) sets. A ResNet-101 architecture was used to train core-based classification using a multi-resolution ensemble approach (×5, ×10, and ×20). For spatial annotations, single-resolution pixel-based classification was trained from patches extracted at ×20 resolution, interpolated to ×40 resolution, and applied in a sliding-window fashion. A final AI-based prediction model was created by combining the multi-resolution and pixel-based models. Performance was evaluated in 428 cores of the external cohort. Across both cohorts, a total of 2700 cores were studied, with a PTEN-loss frequency of 14.5% (180/1239) in internal and 13.5% (43/319) in external cancer cores. The final AI-based prediction of PTEN status demonstrated 98.1% accuracy (95.0% sensitivity, 98.4% specificity; median Dice score = 0.811) in the internal cohort cross-validation set and 99.1% accuracy (100% sensitivity, 99.0% specificity; median Dice score = 0.804) in the internal cohort test set. Core-based classification in the external cohort improved significantly (area under the curve = 0.964, 90.6% sensitivity, 95.7% specificity) when the model was further trained (fine-tuned) using 15% of that cohort's data (19/124 patients).
These results demonstrate a robust and fully automated method for detection and localization of PTEN loss in prostate cancer tissue samples. AI-based algorithms have potential to streamline sample assessment in research and clinical laboratories.
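The final prediction described above fuses per-magnification classifier outputs. A minimal sketch of such a multi-resolution ensemble, assuming equal-weight averaging of probabilities and a 0.5 decision threshold (the function names and fusion rule are illustrative assumptions, not the authors' exact method):

```python
def ensemble_core_score(scores_by_resolution):
    """Average per-resolution probabilities of PTEN loss for one TMA core.

    scores_by_resolution: dict mapping magnification label -> probability in [0, 1].
    """
    probs = list(scores_by_resolution.values())
    return sum(probs) / len(probs)

def classify_core(scores_by_resolution, threshold=0.5):
    """Label a core as PTEN loss when the averaged ensemble score crosses the threshold."""
    score = ensemble_core_score(scores_by_resolution)
    return "PTEN loss" if score >= threshold else "PTEN intact"
```

Averaging across magnifications lets coarse context (×5) and fine cytologic detail (×20) vote jointly on the core-level label.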


Subjects
Biomarkers, Tumor/analysis, Deep Learning, PTEN Phosphohydrolase/analysis, Prostatic Neoplasms, Algorithms, Cohort Studies, High-Throughput Nucleotide Sequencing/methods, Humans, Image Processing, Computer-Assisted/methods, Male, Tissue Array Analysis
2.
Eur J Radiol; 170: 111259, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38128256

ABSTRACT

PURPOSE: To evaluate CycleGAN's ability to enhance T2-weighted image (T2WI) quality. METHOD: A CycleGAN algorithm was used to enhance T2WI quality. 96 patients (192 scans) were identified from patients who underwent multiple axial T2WI acquisitions due to poor quality on the first attempt (RAD1) and improved quality on re-acquisition (RAD2). The framework produced DL classifier scores (0-1) for quality quantification and generated enhanced versions, QI1 and QI2, from RAD1 and RAD2, respectively. A subset (n = 20 patients) was selected for a blinded, multi-reader study in which four radiologists rated T2WI quality on a scale of 1-4. The multi-reader study presented readers with 60 image pairs (RAD1 vs RAD2, RAD1 vs QI1, and RAD2 vs QI2), allowing sequence preferences to be selected and quality changes to be quantified. RESULTS: The DL classifier correctly discerned 71.9% of quality classes, identifying 90.6% (96/106) of sequences as poor quality and 48.8% (42/86) as diagnostic in the original sequences (RAD1, RAD2). CycleGAN images (QI1, QI2) demonstrated quantitative improvements, with consistently higher DL classifier scores than the original scans (p < 0.001). In the multi-reader analysis, however, CycleGAN demonstrated no qualitative improvement: compared to RAD2, QI2 showed diminished overall quality and more motion in most patients, with noise levels remaining similar (8/20). No readers preferred QI2 over RAD2 for diagnosis. CONCLUSION: Despite quantitative enhancements with CycleGAN, there was no qualitative improvement in T2WI diagnostic quality, noise, or motion. Expert radiologists did not favor CycleGAN images over standard scans, highlighting the divide between quantitative and qualitative metrics.
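CycleGAN's central training constraint is cycle consistency: mapping an image to the other domain and back should reproduce the original. A minimal sketch of that loss, with hypothetical forward/backward mappings `G` and `F` standing in for the trained generators and flat vectors standing in for images:

```python
def l1_distance(a, b):
    """Mean absolute difference between two equally sized vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def cycle_consistency_loss(x, G, F):
    """Penalty incurred when the round trip F(G(x)) fails to reconstruct x."""
    return l1_distance(F(G(x)), x)
```

A perfect inverse pair gives zero loss; any mismatch between the generators shows up as a positive L1 penalty, which is what allows CycleGAN to train without paired low-/high-quality scans.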


Subjects
Deep Learning, Humans, Image Processing, Computer-Assisted/methods, Algorithms, Magnetic Resonance Imaging/methods
3.
Abdom Radiol (NY); 47(4): 1425-1434, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35099572

ABSTRACT

PURPOSE: To present a fully automated DL-based prostate cancer detection system for prostate MRI. METHODS: MRI scans from two institutions were used for algorithm training, validation, and testing. MRI-visible lesions were contoured by an experienced radiologist. All lesions were biopsied using MRI-TRUS guidance. Lesion masks and histopathological results were used as ground-truth labels to train UNet and AH-Net architectures for prostate cancer lesion detection and segmentation. The algorithm was trained to detect any prostate cancer ≥ ISUP grade 1. Detection sensitivity, positive predictive value (PPV), and mean number of false positive lesions per patient were used as performance metrics. RESULTS: 525 patients were included for training, validation, and testing of the algorithm. The dataset was split into training (n = 368, 70%), validation (n = 79, 15%), and test (n = 78, 15%) cohorts. Dice coefficients in the training and validation sets were 0.403 and 0.307, respectively, for the AH-Net model, compared to 0.372 and 0.287, respectively, for the UNet model. In the validation set, detection sensitivity was 70.9%, PPV was 35.5%, and the mean number of false positive lesions per patient was 1.41 (range 0-6) for the UNet model, compared to 74.4% detection sensitivity, 47.8% PPV, and 0.87 (range 0-5) false positive lesions per patient for the AH-Net model. In the test set, detection sensitivity for UNet was 72.8% compared to 63.0% for AH-Net, and the mean number of false positive lesions per patient was 1.90 (range 0-7) and 1.40 (range 0-6) for the UNet and AH-Net models, respectively. CONCLUSION: We developed a DL-based AI approach that predicts prostate cancer lesions on biparametric MRI with reasonable performance metrics. While false positive lesion calls remain a challenge for AI-assisted detection algorithms, this system can be utilized as an adjunct tool by radiologists.
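The Dice coefficients reported above measure overlap between predicted and ground-truth lesion masks. A minimal sketch over flattened binary masks (illustrative only; the convention of returning 1.0 for two empty masks is an assumption):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as flattened 0/1 lists."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    # Two empty masks are treated as a perfect match by convention.
    return 2.0 * intersection / denom if denom else 1.0
```

Dice weights overlap against the combined mask sizes, so the low-to-mid 0.3–0.4 values above indicate coarse localization rather than precise boundary agreement.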


Subjects
Deep Learning, Prostatic Neoplasms, Artificial Intelligence, Humans, Magnetic Resonance Imaging/methods, Male, Prostate/pathology, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology
4.
Acad Radiol; 29(8): 1159-1168, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34598869

ABSTRACT

RATIONALE AND OBJECTIVES: Prostate MRI improves detection of clinically significant prostate cancer; however, its diagnostic performance varies widely. Artificial intelligence (AI) has the potential to assist radiologists in the detection and classification of prostatic lesions. Herein, we aimed to develop and test a cascaded deep learning detection and classification system, trained on biparametric prostate MRI using PI-RADS, for assisting radiologists during prostate MRI readout. MATERIALS AND METHODS: T2-weighted and diffusion-weighted (ADC maps, high-b-value DWI) MRI scans obtained at 3 Tesla from two institutions (n = 1043 in-house and n = 347 Prostate-X, respectively), acquired between 2015 and 2019, were used for model training, validation, and testing. All scans were retrospectively reevaluated by one radiologist. Suspicious lesions were contoured and assigned a PI-RADS category. A 3D U-Net-based deep neural network was used to train an algorithm for automated detection and segmentation of prostate MRI lesions. Two 3D residual neural networks were used for a 4-class classification task predicting PI-RADS categories 2 to 5 and BPH. Training and validation used 89% of the data (n = 1290 scans) with 5-fold cross-validation; the remaining 11% (n = 150 scans) were used for independent testing. Algorithm performance at the lesion level was assessed using sensitivity, positive predictive value (PPV), false discovery rate (FDR), classification accuracy, and Dice similarity coefficient (DSC). An additional analysis compared the AI algorithm's lesion detection performance with targeted biopsy results. RESULTS: In the in-house cohort, median age was 66 years (IQR = 60-71) and median PSA was 6.7 ng/ml (IQR = 4.7-9.9). In the independent test set, the algorithm correctly detected 111 of 198 lesions, yielding 56.1% (49.3%-62.6%) sensitivity. PPV was 62.7% (95% CI 54.7%-70.7%) with an FDR of 37.3% (95% CI 29.3%-45.3%).
Of 79 true positive lesions, 82.3% were tumor positive at targeted biopsy, whereas of 57 false negative lesions, 50.9% were benign at targeted biopsy. Median DSC for lesion segmentation was 0.359. Overall PI-RADS classification accuracy was 30.8% (95% CI 24.6%-37.8%). CONCLUSION: Our cascaded U-Net and residual network architecture can detect and classify cancer-suspicious lesions at prostate MRI with good detection and reasonable classification performance.
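The lesion-level metrics above follow directly from true-positive, false-positive, and false-negative counts; e.g. 111 detected of 198 lesions gives the reported 56.1% sensitivity. A minimal sketch (the false-positive count of 66 used in the example is inferred from the reported 62.7% PPV, not stated in the abstract):

```python
def detection_metrics(tp, fp, fn):
    """Lesion-level sensitivity, positive predictive value, and false discovery rate."""
    sensitivity = tp / (tp + fn)   # detected lesions over all true lesions
    ppv = tp / (tp + fp)           # correct calls over all calls made
    return {"sensitivity": sensitivity, "ppv": ppv, "fdr": 1.0 - ppv}
```

Note that FDR is just the complement of PPV, which is why the abstract's 62.7% and 37.3% sum to 100%.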


Subjects
Deep Learning, Prostatic Neoplasms, Aged, Algorithms, Artificial Intelligence, Humans, Magnetic Resonance Imaging, Male, Prostate/diagnostic imaging, Prostate/pathology, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Retrospective Studies
5.
J Med Imaging (Bellingham); 8(1): 010901, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33426151

ABSTRACT

Purpose: Deep learning has achieved major breakthroughs during the past decade in almost every field. There are plenty of publicly available algorithms, each designed to address a different computer vision task. However, most of these algorithms cannot be directly applied to images in the medical domain. Herein, we focus on the preprocessing steps that should be applied to medical images before they are fed to deep neural networks. Approach: To employ publicly available algorithms for clinical purposes, we must build a meaningful pixel/voxel representation from medical images that facilitates the learning process. Based on the ultimate goal expected from an algorithm (classification, detection, or segmentation), one may infer the preprocessing steps that can ideally improve its performance. The required preprocessing steps for computed tomography (CT) and magnetic resonance (MR) images are discussed in detail, in their correct order. We further support our discussion with relevant experiments investigating the efficiency of the listed preprocessing steps. Results: Our experiments confirmed that applying appropriate image preprocessing in the right order can improve the performance of deep neural networks in terms of better classification and segmentation. Conclusions: This work investigates the appropriate preprocessing steps for CT and MR images of prostate cancer patients, supported by several experiments, and can be useful for educating those new to the field (https://github.com/NIH-MIP/Radiology_Image_Preprocessing_for_Deep_Learning).
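As a concrete example of one such preprocessing step, CT intensities in Hounsfield units are commonly windowed and rescaled to [0, 1] before network input. A minimal sketch, assuming a typical soft-tissue window (the center/width defaults are common conventions, not values prescribed by the paper):

```python
def window_ct(hu_values, center=40.0, width=400.0):
    """Clip CT Hounsfield units to a display window and rescale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    clipped = [min(max(v, lo), hi) for v in hu_values]
    return [(v - lo) / (hi - lo) for v in clipped]
```

Windowing discards irrelevant intensity extremes (air, dense bone) so the network's input range is dominated by the tissue contrast that matters for the task.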

6.
IEEE Access; 9: 87531-87542, 2021.
Article in English | MEDLINE | ID: mdl-34733603

ABSTRACT

In this study, we formulated an efficient deep learning-based classification strategy for characterizing metastatic bone lesions using computed tomography (CT) scans of prostate cancer patients. For this purpose, 2,880 annotated bone lesions from CT scans of 114 patients diagnosed with prostate cancer were used for training, validation, and final evaluation. The annotations comprised full lesion segmentations, lesion types, and benign/malignant labels. In this work, we present our approach to developing a state-of-the-art model for classifying bone lesions as benign or malignant, in which (1) we introduce a valuable dataset addressing a clinically important problem; (2) we increase the reliability of our model through patient-level stratification of the dataset, following a lesion-aware distribution across the training, validation, and test splits; (3) we explore the impact of lesion texture, morphology, size, location, and volumetric information on classification performance; and (4) we investigate lesion classification using different algorithms, including lesion-based average 2D ResNet-50, lesion-based average 2D ResNeXt-50, 3D ResNet-18, and 3D ResNet-50, as well as an ensemble of the 2D ResNet-50 and 3D ResNet-18. We employed a 75%/12%/13% train/validation/test split, with several data augmentation methods applied to the training set to avoid overfitting and increase reliability. We achieved 92.2% accuracy for classifying benign vs. malignant bone lesions in the test set using an ensemble of the lesion-based average 2D ResNet-50 and the 3D ResNet-18, with texture, volumetric information, and morphology having the greatest discriminative power, in that order. To the best of our knowledge, this is the highest lesion-level accuracy reported to date on such a comprehensive dataset for this clinically important problem.
This level of classification performance in the early stages of metastasis development bodes well for clinical translation of this strategy.
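The winning ensemble combines a slice-wise 2D network (averaged across a lesion's slices) with a 3D network that sees the whole volume. One plausible fusion is a weighted average of the two probabilities; the equal weighting below is an illustrative assumption, not the paper's reported scheme:

```python
def slice_average(slice_probs):
    """Average per-slice malignancy probabilities from a 2D classifier."""
    return sum(slice_probs) / len(slice_probs)

def ensemble_score(slice_probs_2d, prob_3d, weight_2d=0.5):
    """Fuse the slice-averaged 2D score with the 3D network's score."""
    return weight_2d * slice_average(slice_probs_2d) + (1.0 - weight_2d) * prob_3d
```

Such 2D/3D fusion lets in-plane texture cues (which 2D networks capture well) complement the volumetric shape information only the 3D model sees.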

7.
IEEE Trans Med Imaging; 39(6): 2061-2075, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31905134

ABSTRACT

We propose a new method for instance-level microtubule (MT) tracking in time-lapse image series using recurrent attention. Our deep learning algorithm segments individual MTs in each frame, and segmentation results from successive frames are used to assign correspondences among MTs. This ultimately generates a distinct path trajectory for each MT through the frames, from which we estimate MT velocities. To validate the proposed technique, we conducted experiments on real and simulated data. Statistics derived from real time-lapse series of MT gliding assays were used to simulate realistic MT time-lapse image series; this simulated dataset served for pre-training and hyperparameter optimization of our network before training on the real data. Our experimental results show that the proposed supervised learning algorithm improves the precision of MT instance velocity estimation dramatically, to 71.3% from the baseline result of 29.3%. We also demonstrate how the inclusion of temporal information in our deep network reduces the false negative rate from 67.8% (baseline) to 28.7% (proposed). These findings are expected to help biologists characterize the spatial arrangement of MTs, specifically the effects of MT-MT interactions.
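The frame-to-frame correspondence step can be illustrated with greedy overlap matching between segmented instances, a simplified stand-in for the paper's method (the IoU threshold and greedy strategy are assumptions for illustration):

```python
def iou(a, b):
    """Intersection-over-union of two instances given as sets of pixel coordinates."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def match_instances(prev_frame, curr_frame, threshold=0.3):
    """Greedily link each previous-frame instance to its best-overlapping
    unclaimed current-frame instance, returning {prev_index: curr_index}."""
    matches, used = {}, set()
    for i, p in enumerate(prev_frame):
        best, best_iou = None, threshold
        for j, c in enumerate(curr_frame):
            if j in used:
                continue
            v = iou(p, c)
            if v > best_iou:
                best, best_iou = j, v
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches
```

Chaining these per-frame matches across the series yields the per-instance trajectories from which velocities can be estimated.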


Subjects
Algorithms, Microtubules
8.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 1624-1628, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33018306

ABSTRACT

Abdominal fat quantification is critical since multiple vital organs are located within this region. Although computed tomography (CT) is a highly sensitive modality for segmenting body fat, it involves ionizing radiation, which makes magnetic resonance imaging (MRI) a preferable alternative for this purpose. Additionally, the superior soft-tissue contrast of MRI could lead to more accurate results. Yet, segmenting fat in MRI scans is highly labor intensive. In this study, we propose a deep learning-based algorithm to automatically quantify fat tissue from MR images through cross-modality adaptation. Our method does not require supervised labeling of MR scans; instead, we utilize a cycle generative adversarial network (C-GAN) to construct a pipeline that transforms existing MR scans into equivalent synthetic CT (s-CT) images, in which fat segmentation is relatively easier due to the descriptive nature of Hounsfield units (HU) in CT images. The fat segmentation results for MRI scans were evaluated by an expert radiologist. Qualitative evaluation of our segmentation results shows average success scores of 3.80/5 and 4.54/5 for visceral and subcutaneous fat segmentation in MR images, respectively.
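The appeal of the synthetic-CT route is that adipose tissue occupies a well-characterized Hounsfield-unit band, so fat can be segmented on the s-CT by simple thresholding. A minimal sketch (the approximately -190 to -30 HU range is a commonly cited adipose-tissue band, an assumption here rather than a value from the paper):

```python
def fat_mask(hu_image, lo=-190, hi=-30):
    """Binary adipose-tissue mask from a 2D image of Hounsfield-unit values."""
    return [[lo <= v <= hi for v in row] for row in hu_image]
```

This is what makes the cross-modality pipeline attractive: once MR intensities are mapped into HU space, no learned segmenter or manual labels are needed for the fat itself.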


Subjects
Abdomen, Abdominal Cavity, Abdomen/diagnostic imaging, Adipose Tissue/diagnostic imaging, Magnetic Resonance Imaging, Tomography, X-Ray Computed