1.
Radiology; 311(2): e230750, 2024 May.
Article in English | MEDLINE | ID: mdl-38713024

ABSTRACT

Background Multiparametric MRI (mpMRI) improves prostate cancer (PCa) detection compared with systematic biopsy, but its interpretation is prone to interreader variation, which results in performance inconsistency. Artificial intelligence (AI) models can assist in mpMRI interpretation, but large training data sets and extensive model testing are required. Purpose To evaluate a biparametric MRI AI algorithm for intraprostatic lesion detection and segmentation and to compare its performance with radiologist readings and biopsy results. Materials and Methods This secondary analysis of a prospective registry included consecutive patients with suspected or known PCa who underwent mpMRI, US-guided systematic biopsy, or combined systematic and MRI/US fusion-guided biopsy between April 2019 and September 2022. All lesions were prospectively evaluated using Prostate Imaging Reporting and Data System version 2.1. The lesion- and participant-level performance of a previously developed cascaded deep learning algorithm was compared with histopathologic outcomes and radiologist readings using sensitivity, positive predictive value (PPV), and Dice similarity coefficient (DSC). Results A total of 658 male participants (median age, 67 years [IQR, 61-71 years]) with 1029 MRI-visible lesions were included. At histopathologic analysis, 45% (294 of 658) of participants had lesions of International Society of Urological Pathology (ISUP) grade group (GG) 2 or higher. The algorithm identified 96% (282 of 294; 95% CI: 94%, 98%) of all participants with clinically significant PCa, whereas the radiologist identified 98% (287 of 294; 95% CI: 96%, 99%; P = .23). The algorithm identified 84% (103 of 122), 96% (152 of 159), 96% (47 of 49), 95% (38 of 40), and 98% (45 of 46) of participants with ISUP GG 1, 2, 3, 4, and 5 lesions, respectively. 
In the lesion-level analysis using radiologist ground truth, the detection sensitivity was 55% (569 of 1029; 95% CI: 52%, 58%), and the PPV was 57% (535 of 934; 95% CI: 54%, 61%). The mean number of false-positive lesions per participant was 0.61 (range, 0-3). The lesion segmentation DSC was 0.29. Conclusion The AI algorithm detected cancer-suspicious lesions on biparametric MRI scans with a performance comparable to that of an experienced radiologist. Moreover, the algorithm reliably predicted clinically significant lesions at histopathologic examination. ClinicalTrials.gov Identifier: NCT03354416 © RSNA, 2024 Supplemental material is available for this article.
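The detection metrics reported in this abstract can be made concrete with a small sketch. Only the 569/1029 and 535/934 lesion counts come from the abstract; the `dice` helper and the toy masks are illustrative:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flat lists)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

# Reproduce the reported lesion-level numbers: 569 of 1029 radiologist
# lesions detected; 535 of 934 algorithm calls were true positives.
sensitivity = 569 / 1029   # detected lesions / ground-truth lesions
ppv = 535 / 934            # true-positive calls / all algorithm calls
print(round(sensitivity, 2), round(ppv, 2))  # -> 0.55 0.57
```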


Subjects
Deep Learning, Multiparametric Magnetic Resonance Imaging, Prostatic Neoplasms, Male, Humans, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Aged, Prospective Studies, Multiparametric Magnetic Resonance Imaging/methods, Middle Aged, Algorithms, Prostate/diagnostic imaging, Prostate/pathology, Image-Guided Biopsy/methods, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods
2.
Eur Radiol; 31(5): 3165-3176, 2021 May.
Article in English | MEDLINE | ID: mdl-33146796

ABSTRACT

OBJECTIVES: The early infection dynamics of patients with SARS-CoV-2 are not well understood. We aimed to investigate and characterize associations between clinical, laboratory, and imaging features of asymptomatic and pre-symptomatic patients with SARS-CoV-2. METHODS: Seventy-four patients with RT-PCR-proven SARS-CoV-2 infection were asymptomatic at presentation. All were retrospectively identified from 825 patients with chest CT scans and positive RT-PCR following exposure or travel risks in outbreak settings in Japan and China. CTs were obtained for every patient within a day of admission and were reviewed for infiltrate subtypes and percentage of lung involvement with assistance from a deep learning tool. Correlations of clinical, laboratory, and imaging features were analyzed and comparisons were performed using univariate and multivariate logistic regression. RESULTS: Forty-eight of 74 (65%) initially asymptomatic patients had CT infiltrates that pre-dated symptom onset by 3.8 days. The most common CT infiltrates were ground glass opacities (45/48; 94%) and consolidation (22/48; 46%). Patient body temperature (p < 0.01), CRP (p < 0.01), and KL-6 (p = 0.02) were associated with the presence of CT infiltrates. Infiltrate volume (p = 0.01), percent lung involvement (p = 0.01), and consolidation (p = 0.043) were associated with subsequent development of symptoms. CONCLUSIONS: COVID-19 CT infiltrates pre-dated symptoms in two-thirds of patients. Body temperature elevation and laboratory evaluations may identify asymptomatic patients with SARS-CoV-2 CT infiltrates at presentation, and the characteristics of CT infiltrates could help identify asymptomatic SARS-CoV-2 patients who subsequently develop symptoms. The role of chest CT in COVID-19 may be illuminated by a better understanding of CT infiltrates in patients with early disease or SARS-CoV-2 exposure. KEY POINTS: • Forty-eight of 74 (65%) pre-selected asymptomatic patients with SARS-CoV-2 had abnormal chest CT findings.
• CT infiltrates pre-dated symptom onset by 3.8 days (range 1-5). • KL-6, CRP, and elevated body temperature identified patients with CT infiltrates. Higher infiltrate volume, percent lung involvement, and pulmonary consolidation identified patients who developed symptoms.


Subjects
COVID-19, SARS-CoV-2, China/epidemiology, Disease Outbreaks, Humans, Japan, Retrospective Studies, Tomography, X-Ray Computed
3.
AJR Am J Roentgenol; 215(6): 1403-1410, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33052737

ABSTRACT

OBJECTIVE. Deep learning applications in radiology often suffer from overfitting, limiting generalization to external centers. The objective of this study was to develop a high-quality prostate segmentation model capable of maintaining a high degree of performance across multiple independent datasets using transfer learning and data augmentation. MATERIALS AND METHODS. A retrospective cohort of 648 patients who underwent prostate MRI between February 2015 and November 2018 at a single center was used for training and validation. A deep learning approach combining 2D and 3D architecture was used for training, which incorporated transfer learning. A data augmentation strategy was used that was specific to the deformations, intensity, and alterations in image quality seen on radiology images. Five independent datasets, four of which were from outside centers, were used for testing, which was conducted with and without fine-tuning of the original model. The Dice similarity coefficient was used to evaluate model performance. RESULTS. When prostate segmentation models utilizing transfer learning were applied to the internal validation cohort, the mean Dice similarity coefficient was 93.1 for whole prostate and 89.0 for transition zone segmentations. When the models were applied to multiple test set cohorts, the improvement in performance achieved using data augmentation alone was 2.2% for the whole prostate models and 3.0% for the transition zone segmentation models. However, the best test-set results were obtained with models fine-tuned on test center data with mean Dice similarity coefficients of 91.5 for whole prostate segmentation and 89.7 for transition zone segmentation. CONCLUSION. Transfer learning allowed for the development of a high-performing prostate segmentation model, and data augmentation and fine-tuning approaches improved performance of a prostate segmentation model when applied to datasets from external centers.
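As a rough illustration of the radiology-specific augmentation the abstract describes (deformation, intensity, and image-quality perturbations), here is a toy sketch; the transform ranges and the `augment` helper are illustrative, not the study's actual pipeline:

```python
import random

def augment(image, rng=None):
    """Toy stand-in for MRI-style augmentation: random intensity
    scaling/shifting plus a left-right flip standing in for a geometric
    deformation. Parameter ranges are illustrative only."""
    rng = rng or random.Random(0)
    scale = rng.uniform(0.9, 1.1)       # intensity scaling
    shift = rng.uniform(-0.05, 0.05)    # intensity shift
    out = [[px * scale + shift for px in row] for row in image]
    if rng.random() < 0.5:
        out = [row[::-1] for row in out]  # simple geometric transform
    return out

slice_2d = [[0.1, 0.5], [0.9, 0.3]]
aug = augment(slice_2d)
```

Augmenting with transforms that mimic the scanner's own variability (rather than generic photographic transforms) is the design choice the abstract highlights.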


Subjects
Magnetic Resonance Imaging, Pattern Recognition, Automated, Prostatic Neoplasms/diagnostic imaging, Datasets as Topic, Deep Learning, Humans, Male, Middle Aged, Retrospective Studies
5.
Opt Express; 22(12): 14871-84, 2014 Jun 16.
Article in English | MEDLINE | ID: mdl-24977582

ABSTRACT

We implemented graphics processing unit (GPU)-accelerated compressive sensing (CS) reconstruction for non-uniform-in-k-space spectral domain optical coherence tomography (SD OCT). The Kaiser-Bessel (KB) function and the Gaussian function were used independently as the convolution kernel in the gridding-based non-uniform fast Fourier transform (NUFFT) algorithm, with different oversampling ratios and kernel widths. Our implementation was compared with the GPU-accelerated modified non-uniform discrete Fourier transform (MNUDFT) matrix-based CS SD OCT and the GPU-accelerated fast Fourier transform (FFT)-based CS SD OCT. It achieved image quality comparable to the GPU-accelerated MNUDFT-based CS SD OCT while providing a more than fivefold speed enhancement. When compared with the GPU-accelerated FFT-based CS SD OCT, it showed lower background noise and smaller side lobes while eliminating the need for the cumbersome k-space grid filling and the k-linear calibration procedure. Finally, we demonstrated that with a conventional desktop computer architecture containing three GPUs, real-time B-mode imaging in excess of 30 fps can be obtained for the GPU-accelerated NUFFT-based CS SD OCT with a frame size of 2048 (axial) × 1000 (lateral).
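The gridding-based NUFFT at the core of this approach can be sketched in one dimension. This toy version uses only the Gaussian kernel option and a plain DFT, and omits the deapodization and oversampling that a practical implementation needs; all parameter names are illustrative:

```python
import cmath
import math

def gridding_nufft(samples, k_positions, n_grid, kernel_width=3, sigma=0.6):
    """Toy 1D gridding NUFFT: spread non-uniformly spaced k-space samples
    onto a uniform grid with a truncated Gaussian convolution kernel, then
    apply a uniform DFT (an FFT in a real implementation)."""
    grid = [0j] * n_grid
    for s, k in zip(samples, k_positions):
        center = int(round(k))
        for g in range(center - kernel_width, center + kernel_width + 1):
            w = math.exp(-((k - g) ** 2) / (2 * sigma ** 2))
            grid[g % n_grid] += w * s  # accumulate weighted contribution
    # Uniform DFT of the gridded data.
    return [sum(grid[n] * cmath.exp(-2j * math.pi * m * n / n_grid)
                for n in range(n_grid))
            for m in range(n_grid)]

# Non-uniformly spaced k-space samples of a toy spectrum:
a_line = gridding_nufft([1.0, 1.0, 1.0], [0.0, 1.3, 2.9], 8)
```

The kernel choice (KB vs. Gaussian) trades aliasing energy against kernel width, which is the oversampling-ratio/kernel-width comparison the abstract reports.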


Subjects
Algorithms, Computer Systems, Image Processing, Computer-Assisted/methods, Imaging, Three-Dimensional, Tomography, Optical Coherence/methods, Computer Graphics, Fourier Analysis, Software
6.
Opt Lett; 39(1): 76-9, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24365826

ABSTRACT

We developed and demonstrated real-time compressive sensing (CS) spectral domain optical coherence tomography (SD-OCT) B-mode imaging in excess of 70 fps. The system was implemented on a conventional desktop computer architecture containing three graphics processing units. This represents speed gains of 459 and 112 times over the best MATLAB- and C++-based CS implementations, respectively, and shows that real-time CS SD-OCT imaging can finally be realized.


Subjects
Tomography, Optical Coherence/methods, Algorithms, Image Processing, Computer-Assisted, Time Factors
7.
J Opt Soc Am A Opt Image Sci Vis; 31(9): 2064-9, 2014 Sep 01.
Article in English | MEDLINE | ID: mdl-25401447

ABSTRACT

In this work, we propose a novel dispersion compensation method that enables real-time compressive sensing (CS) spectral domain optical coherence tomography (SD OCT) image reconstruction. We show that dispersion compensation can be incorporated into CS SD OCT by multiplying the undersampled spectral data by dispersion-correcting phase terms before CS reconstruction. High-quality SD OCT imaging with dispersion compensation was demonstrated at a speed in excess of 70 frames per second using 40% of the spectral measurements required by the well-known Shannon/Nyquist sampling theorem. The data processing and image display were performed on a conventional workstation with three graphics processing units.
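The key step, multiplying the undersampled spectral samples by a dispersion-correcting phase before CS reconstruction, can be sketched as follows. The second- and third-order coefficients `a2` and `a3` are placeholders that would be calibrated for a given system:

```python
import cmath

def compensate_dispersion(spectrum, k, a2=1e-4, a3=0.0):
    """Multiply each (possibly undersampled) spectral sample by a
    dispersion-correcting phase term exp(-i(a2*(k-k0)^2 + a3*(k-k0)^3))
    before CS reconstruction. Coefficients are illustrative."""
    k0 = sum(k) / len(k)  # center wavenumber
    return [s * cmath.exp(-1j * (a2 * (ki - k0) ** 2 + a3 * (ki - k0) ** 3))
            for s, ki in zip(spectrum, k)]

corrected = compensate_dispersion([1 + 0j, 0.5 + 0j], [0.0, 1.0])
```

Because the correction is phase-only, sample magnitudes are preserved; only the phase curvature responsible for axial blurring is removed.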


Subjects
Algorithms, Image Processing, Computer-Assisted/methods, Tomography, Optical Coherence, Benchmarking, Citrus sinensis/cytology, Computer Graphics, Humans, Skin/cytology, Time Factors
8.
Med Image Anal; 95: 103207, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38776843

ABSTRACT

The lack of annotated datasets is a major bottleneck for training new task-specific supervised machine learning models, as manual annotation is extremely expensive and time-consuming. To address this problem, we present MONAI Label, a free and open-source framework that facilitates the development of applications based on artificial intelligence (AI) models that aim to reduce the time required to annotate radiology datasets. Through MONAI Label, researchers can develop AI annotation applications focusing on their domain of expertise. It allows researchers to readily deploy their apps as services, which can be made available to clinicians via their preferred user interface. Currently, MONAI Label readily supports locally installed (3D Slicer) and web-based (OHIF) frontends and offers two active learning strategies to facilitate and speed up the training of segmentation algorithms. MONAI Label allows researchers to make incremental improvements to their AI-based annotation applications by making them available to other researchers and clinicians alike. Additionally, MONAI Label provides sample AI-based interactive and non-interactive labeling applications that can be used directly off the shelf, plug-and-play, on any given dataset. Significantly reduced annotation times using the interactive model were observed on two public datasets.


Subjects
Artificial Intelligence, Imaging, Three-Dimensional, Humans, Imaging, Three-Dimensional/methods, Algorithms, Software
9.
Abdom Radiol (NY); 49(5): 1545-1556, 2024 May.
Article in English | MEDLINE | ID: mdl-38512516

ABSTRACT

OBJECTIVE: Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three different prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters. MATERIALS AND METHODS: A single-institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominal perineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole gland tumor burden, magnet strength, noted poor quality, and various scanners (outside/vendors). Final inclusion criteria required availability of an axial T2-weighted (T2W) sequence and a corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) a deep learning (DL)-based model, (2) a commercially available shape-based model, and (3) a federated DL-based model. The Dice Similarity Coefficient (DSC) was calculated against the expert segmentations. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed-effects (LMER) model. RESULTS: 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced DL models (p < 0.001) while signal factors influenced all (p < 0.001).
CONCLUSION: Factors affecting anatomical and signal conditions of the prostate gland can adversely impact both DL and non-deep learning-based segmentation models.


Subjects
Algorithms, Artificial Intelligence, Magnetic Resonance Imaging, Prostatic Neoplasms, Humans, Male, Retrospective Studies, Magnetic Resonance Imaging/methods, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/surgery, Prostatic Neoplasms/pathology, Image Interpretation, Computer-Assisted/methods, Middle Aged, Aged, Prostate/diagnostic imaging, Deep Learning
10.
Acad Radiol; 31(6): 2424-2433, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38262813

ABSTRACT

RATIONALE AND OBJECTIVES: Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care. However, it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for 1) bone lesion detection and segmentation and 2) benign vs. metastatic lesion classification on staging CTs and to compare its performance with radiologists. MATERIALS AND METHODS: This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. Segmentation AI (3DAISeg) was developed using the lesion contours delineated by a radiologist. 3DAISeg performance was evaluated with the Dice similarity coefficient, and classification AI (3DAIClass) performance on AI and radiologist contours was assessed with F1-score and accuracy. Training/validation/testing data partitions of 70:15:15 were used. A multi-reader study was performed with two junior and two senior radiologists within a subset of the testing dataset (n = 36). RESULTS: In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% of metastatic (266/364) and 72.4% of benign lesions (484/669). Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving a 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1 for 3DAIClass was 94.8% (radiologist contours) and 92.4% (3DAISeg contours), with a median false positive of 0 (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated approach AI 40.0% vs. Juniors 32.0% vs. Seniors 50.0%) and NPV (AI 96.2% vs. Juniors 95.7% vs. Seniors 91.9%). 
When using 3DAISeg, 3DAIClass mimicked junior radiologists in PPV (pure-AI 20.0% vs. Juniors 32.0% vs. Seniors 50.0%) but surpassed seniors in NPV (pure-AI 93.8% vs. Juniors 95.7% vs. Seniors 91.9%). CONCLUSION: Our lesion detection and classification AI model performs on par with junior and senior radiologists in discerning benign and metastatic lesions on staging CTs obtained for PCa.


Subjects
Bone Neoplasms, Deep Learning, Neoplasm Staging, Prostatic Neoplasms, Tomography, X-Ray Computed, Humans, Male, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Bone Neoplasms/diagnostic imaging, Bone Neoplasms/secondary, Retrospective Studies, Tomography, X-Ray Computed/methods, Aged, Middle Aged, Radiographic Image Interpretation, Computer-Assisted/methods
11.
IEEE Trans Med Imaging; 42(7): 2044-2056, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37021996

ABSTRACT

Federated learning (FL) allows the collaborative training of AI models without needing to share raw data. This capability makes it especially interesting for healthcare applications where patient and data privacy is of utmost concern. However, recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data. In this work, we show that these attacks presented in the literature are impractical in FL use-cases where the clients' training involves updating the Batch Normalization (BN) statistics and provide a new baseline attack that works for such scenarios. Furthermore, we present new ways to measure and visualize potential data leakage in FL. Our work is a step towards establishing reproducible methods of measuring data leakage in FL and could help determine the optimal tradeoffs between privacy-preserving techniques, such as differential privacy, and model accuracy based on quantifiable metrics.


Subjects
Neural Networks, Computer, Supervised Machine Learning, Humans, Privacy, Medical Informatics
12.
Med Image Anal; 88: 102833, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37267773

ABSTRACT

In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both the research and clinical context. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 in order to encourage the development of automatic segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability present in the network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between the submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm significantly outperformed the other submissions; it consisted of an asymmetric U-Net architecture. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.


Subjects
Image Processing, Computer-Assisted, White Matter, Pregnancy, Female, Humans, Image Processing, Computer-Assisted/methods, Brain/diagnostic imaging, Head, Fetus/diagnostic imaging, Algorithms, Magnetic Resonance Imaging/methods
13.
Opt Lett; 37(20): 4209-11, 2012 Oct 15.
Article in English | MEDLINE | ID: mdl-23073413

ABSTRACT

We study noise reduction using modified compressive sensing optical coherence tomography. We show that averaged modified compressed sensing (CS) reconstruction achieves better image quality in terms of signal-to-noise ratio, local contrast, and contrast-to-noise ratio than the classical averaging method, while reducing the total amount of data required to reconstruct the images. The same holds when compared with the standard CS-based averaging method using the same amount of undersampled data.
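The image-quality metrics used here, signal-to-noise ratio and contrast-to-noise ratio, can be computed with a short sketch; the region values are toy data, and the exact region definitions used in the study may differ:

```python
import statistics

def snr(signal_region, noise_region):
    """Signal-to-noise ratio: mean signal over noise standard deviation."""
    return statistics.mean(signal_region) / statistics.stdev(noise_region)

def cnr(region_a, region_b, noise_region):
    """Contrast-to-noise ratio between two regions, normalized by the
    background noise standard deviation."""
    contrast = abs(statistics.mean(region_a) - statistics.mean(region_b))
    return contrast / statistics.stdev(noise_region)

noise = [0.9, 1.1, 1.0, 0.8, 1.2]            # background pixel values
print(round(snr([10.0, 10.2, 9.8], noise), 1))  # -> 63.2
```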


Subjects
Image Enhancement/methods, Tomography, Optical Coherence/methods, Phantoms, Imaging
14.
Abdom Radiol (NY); 47(4): 1425-1434, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35099572

ABSTRACT

PURPOSE: To present a fully automated deep learning (DL)-based prostate cancer detection system for prostate MRI. METHODS: MRI scans from two institutions were used for algorithm training, validation, and testing. MRI-visible lesions were contoured by an experienced radiologist. All lesions were biopsied using MRI-TRUS guidance. Lesion masks and histopathological results were used as ground truth labels to train UNet and AH-Net architectures for prostate cancer lesion detection and segmentation. The algorithm was trained to detect any prostate cancer ≥ ISUP1. Detection sensitivity, positive predictive value (PPV), and mean number of false-positive lesions per patient were used as performance metrics. RESULTS: 525 patients were included for training, validation, and testing of the algorithm. The dataset was split into training (n = 368, 70%), validation (n = 79, 15%), and test (n = 78, 15%) cohorts. Dice coefficients in the training and validation sets were 0.403 and 0.307, respectively, for the AH-Net model, compared with 0.372 and 0.287 for the UNet model. In the validation set, detection sensitivity was 70.9%, PPV was 35.5%, and the mean number of false-positive lesions per patient was 1.41 (range 0-6) for the UNet model, compared with 74.4% detection sensitivity, 47.8% PPV, and 0.87 (range 0-5) false-positive lesions per patient for the AH-Net model. In the test set, detection sensitivity for UNet was 72.8% compared with 63.0% for AH-Net; the mean numbers of false-positive lesions per patient were 1.90 (range 0-7) and 1.40 (range 0-6) for the UNet and AH-Net models, respectively. CONCLUSION: We developed a DL-based AI approach that predicts prostate cancer lesions at biparametric MRI with reasonable performance metrics. While false-positive lesion calls remain a challenge for AI-assisted detection algorithms, this system can be utilized as an adjunct tool by radiologists.


Subjects
Deep Learning, Prostatic Neoplasms, Artificial Intelligence, Humans, Magnetic Resonance Imaging/methods, Male, Prostate/pathology, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology
15.
Acad Radiol; 29(8): 1159-1168, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34598869

ABSTRACT

RATIONALE AND OBJECTIVES: Prostate MRI improves detection of clinically significant prostate cancer; however, its diagnostic performance has wide variation. Artificial intelligence (AI) has the potential to assist radiologists in the detection and classification of prostatic lesions. Herein, we aimed to develop and test a cascaded deep learning detection and classification system trained on biparametric prostate MRI using PI-RADS to assist radiologists during prostate MRI readout. MATERIALS AND METHODS: T2-weighted and diffusion-weighted (ADC maps, high b value DWI) MRI scans obtained at 3 Tesla from two institutions (n = 1043 in-house and n = 347 Prostate-X, respectively), acquired between 2015 and 2019, were used for model training, validation, and testing. All scans were retrospectively reevaluated by one radiologist. Suspicious lesions were contoured and assigned a PI-RADS category. A 3D U-Net-based deep neural network was used to train an algorithm for automated detection and segmentation of prostate MRI lesions. Two 3D residual neural networks were used for a 4-class classification task to predict PI-RADS categories 2 to 5 and BPH. Training and validation used 89% (n = 1290 scans) of the data with 5-fold cross-validation; the remaining 11% (n = 150 scans) were used for independent testing. Algorithm performance at the lesion level was assessed using sensitivity, positive predictive value (PPV), false discovery rate (FDR), classification accuracy, and Dice similarity coefficient (DSC). Additional analysis was conducted to compare the AI algorithm's lesion detection performance with targeted biopsy results. RESULTS: Median age was 66 years (IQR, 60-71) and median PSA was 6.7 ng/mL (IQR, 4.7-9.9) in the in-house cohort. In the independent test set, the algorithm correctly detected 111 of 198 lesions, for a sensitivity of 56.1% (49.3%-62.6%). PPV was 62.7% (95% CI 54.7%-70.7%) with an FDR of 37.3% (95% CI 29.3%-45.3%).
Of 79 true positive lesions, 82.3% were tumor positive at targeted biopsy, whereas of 57 false negative lesions, 50.9% were benign at targeted biopsy. Median DSC for lesion segmentation was 0.359. Overall PI-RADS classification accuracy was 30.8% (95% CI 24.6%-37.8%). CONCLUSION: Our cascaded U-Net and residual network architecture can detect and classify cancer-suspicious lesions at prostate MRI with good detection and reasonable classification performance.


Subjects
Deep Learning, Prostatic Neoplasms, Aged, Algorithms, Artificial Intelligence, Humans, Magnetic Resonance Imaging, Male, Prostate/diagnostic imaging, Prostate/pathology, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Retrospective Studies
16.
Med Image Anal; 82: 102605, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36156419

ABSTRACT

Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A), and testing (n=23, source A; n=23, source B). There were 1096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge - 2020.


Subjects
COVID-19, Pandemics, Humans, COVID-19/diagnostic imaging, Artificial Intelligence, Tomography, X-Ray Computed/methods, Lung/diagnostic imaging
17.
IEEE Trans Med Imaging; 40(10): 2534-2547, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33373298

ABSTRACT

Active learning is a unique abstraction of machine learning techniques in which the model/algorithm guides users to annotate the data points that would be most beneficial to the model, unlike passive machine learning. The primary advantage is that active learning frameworks select data points that can accelerate the learning process of a model and can reduce the amount of data needed to achieve full accuracy compared to a model trained on a randomly acquired dataset. Multiple frameworks for active learning combined with deep learning have been proposed, and the majority of them are dedicated to classification tasks. Herein, we explore active learning for the task of segmentation of medical imaging datasets. We investigate our proposed framework using two datasets: 1) MRI scans of the hippocampus and 2) CT scans of the pancreas and tumors. This work presents a query-by-committee approach for active learning in which a joint optimizer is used for the committee. At the same time, we propose three new strategies for active learning: 1) increasing the frequency of uncertain data to bias the training dataset; 2) using mutual information among the input images as an acquisition regularizer to ensure diversity in the training dataset; and 3) adapting the Dice log-likelihood for Stein variational gradient descent (SVGD). The results indicate an improvement in terms of data reduction by achieving full accuracy while using only 22.69% and 48.85% of the available data for each dataset, respectively.
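The query-by-committee idea can be sketched as follows: rank unlabeled samples by how much the committee members disagree, and send the most contested ones for annotation. Function names and the toy committee are illustrative, not the paper's implementation:

```python
def disagreement(predictions):
    """Committee disagreement for one unlabeled sample, measured as the
    variance of the members' predicted probabilities."""
    mean = sum(predictions) / len(predictions)
    return sum((p - mean) ** 2 for p in predictions) / len(predictions)

def select_for_annotation(pool, committee, budget):
    """Query-by-committee selection: score each unlabeled sample by
    committee disagreement and return indices of the `budget` most
    uncertain samples."""
    scored = sorted(
        ((disagreement([model(x) for model in committee]), i)
         for i, x in enumerate(pool)),
        reverse=True)
    return [i for _, i in scored[:budget]]

# Two toy "models" that agree on sample 0 and disagree on sample 1:
committee = [lambda x: x * 0.5,
             lambda x: x * 0.5 + (0.4 if x > 1 else 0.0)]
picked = select_for_annotation([1.0, 2.0], committee, budget=1)
print(picked)  # -> [1]
```

The paper's three strategies then modify how such uncertain samples are weighted and how diversity is enforced, rather than the basic selection loop itself.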


Subjects
Magnetic Resonance Imaging, Algorithms, Image Processing, Computer-Assisted, Tomography, X-Ray Computed, Uncertainty
18.
IEEE Trans Med Imaging; 40(4): 1113-1122, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33351753

ABSTRACT

Multi-domain data are widely leveraged in vision applications taking advantage of complementary information from different modalities, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI). However, due to possible data corruption and different imaging protocols, the availability of images for each domain could vary amongst multiple data sources in practice, which makes it challenging to build a universal model with a varied set of input data. To tackle this problem, we propose a general approach to complete the random missing domain(s) data in real applications. Specifically, we develop a novel multi-domain image completion method that utilizes a generative adversarial network (GAN) with a representational disentanglement scheme to extract shared content encoding and separate style encoding across multiple domains. We further illustrate that the learned representation in multi-domain image completion could be leveraged for high-level tasks, e.g., segmentation, by introducing a unified framework consisting of image completion and segmentation with a shared content encoder. The experiments demonstrate consistent performance improvement on three datasets for brain tumor segmentation, prostate segmentation, and facial expression image completion respectively.


Subjects
Brain Neoplasms , Image Processing, Computer-Assisted , Brain Neoplasms/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male
19.
Med Image Anal ; 70: 101992, 2021 05.
Article in English | MEDLINE | ID: mdl-33601166

ABSTRACT

The recent outbreak of Coronavirus Disease 2019 (COVID-19) has led to urgent needs for reliable diagnosis and management of SARS-CoV-2 infection. Current guidelines use RT-PCR for testing. As a complementary diagnostic-imaging tool, chest computed tomography (CT) has been shown to reveal visual patterns characteristic of COVID-19, which have definite value at several stages of the disease course. To facilitate CT analysis, recent efforts have focused on computer-aided characterization and diagnosis from chest CT scans, with promising results. However, domain shift of data across clinical data centers poses a serious challenge when deploying learning-based models. A common way to alleviate this issue is to fine-tune the model locally with the target domain's data and annotations. Unfortunately, the availability and quality of local annotations usually vary due to heterogeneity in equipment and in the distribution of medical resources across the globe. This impact may be pronounced in the detection of COVID-19, since the relevant patterns vary in size, shape, and texture. In this work, we attempt to address this challenge via federated and semi-supervised learning. A multinational database of 1704 scans from three countries is adopted to study the performance gap when a model trained on one dataset is applied to another. Expert radiologists manually delineated 945 scans for COVID-19 findings. To handle the variability in both data and annotations, a novel federated semi-supervised learning technique is proposed to fully utilize all available data, with or without annotations. Federated learning avoids the need for sensitive data sharing, which makes it favorable for institutions and nations with strict regulatory policies on data privacy. Moreover, semi-supervision potentially reduces the annotation burden in a distributed setting.
The proposed framework is shown to be effective compared with fully supervised scenarios that rely on conventional data sharing rather than model-weight sharing.
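A federated semi-supervised round of the kind described above can be sketched as follows, using a toy linear model: each site trains locally (unlabeled sites pseudo-label their data with the current global model, one common stand-in for the paper's semi-supervision), and only model weights are averaged centrally, never the scans themselves. Function names and the pseudo-labeling rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def local_update(weights, X, y=None, lr=0.1, steps=5):
    """One site's local training step on a toy logistic model.

    If y is None (an unlabeled site), pseudo-labels are generated from
    the current global model before training.
    """
    w = weights.copy()
    if y is None:
        y = (X @ w > 0).astype(float)  # pseudo-labels from the global model
    for _ in range(steps):
        pred = 1.0 / (1.0 + np.exp(-(X @ w)))     # sigmoid predictions
        w -= lr * X.T @ (pred - y) / len(X)       # gradient of logistic loss
    return w

def federated_round(global_w, clients):
    """FedAvg-style aggregation: average local weights, weighted by site size."""
    updates, sizes = [], []
    for X, y in clients:               # y may be None for unlabeled sites
        updates.append(local_update(global_w, X, y))
        sizes.append(len(X))
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))
```

Only the weight vectors cross institutional boundaries, which is what makes the approach compatible with strict data-privacy regulation.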


Subjects
COVID-19/diagnostic imaging , Supervised Machine Learning , Tomography, X-Ray Computed , China , Humans , Italy , Japan
20.
J Am Med Inform Assoc ; 28(6): 1259-1264, 2021 06 12.
Article in English | MEDLINE | ID: mdl-33537772

ABSTRACT

OBJECTIVE: To demonstrate enabling multi-institutional training without centralizing or sharing the underlying physical data via federated learning (FL). MATERIALS AND METHODS: Deep learning models were trained at each participating institution using local clinical data, and an additional model was trained using FL across all of the institutions. RESULTS: We found that the FL model exhibited superior performance and generalizability to the models trained at single institutions, with an overall performance level that was significantly better than that of any of the institutional models alone when evaluated on held-out test sets from each institution and an outside challenge dataset. DISCUSSION: The power of FL was successfully demonstrated across 3 academic institutions while avoiding the privacy risk associated with the transfer and pooling of patient data. CONCLUSION: Federated learning is an effective methodology that merits further study to enable accelerated development of models across institutions, enabling greater generalizability in clinical use.


Subjects
Deep Learning , Information Dissemination , Humans , Privacy