Results 1 - 11 of 11
1.
Radiol Artif Intell ; 6(5): e230521, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39166972

ABSTRACT

Purpose To determine whether the unsupervised domain adaptation (UDA) method with generated images improves the performance of a supervised learning (SL) model for prostate cancer (PCa) detection using multisite biparametric (bp) MRI datasets. Materials and Methods This retrospective study included data from 5150 patients (14 191 samples) collected across nine different imaging centers. A novel UDA method using a unified generative model was developed for PCa detection using multisite bpMRI datasets. This method translates diffusion-weighted imaging (DWI) acquisitions, including apparent diffusion coefficient (ADC) and individual diffusion-weighted (DW) images acquired using various b values, to align with the style of images acquired using b values recommended by Prostate Imaging Reporting and Data System (PI-RADS) guidelines. The generated ADC and DW images replace the original images for PCa detection. An independent set of 1692 test cases (2393 samples) was used for evaluation. The area under the receiver operating characteristic curve (AUC) was used as the primary metric, and statistical analysis was performed via bootstrapping. Results For all test cases, the AUC values for the baseline SL and UDA methods were 0.73 and 0.79 (P < .001), respectively, for PCa lesions with PI-RADS scores of 3 or greater and 0.77 and 0.80 (P < .001) for lesions with PI-RADS scores of 4 or greater. In the 361 test cases under the most unfavorable image acquisition setting, the AUC values for baseline SL and UDA were 0.49 and 0.76 (P < .001) for lesions with PI-RADS scores of 3 or greater and 0.50 and 0.77 (P < .001) for lesions with PI-RADS scores of 4 or greater. Conclusion UDA with generated images improved the performance of SL methods in PCa lesion detection across multisite datasets with various b values, especially for images acquired with significant deviations from the PI-RADS-recommended DWI protocol (eg, with an extremely high b value). Keywords: Prostate Cancer Detection, Multisite, Unsupervised Domain Adaptation, Diffusion-weighted Imaging, b Value. Supplemental material is available for this article. © RSNA, 2024.
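Editor's note: as a rough illustration of the statistical comparison described above, the following is a minimal sketch of a paired bootstrap over test cases to compare the AUCs of two detectors (e.g., a baseline SL model versus a UDA model). The array names and number of resamples are illustrative assumptions, not code from the article.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_comparison(y_true, scores_baseline, scores_uda,
                             n_boot=2000, seed=0):
    """Paired bootstrap of the AUC difference between two models scored
    on the same test cases (all array names are hypothetical)."""
    rng = np.random.default_rng(seed)
    diffs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:     # AUC needs both classes present
            continue
        diffs.append(roc_auc_score(y_true[idx], scores_uda[idx]) -
                     roc_auc_score(y_true[idx], scores_baseline[idx]))
    diffs = np.asarray(diffs)
    ci_low, ci_high = np.percentile(diffs, [2.5, 97.5])   # 95% bootstrap CI
    return diffs.mean(), (ci_low, ci_high)
```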


Subject(s)
Deep Learning; Prostatic Neoplasms; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Retrospective Studies; Middle Aged; Aged; Image Interpretation, Computer-Assisted/methods; Multiparametric Magnetic Resonance Imaging/methods; Diffusion Magnetic Resonance Imaging/methods; Prostate/diagnostic imaging; Prostate/pathology; Magnetic Resonance Imaging/methods
2.
J Magn Reson Imaging ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38826142

ABSTRACT

BACKGROUND: The number of focal liver lesions (FLLs) detected by imaging has increased worldwide, highlighting the need for a robust, objective system for automatically detecting FLLs. PURPOSE: To assess the performance of deep learning-based artificial intelligence (AI) software in identifying and measuring lesions on contrast-enhanced magnetic resonance imaging (MRI) images in patients with FLLs. STUDY TYPE: Retrospective. SUBJECTS: 395 patients with 1149 FLLs. FIELD STRENGTH/SEQUENCE: 1.5 T and 3 T scanners; sequences included T1-weighted, T2-weighted, diffusion-weighted, in/out-of-phase, and dynamic contrast-enhanced imaging. ASSESSMENT: The diagnostic performance of the AI, the radiologists, and their combination was compared. Using 20 mm as the cut-off value, lesions were divided into two groups and further into four subgroups (<10, 10-20, 20-40, and ≥40 mm) to evaluate the sensitivity of radiologists and AI in detecting lesions of different sizes. We compared the pathologic sizes of 122 surgically resected lesions with measurements obtained using AI and those made by radiologists. STATISTICAL TESTS: McNemar test, Bland-Altman analyses, Friedman test, Pearson's chi-squared test, Fisher's exact test, Dice coefficient, and intraclass correlation coefficients. A P-value <0.05 was considered statistically significant. RESULTS: The average Dice coefficient of the AI in segmentation of liver lesions was 0.62. The combination of AI and radiologist outperformed the radiologist alone, with a significantly higher detection rate (0.894 vs. 0.825) and sensitivity (0.883 vs. 0.806). The AI showed significantly higher sensitivity than radiologists in detecting all lesions <20 mm (0.848 vs. 0.788). Both AI and radiologists achieved excellent detection performance for lesions ≥20 mm (0.867 vs. 0.881, P = 0.671). The average tumor sizes agreed closely among the three measurements (P = 0.174). DATA CONCLUSION: AI software based on deep learning exhibited practical value in automatically identifying and measuring liver lesions. TECHNICAL EFFICACY: Stage 2.
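Editor's note: the Dice coefficient reported above for lesion segmentation can be computed as in the generic sketch below; this operates on binary masks and is not code from the study.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice overlap between a predicted and a reference lesion mask
    (arrays of 0/1 or booleans with the same shape)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```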

3.
Prostate ; 83(9): 871-878, 2023 06.
Article in English | MEDLINE | ID: mdl-36959777

ABSTRACT

BACKGROUND: Multiparametric MRI (mpMRI) improves the detection of aggressive prostate cancer (PCa) subtypes. As cases of active surveillance (AS) increase and tumor progression triggers definitive treatment, we evaluated whether an AI-driven algorithm can detect clinically significant PCa (csPCa) in patients under AS. METHODS: Consecutive patients under AS who underwent mpMRI (PI-RADS v2.1 protocol) and subsequent MR-guided ultrasound fusion (targeted and extensive systematic) biopsy between 2017 and 2020 were retrospectively analyzed. The diagnostic performance of an automated, clinically certified AI-driven algorithm for the detection of csPCa was evaluated at both the lesion and patient level. RESULTS: Analysis of 56 patients yielded 93 target lesions. Patient-level sensitivity and specificity of the AI algorithm were 92.5%/31% for the detection of ISUP ≥ 1 and 96.4%/25% for the detection of ISUP ≥ 2, respectively. The only case of csPCa missed by the AI harbored Gleason 7a in only 1 of 47 cores (systematic biopsy; previous and subsequent biopsies rendered non-csPCa). CONCLUSIONS: AI-augmented lesion detection and PI-RADS scoring is a robust tool for detecting progression to csPCa in patients under AS. Integration into the clinical workflow can reassure the reader and streamline reporting, thereby improving efficiency and diagnostic confidence.
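Editor's note: the patient-level sensitivity and specificity quoted above follow directly from a 2x2 confusion table; a generic sketch (not from the study) is shown below.

```python
import numpy as np

def patient_level_metrics(y_true, y_pred):
    """Sensitivity and specificity from per-patient binary labels
    (1 = csPCa present / flagged by the AI, 0 = absent / not flagged)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```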


Subject(s)
Prostatic Neoplasms; Male; Humans; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Magnetic Resonance Imaging/methods; Retrospective Studies; Watchful Waiting; Image-Guided Biopsy/methods; Artificial Intelligence
4.
Invest Radiol ; 58(6): 405-412, 2023 06 01.
Article in English | MEDLINE | ID: mdl-36728041

ABSTRACT

BACKGROUND: Detection of rotator cuff tears, a common cause of shoulder disability, can be time-consuming and subject to reader variability. Deep learning (DL) has the potential to increase radiologist accuracy and consistency. PURPOSE: The aim of this study was to develop a prototype DL model for detection and classification of rotator cuff tears on shoulder magnetic resonance imaging into no tear, partial-thickness tear, or full-thickness tear. MATERIALS AND METHODS: This Health Insurance Portability and Accountability Act-compliant, institutional review board-approved study included a total of 11,925 noncontrast shoulder magnetic resonance imaging scans from 2 institutions, with 11,405 for development and 520 dedicated for final testing. A DL ensemble algorithm was developed that used 4 series as input from each examination: fluid-sensitive sequences in 3 planes and a sagittal oblique T1-weighted sequence. Radiology reports served as ground truth for training, with categories of no tear, partial tear, or full-thickness tear. A multireader study was conducted for the test set ground truth, which was determined by the majority vote of 3 readers per case. The ensemble comprised 4 parallel 3D ResNet50 convolutional neural network architectures trained via transfer learning and then adapted to the targeted domain. The final tear-type prediction was the class with the highest probability after averaging the class probabilities of the 4 individual models. RESULTS: The overall AUCs for supraspinatus, infraspinatus, and subscapularis tendon tears were 0.93, 0.89, and 0.90, respectively. The model performed best for full-thickness supraspinatus, infraspinatus, and subscapularis tears, with AUCs of 0.98, 0.99, and 0.95, respectively. Multisequence input demonstrated higher AUCs than single-sequence input for infraspinatus and subscapularis tendon tears, whereas coronal oblique fluid-sensitive and multisequence input showed similar AUCs for supraspinatus tendon tears. Model accuracy for tear types and overall accuracy were similar to those of the clinical readers. CONCLUSIONS: Deep learning diagnosis of rotator cuff tears is feasible with excellent diagnostic performance, particularly for full-thickness tears, with model accuracy similar to that of subspecialty-trained musculoskeletal radiologists.
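Editor's note: the ensemble rule described above (average the class probabilities of the individual models, then take the most probable class) could look like the following PyTorch sketch; the model and input names are placeholders, not the authors' code.

```python
import torch

@torch.no_grad()
def ensemble_tear_prediction(models, series_inputs):
    """Average softmax probabilities from several trained classifiers and
    return the most probable class (e.g., 0 = no tear, 1 = partial-thickness,
    2 = full-thickness). `models` and `series_inputs` are parallel lists,
    one entry per input MRI series (hypothetical)."""
    probs = []
    for model, volume in zip(models, series_inputs):
        model.eval()
        logits = model(volume.unsqueeze(0))        # add a batch dimension
        probs.append(torch.softmax(logits, dim=1))
    mean_probs = torch.stack(probs).mean(dim=0)    # average over the ensemble
    return mean_probs.argmax(dim=1)
```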


Subject(s)
Deep Learning; Rotator Cuff Injuries; Humans; Rotator Cuff Injuries/diagnostic imaging; Rotator Cuff Injuries/pathology; Shoulder; Rotator Cuff/pathology; Magnetic Resonance Imaging/methods
5.
Cancer Imaging ; 23(1): 6, 2023 Jan 17.
Article in English | MEDLINE | ID: mdl-36647150

ABSTRACT

BACKGROUND: Deep-learning-based computer-aided diagnosis (DL-CAD) systems using MRI for prostate cancer (PCa) detection have demonstrated good performance. Nevertheless, DL-CAD systems are vulnerable to high heterogeneity in DWI, which can interfere with DL-CAD assessments and impair performance. This study aims to compare the PCa detection performance of DL-CAD between zoomed-field-of-view echo-planar DWI (z-DWI) and full-field-of-view DWI (f-DWI) and to identify the risk factors affecting DL-CAD diagnostic efficiency. METHODS: This retrospective study enrolled 354 consecutive participants who underwent MRI, including T2WI, f-DWI, and z-DWI, because of clinically suspected PCa. A DL-CAD was used to compare the performance of f-DWI and z-DWI at both the patient level and the lesion level. The area under the curve (AUC) of receiver operating characteristic analysis and alternative free-response receiver operating characteristic analysis were used to compare the performance of DL-CAD using f-DWI and z-DWI. Risk factors affecting DL-CAD performance were analyzed using logistic regression analyses. P values less than 0.05 were considered statistically significant. RESULTS: DL-CAD with z-DWI had significantly better overall accuracy than DL-CAD with f-DWI at both the patient level and the lesion level (patient-level AUC: 0.89 vs. 0.86; lesion-level AUC: 0.86 vs. 0.76; P < .001). The contrast-to-noise ratio (CNR) of lesions in DWI was an independent risk factor for false positives (odds ratio [OR] = 1.12; P < .001). Rectal susceptibility artifacts, lesion diameter, and apparent diffusion coefficients (ADC) were independent risk factors for both false positives (OR: rectal susceptibility artifact = 5.46, diameter = 1.12, ADC = 0.998; all P < .001) and false negatives (OR: rectal susceptibility artifact = 3.31, diameter = 0.82, ADC = 1.007; all P ≤ .03) of DL-CAD. CONCLUSIONS: z-DWI has the potential to improve the detection performance of a prostate MRI-based DL-CAD. TRIAL REGISTRATION: ChiCTR, No. ChiCTR2100041834. Registered 7 January 2021.
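Editor's note: the odds ratios for false positives and false negatives above come from multivariable logistic regression; a minimal sketch of such a model (using statsmodels, with a hypothetical per-lesion table) is given below.

```python
import numpy as np
import statsmodels.api as sm

def fit_risk_factor_model(X, y):
    """Multivariable logistic regression of a binary outcome (e.g., whether
    DL-CAD produced a false positive for a lesion) on candidate risk factors
    such as CNR, rectal susceptibility artifact, diameter, and ADC.
    Exponentiated coefficients are interpreted as odds ratios."""
    X = sm.add_constant(X)                   # add intercept column
    result = sm.Logit(y, X).fit(disp=False)
    odds_ratios = np.exp(result.params)
    or_ci = np.exp(result.conf_int())        # 95% CI on the odds-ratio scale
    return odds_ratios, or_ci, result.pvalues
```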


Subject(s)
Deep Learning; Prostatic Neoplasms; Male; Humans; Retrospective Studies; Reproducibility of Results; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Magnetic Resonance Imaging/methods; Diffusion Magnetic Resonance Imaging/methods
6.
J Magn Reson Imaging ; 58(4): 1055-1064, 2023 10.
Article in English | MEDLINE | ID: mdl-36651358

ABSTRACT

BACKGROUND: Demand for prostate MRI is increasing, but scan times remain long even in abbreviated biparametric MRIs (bpMRI). Deep learning can be leveraged to accelerate T2-weighted imaging (T2WI). PURPOSE: To compare conventional bpMRI (CL-bpMRI) with bpMRI including deep learning-accelerated T2WI (DL-bpMRI) in diagnosing prostate cancer. STUDY TYPE: Retrospective. POPULATION: Eighty consecutive men (mean age, 66 years; range, 47-84 years) with suspected prostate cancer or prostate cancer on active surveillance who underwent prostate MRI from December 28, 2020, to April 28, 2021, were included. Follow-up included prostate biopsy or stability of prostate-specific antigen (PSA) for 1 year. FIELD STRENGTH AND SEQUENCES: 3 T MRI. Conventional axial and coronal T2 turbo spin echo (CL-T2), 3-fold deep learning-accelerated axial and coronal T2-weighted sequences (DL-T2), and diffusion-weighted imaging (DWI) with b = 50 sec/mm2 and 1000 sec/mm2 and calculated b = 1500 sec/mm2. ASSESSMENT: CL-bpMRI and DL-bpMRI, including the same conventional DWI, were presented to three radiologists (blinded to acquisition method) and to a deep learning computer-assisted detection algorithm (DL-CAD). The readers evaluated image quality using a 4-point Likert scale (1 = nondiagnostic, 4 = excellent) and graded lesions using Prostate Imaging Reporting and Data System (PI-RADS) v2.1. DL-CAD identified and assigned lesions of PI-RADS 3 or greater. STATISTICAL TESTS: Quality metrics were compared using the Wilcoxon signed-rank test, and areas under the receiver operating characteristic curve (AUC) were compared using DeLong's test. SIGNIFICANCE: P = 0.05. RESULTS: Eighty men were included (age: 66 ± 9 years; 17/80 with clinically significant prostate cancer). Overall image quality results by the three readers (CL-T2, DL-T2) were reader 1: 3.72 ± 0.53, 3.89 ± 0.39 (P = 0.99); reader 2: 3.33 ± 0.82, 3.31 ± 0.74 (P = 0.49); reader 3: 3.67 ± 0.63, 3.51 ± 0.62. In the patient-based analysis, the reader AUCs (CL-bpMRI, DL-bpMRI) were reader 1: 0.77, 0.78 (P = 0.98); reader 2: 0.65, 0.66 (P = 0.99); reader 3: 0.57, 0.60 (P = 0.52). Diagnostic statistics from DL-CAD (CL-bpMRI, DL-bpMRI) were sensitivity (0.71, 0.71, P = 1.00), specificity (0.59, 0.44, P = 0.05), positive predictive value (0.23, 0.24, P = 0.25), and negative predictive value (0.88, 0.88, P = 0.48). CONCLUSION: Deep learning-accelerated T2-weighted imaging may potentially be used to decrease acquisition time for bpMRI. EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
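Editor's note: the paired image-quality comparison above (per-reader Likert scores for CL-T2 vs. DL-T2) can be run with a Wilcoxon signed-rank test; the snippet below is a generic sketch with made-up variable names, not code from the study.

```python
from scipy.stats import wilcoxon

def compare_image_quality(scores_cl_t2, scores_dl_t2):
    """Paired Wilcoxon signed-rank test on per-case Likert quality scores
    given by the same reader to the conventional and the deep
    learning-accelerated T2-weighted series."""
    statistic, p_value = wilcoxon(scores_cl_t2, scores_dl_t2)
    return statistic, p_value
```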


Subject(s)
Deep Learning; Prostatic Neoplasms; Male; Humans; Aged; Middle Aged; Magnetic Resonance Imaging/methods; Prostate/diagnostic imaging; Prostate/pathology; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Retrospective Studies
7.
Ultrasonography ; 42(1): 154-164, 2023 01.
Article in English | MEDLINE | ID: mdl-36475357

ABSTRACT

PURPOSE: The aim of this study was to evaluate the accuracy of prostate volume estimates calculated from the ellipsoid formula using the anteroposterior (AP) diameter measured on axial and sagittal images obtained through ultrasonography (US) and magnetic resonance imaging (MRI). METHODS: This retrospective study included 456 patients with transrectal US and MRI from two university hospitals. Two radiologists independently measured the prostate gland diameters on US and MRI: AP diameters on axial and sagittal images, and transverse and longitudinal diameters on midsagittal images. The volume estimates, volume(ax) and volume(sag), were calculated from the ellipsoid formula using the AP diameter from axial and sagittal images, respectively. The prostate volume extracted from MRI-based whole-gland segmentation was considered the gold standard. The intraclass correlation coefficient (ICC) was used to evaluate the inter-method agreement between volume(ax) and volume(sag) and their agreement with the gold standard. The Wilcoxon signed-rank test was used to analyze the differences between the volume estimates and the gold standard. RESULTS: The prostate gland volume estimates showed excellent inter-method agreement and excellent agreement with the gold standard (ICCs >0.9). Compared with the gold standard, the volume estimates were significantly larger on MRI and significantly smaller on US (P<0.001). The volume difference (segmented volume minus volume estimate) was greater in patients with larger prostate glands, especially on US. CONCLUSION: Volume(ax) and volume(sag) showed excellent inter-method agreement and excellent agreement with the gold standard on both US and MRI. However, prostate volume was overestimated on MRI and underestimated on US.
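Editor's note: the ellipsoid formula referred to above is the standard prolate-ellipsoid volume estimate, V = π/6 × AP × transverse × longitudinal; a short sketch with illustrative diameters (values in cm are made up) follows.

```python
import math

def ellipsoid_volume(ap_cm, transverse_cm, longitudinal_cm):
    """Prostate volume (mL) from three orthogonal diameters in cm,
    V = pi/6 * AP * TR * CC (equivalently about 0.52 * product)."""
    return math.pi / 6.0 * ap_cm * transverse_cm * longitudinal_cm

# Two estimates for the same gland, differing only in where the AP
# diameter was measured (axial vs. sagittal plane); values are illustrative.
volume_ax = ellipsoid_volume(4.1, 4.8, 3.9)
volume_sag = ellipsoid_volume(4.3, 4.8, 3.9)
```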

8.
Eur Radiol ; 33(1): 64-76, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35900376

ABSTRACT

OBJECTIVES: To evaluate the effect of a deep learning-based computer-aided diagnosis (DL-CAD) system on experienced and less-experienced radiologists in reading prostate mpMRI. METHODS: In this retrospective, multi-reader multi-case study, a consecutive set of 184 patients examined between 01/2018 and 08/2019 was enrolled. Ground truth was the combination of targeted and 12-core systematic transrectal ultrasound-guided biopsy. Four radiologists, two experienced and two less experienced, evaluated each case twice, once without (DL-CAD-) and once assisted by DL-CAD (DL-CAD+). ROC analysis, sensitivities, specificities, PPV, and NPV were calculated to compare the diagnostic accuracy for the diagnosis of prostate cancer (PCa) between the two groups (DL-CAD- vs. DL-CAD+). Spearman's correlation coefficients were evaluated to assess the relationship between PI-RADS category and Gleason score (GS). The median reading times of the two reading groups were also compared. RESULTS: In total, 172 patients were included in the final analysis. With DL-CAD assistance, the overall AUC of the less-experienced radiologists increased significantly from 0.66 to 0.80 (p = 0.001; cutoff ISUP GG ≥ 1) and from 0.68 to 0.80 (p = 0.002; cutoff ISUP GG ≥ 2). Experienced radiologists showed an AUC increase from 0.81 to 0.86 (p = 0.146; cutoff ISUP GG ≥ 1) and from 0.81 to 0.84 (p = 0.433; cutoff ISUP GG ≥ 2). Furthermore, the correlation between PI-RADS category and GS improved significantly in the DL-CAD+ group (0.45 vs. 0.57; p = 0.03), while the median reading time was reduced from 157 to 150 s (p = 0.023). CONCLUSIONS: DL-CAD assistance increased the mean detection performance, with the greatest benefit for the less-experienced radiologists; with the help of DL-CAD, less-experienced radiologists reached performances comparable to those of experienced radiologists. KEY POINTS: • DL-CAD used as a concurrent reading aid helps radiologists to distinguish between benign and cancerous lesions in prostate MRI. • With the help of DL-CAD, less-experienced radiologists may achieve detection performances comparable to those of experienced radiologists. • DL-CAD assistance increases the correlation between PI-RADS category and cancer grade.
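Editor's note: the PI-RADS-to-Gleason relationship reported above is a Spearman rank correlation over paired values; a generic sketch (hypothetical inputs, not the authors' code) is shown below.

```python
from scipy.stats import spearmanr

def pirads_gleason_correlation(pirads_categories, gleason_scores):
    """Spearman rank correlation between assigned PI-RADS categories and
    biopsy Gleason scores (hypothetical paired sequences)."""
    rho, p_value = spearmanr(pirads_categories, gleason_scores)
    return rho, p_value
```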


Subject(s)
Deep Learning; Multiparametric Magnetic Resonance Imaging; Prostatic Neoplasms; Male; Humans; Prostate/diagnostic imaging; Prostate/pathology; Magnetic Resonance Imaging; Retrospective Studies; Prostatic Neoplasms/pathology; Neoplasm Grading; Image-Guided Biopsy; Radiologists; Computers
9.
Eur J Radiol ; 142: 109894, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34388625

ABSTRACT

PURPOSE: To compare the performance of lesion detection and Prostate Imaging-Reporting and Data System (PI-RADS) classification between a deep learning-based algorithm (DLA), clinical reports, and radiologists with different levels of experience in prostate MRI. METHODS: This retrospective study included 121 patients who underwent prebiopsy MRI and prostate biopsy. More than five radiologists (Reader groups 1 and 2: residents; Readers 3 and 4: less-experienced radiologists; Reader 5: expert) independently reviewed biparametric MRI (bpMRI). The DLA results were obtained using bpMRI. The reference standard was based on pathologic reports. The diagnostic performance of the PI-RADS classification by the DLA, clinical reports, and radiologists was analyzed using the AUROC. Dichotomous analysis (PI-RADS cutoff value ≥ 3 or ≥ 4) was performed, and sensitivities and specificities were compared using McNemar's test. RESULTS: Clinically significant cancer (CSC, Gleason score ≥ 7) was confirmed in 43 patients (35.5%). The AUROC of the DLA (0.828) for diagnosing CSC was significantly higher than that of Reader 1 (AUROC, 0.706; p = 0.011), significantly lower than that of Reader 5 (AUROC, 0.914; p = 0.013), and similar to those of the clinical reports and the other readers (p = 0.060-0.661). The sensitivity of the DLA (76.7%) was comparable to those of all readers and the clinical reports at a PI-RADS cutoff value ≥ 4. The specificity of the DLA (85.9%) was significantly higher than those of the clinical reports and Readers 2-3 and comparable to all others at a PI-RADS cutoff value ≥ 4. CONCLUSIONS: The DLA showed moderate diagnostic performance, at a level between those of residents and an expert, in detecting and classifying lesions according to PI-RADS. The performance of the DLA was similar to that of clinical reports from various radiologists in clinical practice.
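Editor's note: the sensitivity and specificity comparisons above use McNemar's test on paired reads of the same cases; the following is a minimal sketch (statsmodels, hypothetical inputs).

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def compare_paired_detection(dla_positive, reader_positive):
    """McNemar test on paired binary calls (True = PI-RADS at or above the
    chosen cutoff) from the DLA and a reader for the same cases. Restrict
    the inputs to cancer-positive cases to compare sensitivities, or to
    cancer-negative cases to compare specificities."""
    a = np.asarray(dla_positive, dtype=bool)
    b = np.asarray(reader_positive, dtype=bool)
    table = [[np.sum(a & b),  np.sum(a & ~b)],
             [np.sum(~a & b), np.sum(~a & ~b)]]
    return mcnemar(table, exact=True)   # result has .statistic and .pvalue
```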


Subject(s)
Deep Learning; Prostatic Neoplasms; Humans; Magnetic Resonance Imaging; Male; Prostatic Neoplasms/diagnostic imaging; Radiologists; Retrospective Studies
10.
Invest Radiol ; 56(10): 605-613, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33787537

ABSTRACT

OBJECTIVE: The aim of this study was to evaluate the effect of a deep learning-based computer-aided diagnosis (DL-CAD) system on radiologists' interpretation accuracy and efficiency in reading biparametric prostate magnetic resonance imaging scans. MATERIALS AND METHODS: We selected 100 consecutive prostate magnetic resonance imaging cases from a publicly available data set (PROSTATEx Challenge) with and without histopathologically confirmed prostate cancer. Seven board-certified radiologists were tasked with reading each case twice in 2 reading blocks (with and without the assistance of a DL-CAD), with a separation between the 2 reading sessions of at least 2 weeks. Reading tasks were to localize and classify lesions according to Prostate Imaging Reporting and Data System (PI-RADS) v2.0 and to assign a radiologist's level of suspicion score (scale from 1 to 5 in 0.5 increments; 1, benign; 5, malignant). Ground truth was established by consensus readings of 3 experienced radiologists. Detection performance (receiver operating characteristic curves), variability (Fleiss κ), and average reading time without and with DL-CAD assistance were evaluated. RESULTS: The average accuracy of radiologists in terms of area under the curve in detecting clinically significant cases (PI-RADS ≥4) was 0.84 (95% confidence interval [CI], 0.79-0.89), whereas the same using DL-CAD was 0.88 (95% CI, 0.83-0.94), an improvement of 4.4% (95% CI, 1.1%-7.7%; P = 0.010). Interreader concordance (in terms of Fleiss κ) increased from 0.22 to 0.36 (P = 0.003). Accuracy of radiologists in detecting cases with PI-RADS ≥3 was improved by 2.9% (P = 0.10). The median reading time in the unaided/aided scenario was reduced by 21%, from 103 to 81 seconds (P < 0.001). CONCLUSIONS: Using a DL-CAD system increased the diagnostic accuracy in detecting highly suspicious prostate lesions and reduced both the interreader variability and the reading time.
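Editor's note: interreader concordance above is summarized with Fleiss' κ; a generic sketch using statsmodels follows (the layout of the ratings array is an assumption, not taken from the article).

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def interreader_fleiss_kappa(ratings):
    """Fleiss' kappa for agreement of several readers on the same cases.
    `ratings` is an (n_cases, n_readers) array of categorical labels,
    e.g. dichotomized suspicion scores."""
    counts, _ = aggregate_raters(np.asarray(ratings))  # cases x categories
    return fleiss_kappa(counts)
```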


Subject(s)
Deep Learning; Prostatic Neoplasms; Computers; Humans; Magnetic Resonance Imaging; Male; Prostatic Neoplasms/diagnostic imaging; Radiologists; Retrospective Studies
11.
Sci Rep ; 11(1): 6876, 2021 03 25.
Article in English | MEDLINE | ID: mdl-33767226

ABSTRACT

With the rapid growth and increasing use of brain MRI, there is interest in automated image classification to aid human interpretation and improve workflow. We aimed to train a deep convolutional neural network and assess its performance in identifying abnormal brain MRIs and critical intracranial findings, including acute infarction, acute hemorrhage, and mass effect. A total of 13,215 clinical brain MRI studies were categorized into training (74%), validation (9%), internal testing (8%), and external testing (8%) datasets. Up to eight contrasts were included from each brain MRI, and each image volume was reformatted to a common resolution to account for differences between scanners. After reviewing the radiology reports, three neuroradiologists classified each study as abnormal or normal and identified three critical findings: acute infarction, acute hemorrhage, and mass effect. A deep convolutional neural network was constructed from a combination of localization feature extraction (LFE) modules and global classifiers to identify the presence of four labels in brain MRIs: abnormal, acute infarction, acute hemorrhage, and mass effect. Training, validation, and testing sets were randomly defined on a patient basis. Training was performed on 9845 studies using balanced sampling to address class imbalance. Receiver operating characteristic (ROC) analysis was performed. The ROC analysis of our models for 1050 studies within our internal test data showed AUC/sensitivity/specificity of 0.91/83%/86% for normal versus abnormal brain MRI, 0.95/92%/88% for acute infarction, 0.90/89%/81% for acute hemorrhage, and 0.93/93%/85% for mass effect. For 1072 studies within our external test data, it showed AUC/sensitivity/specificity of 0.88/80%/80% for normal versus abnormal brain MRI, 0.97/90%/97% for acute infarction, 0.83/72%/88% for acute hemorrhage, and 0.87/79%/81% for mass effect. Our proposed deep convolutional network can accurately identify abnormal and critical intracranial findings on individual brain MRIs while accommodating the fact that some MR contrasts might not be available in individual studies.
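Editor's note: the "balanced sampling to address class imbalance" mentioned above is commonly implemented by weighting each study inversely to its class frequency; the PyTorch sketch below illustrates one such approach (dataset and label names are placeholders, not the authors' pipeline).

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def make_balanced_loader(dataset, labels, batch_size=8):
    """Oversample under-represented classes so that training batches are
    roughly class-balanced; `labels` holds one integer class per study."""
    labels = np.asarray(labels)
    class_counts = np.bincount(labels)
    sample_weights = 1.0 / class_counts[labels]   # rarer class -> larger weight
    sampler = WeightedRandomSampler(
        weights=torch.as_tensor(sample_weights, dtype=torch.double),
        num_samples=len(labels),
        replacement=True,
    )
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```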


Subject(s)
Brain/anatomy & histology; Deep Learning; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Multiparametric Magnetic Resonance Imaging/methods; Neural Networks, Computer; Neuroimaging/methods; Humans; ROC Curve