Results 1 - 2 of 2
1.
Radiol Artif Intell; 6(3): e230318, 2024 May.
Article in English | MEDLINE | ID: mdl-38568095

ABSTRACT

Purpose: To develop an artificial intelligence (AI) model for the diagnosis of breast cancer on digital breast tomosynthesis (DBT) images and to investigate whether it could improve diagnostic accuracy and reduce radiologist reading time.

Materials and Methods: A deep learning AI algorithm was developed and validated for DBT with retrospectively collected examinations (January 2010 to December 2021) from 14 institutions in the United States and South Korea. A multicenter reader study was performed to compare the performance of 15 radiologists (seven breast specialists, eight general radiologists) in interpreting DBT examinations in 258 women (mean age, 56 years ± 13.41 [SD]), including 65 cancer cases, with and without the use of AI. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and reading time were evaluated.

Results: The AUC for stand-alone AI performance was 0.93 (95% CI: 0.92, 0.94). With AI, radiologists' AUC improved from 0.90 (95% CI: 0.86, 0.93) to 0.92 (95% CI: 0.88, 0.96) (P = .003) in the reader study. AI showed higher specificity (89.64% [95% CI: 85.34%, 93.94%]) than radiologists (77.34% [95% CI: 75.82%, 78.87%]) (P < .001). When reading with AI, radiologists' sensitivity increased from 85.44% (95% CI: 83.22%, 87.65%) to 87.69% (95% CI: 85.63%, 89.75%) (P = .04), with no evidence of a difference in specificity. Reading time decreased from 54.41 seconds (95% CI: 52.56, 56.27) without AI to 48.52 seconds (95% CI: 46.79, 50.25) with AI (P < .001). Interreader agreement measured by Fleiss κ increased from 0.59 to 0.62.

Conclusion: The AI model showed better diagnostic accuracy than radiologists in breast cancer detection, as well as reduced reading times. The concurrent use of AI in DBT interpretation could improve both accuracy and efficiency.

Keywords: Breast, Computer-Aided Diagnosis (CAD), Tomosynthesis, Artificial Intelligence, Digital Breast Tomosynthesis, Breast Cancer, Computer-Aided Detection, Screening

Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Bae in this issue.
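The endpoints reported above (AUC with 95% CIs and interreader agreement by Fleiss κ) are standard reader-study metrics. The study's actual analysis code is not available here; the following is a minimal sketch of how such endpoints are typically computed in Python, using entirely hypothetical per-case labels, AI scores, and reader ratings.

```python
# Illustrative reader-study metrics; all data below are hypothetical placeholders,
# not the study's data or analysis code.
import numpy as np
from sklearn.metrics import roc_auc_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=258)                                # 1 = cancer, 0 = no cancer
ai_score = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, 258), 0, 1)   # stand-alone AI output

# AUC with a percentile-bootstrap 95% CI
auc = roc_auc_score(y_true, ai_score)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if y_true[idx].min() == y_true[idx].max():      # resample must contain both classes
        continue
    boot.append(roc_auc_score(y_true[idx], ai_score[idx]))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.2f} (95% CI: {ci_lo:.2f}, {ci_hi:.2f})")

# Fleiss kappa for interreader agreement: ratings is (cases x readers),
# each entry a categorical call (e.g., 0 = no recall, 1 = recall)
ratings = rng.integers(0, 2, size=(258, 15))
table, _ = aggregate_raters(ratings)
print(f"Fleiss kappa: {fleiss_kappa(table):.2f}")
```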


Subject(s)
Artificial Intelligence; Breast Neoplasms; Mammography; Sensitivity and Specificity; Humans; Female; Breast Neoplasms/diagnostic imaging; Middle Aged; Mammography/methods; Retrospective Studies; Radiographic Image Interpretation, Computer-Assisted/methods; Republic of Korea/epidemiology; Deep Learning; Adult; Time Factors; Algorithms; United States; Reproducibility of Results
2.
Radiol Artif Intell; 5(3): e220159, 2023 May.
Article in English | MEDLINE | ID: mdl-37293346

ABSTRACT

Purpose: To develop an efficient deep neural network model that incorporates context from neighboring image sections to detect breast cancer on digital breast tomosynthesis (DBT) images.

Materials and Methods: The authors adopted a transformer architecture that analyzes neighboring sections of the DBT stack. The proposed method was compared with two baselines: an architecture based on three-dimensional (3D) convolutions and a two-dimensional model that analyzes each section individually. The models were trained with 5174 four-view DBT studies, validated with 1000 four-view DBT studies, and tested on 655 four-view DBT studies, which were retrospectively collected from nine institutions in the United States through an external entity. Methods were compared using area under the receiver operating characteristic curve (AUC), sensitivity at a fixed specificity, and specificity at a fixed sensitivity.

Results: On the test set of 655 DBT studies, both 3D models showed higher classification performance than did the per-section baseline model. The proposed transformer-based model showed a significant increase in AUC (0.88 vs 0.91, P = .002), sensitivity (81.0% vs 87.7%, P = .006), and specificity (80.5% vs 86.4%, P < .001) at clinically relevant operating points when compared with the single-DBT-section baseline. The transformer-based model used only 25% of the number of floating-point operations per second used by the 3D convolution model while demonstrating similar classification performance.

Conclusion: A transformer-based deep neural network using data from neighboring sections improved breast cancer classification performance compared with a per-section baseline model and was more efficient than a model using 3D convolutions.

Keywords: Breast, Tomosynthesis, Diagnosis, Supervised Learning, Convolutional Neural Network (CNN), Digital Breast Tomosynthesis, Breast Cancer, Deep Neural Networks, Transformers

Supplemental material is available for this article. © RSNA, 2023.
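The abstract describes a transformer that shares information across neighboring sections of the DBT stack but does not specify the architecture in detail. The sketch below illustrates the general idea only: a small 2D feature extractor applied to each section, followed by a transformer encoder over the section axis and a pooled classification head. The backbone, layer sizes, and pooling choice are assumptions, not the published model.

```python
# Illustrative "neighboring-section" transformer for a DBT stack (PyTorch).
# Architecture details are assumptions; this is not the authors' model.
import torch
import torch.nn as nn

class SectionTransformerClassifier(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        # Tiny 2D backbone applied to each section independently
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        # Transformer encoder mixes information across neighboring sections
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # cancer vs no cancer logit

    def forward(self, stack: torch.Tensor) -> torch.Tensor:
        # stack: (batch, sections, 1, H, W)
        b, s, c, h, w = stack.shape
        feats = self.backbone(stack.reshape(b * s, c, h, w)).reshape(b, s, -1)
        feats = self.encoder(feats)            # (batch, sections, d_model)
        return self.head(feats.mean(dim=1))    # pool over sections -> (batch, 1)

# Example: a batch of 2 stacks, each with 40 sections of 128 x 128 pixels
model = SectionTransformerClassifier()
logits = model(torch.randn(2, 40, 1, 128, 128))
print(logits.shape)  # torch.Size([2, 1])
```

Compared with 3D convolutions over the full stack, attention over per-section embeddings of this kind is one way to trade dense volumetric computation for a lighter sequence model, which is consistent with the efficiency gain the abstract reports.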
