1.
Eur J Radiol ; 154: 110433, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35834858

ABSTRACT

PURPOSE: To evaluate visually and quantitatively the performance of a deep-learning-based super-resolution (SR) model for microcalcifications in digital mammography. METHOD: Mammograms were consecutively collected from 5080 patients who underwent breast cancer screening from January 2015 to March 2017. Of these, 93 patients (136 breasts; mean age, 50 ± 7 years) had microcalcifications in their breasts on mammograms. We applied an artificial intelligence model known as a fast SR convolutional neural network to the mammograms. SR and original mammograms were visually evaluated by four breast radiologists using a 5-point scale (1: original mammograms strongly preferred, 5: SR mammograms strongly preferred) for the detection, diagnostic quality, contrast, sharpness, and noise of microcalcifications. Mammograms were quantitatively evaluated using a perception-based image-quality evaluator (PIQE). RESULTS: All four radiologists rated the SR mammograms better than the original ones in terms of detection, diagnostic quality, contrast, and sharpness of microcalcifications; these differences were significant by the Wilcoxon signed-rank test (p < .001). In contrast, the noise scores of three of the four radiologists were significantly lower (p < .001), favoring the original mammograms. According to PIQE, SR mammograms were rated better than the original mammograms, with a significant difference by the paired t-test (p < .001). CONCLUSION: An SR model based on deep learning can improve the visibility of microcalcifications in mammography and may aid in their detection and diagnosis on mammograms.
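The Wilcoxon signed-rank test used to compare the paired reader scores can be sketched in plain Python. This is an illustrative reimplementation returning the rank sums W+ and W-, with made-up rating data, not the study's analysis code:

```python
def wilcoxon_signed_rank(x, y):
    """W+ and W-: sums of ranks of positive and negative paired
    differences (zero differences dropped, ties get average ranks)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied absolute differences
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus
```

In practice a library routine (e.g. `scipy.stats.wilcoxon`) would also supply the p-value; the sketch shows only the rank-sum statistic the test is built on.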


Subject(s)
Breast Neoplasms , Calcinosis , Deep Learning , Adult , Artificial Intelligence , Breast Neoplasms/diagnostic imaging , Calcinosis/diagnostic imaging , Female , Humans , Mammography , Middle Aged , Reproducibility of Results
2.
Radiol Artif Intell ; 4(2): e210221, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35391769

ABSTRACT

Purpose: To develop an artificial intelligence-based model to detect mitral regurgitation on chest radiographs. Materials and Methods: This retrospective study included echocardiograms and associated chest radiographs consecutively collected at a single institution between July 2016 and May 2019. Associated radiographs were those obtained within 30 days of echocardiography. These radiographs were labeled as positive or negative for mitral regurgitation on the basis of the echocardiographic reports and were divided into training, validation, and test datasets. An artificial intelligence model was developed by using the training dataset and was tuned by using the validation dataset. To evaluate the model, the area under the curve, sensitivity, specificity, accuracy, positive predictive value, and negative predictive value were assessed by using the test dataset. Results: This study included a total of 10 367 images from 5270 patients. The training dataset included 8240 images (4216 patients), the validation dataset included 1073 images (527 patients), and the test dataset included 1054 images (527 patients). The area under the curve, sensitivity, specificity, accuracy, positive predictive value, and negative predictive value in the test dataset were 0.80 (95% CI: 0.77, 0.82), 71% (95% CI: 67, 75), 74% (95% CI: 70, 77), 73% (95% CI: 70, 75), 68% (95% CI: 64, 72), and 77% (95% CI: 73, 80), respectively. Conclusion: The developed deep learning-based artificial intelligence model may differentiate patients with and without mitral regurgitation by using chest radiographs. Keywords: Computer-aided Diagnosis (CAD), Cardiac, Heart, Valves, Supervised Learning, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. Supplemental material is available for this article. © RSNA, 2022.
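The diagnostic metrics reported above all derive from the 2×2 confusion matrix. A minimal sketch with made-up labels (not the study's data or code):

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, accuracy, PPV, and NPV
    from paired binary ground-truth and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "accuracy": (tp + tn) / len(y_true),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

Unlike the AUC, all five of these values depend on the chosen decision threshold.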

3.
PLoS One ; 17(3): e0265751, 2022.
Article in English | MEDLINE | ID: mdl-35324962

ABSTRACT

OBJECTIVES: The objective of this study was to develop and validate a state-of-the-art, deep learning (DL)-based model for detecting breast cancers on mammography. METHODS: Mammograms in a hospital development dataset, a hospital test dataset, and a clinic test dataset were retrospectively collected from January 2006 through December 2017 at Osaka City University Hospital and Medcity21 Clinic. The hospital development dataset and a publicly available digital database for screening mammography (DDSM) dataset were used to train and validate RetinaNet, a DL-based object detection model, with five-fold cross-validation. The model's sensitivity, mean false positive indications per image (mFPI), and partial area under the curve (AUC) up to 1.0 mFPI were assessed externally with both test datasets. RESULTS: The hospital development dataset, hospital test dataset, clinic test dataset, and DDSM development dataset included 3179 images (1448 malignant images), 491 images (225 malignant images), 2821 images (37 malignant images), and 1457 malignant images, respectively. The proposed model detected all cancers with a 0.45-0.47 mFPI and had partial AUCs of 0.93 in both test datasets. CONCLUSIONS: The DL-based model developed for this study detected all breast cancers with a very low mFPI. This is the highest performance reported to date and might lead to improved diagnosis of breast cancer.
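Sensitivity at a given mean false positive indications per image (mFPI) is a free-response (FROC-style) operating point. A minimal sketch of how the two numbers are computed from per-image mark counts, with hypothetical data rather than the study's evaluation code:

```python
def sensitivity_and_mfpi(per_image_marks, total_lesions):
    """per_image_marks: one (true_positive_marks, false_positive_marks)
    tuple per image; total_lesions: number of ground-truth cancers."""
    tp = sum(t for t, _ in per_image_marks)
    fp = sum(f for _, f in per_image_marks)
    sensitivity = tp / total_lesions           # fraction of lesions found
    mfpi = fp / len(per_image_marks)           # false marks per image
    return sensitivity, mfpi
```

Sweeping the detector's confidence threshold and recomputing these two values traces out the FROC curve from which a partial AUC can be read.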


Subject(s)
Breast Neoplasms , Deep Learning , Breast Neoplasms/diagnostic imaging , Early Detection of Cancer , Female , Humans , Mammography/methods , Retrospective Studies
4.
Sci Rep ; 12(1): 727, 2022 01 14.
Article in English | MEDLINE | ID: mdl-35031654

ABSTRACT

We developed and validated a deep learning (DL)-based model using the segmentation method and assessed its ability to detect lung cancer on chest radiographs. Chest radiographs for use as a training dataset and a test dataset were collected separately from January 2006 to June 2018 at our hospital. The training dataset was used to train and validate the DL-based model with five-fold cross-validation. The model's sensitivity and mean false positive indications per image (mFPI) were assessed with the independent test dataset. The training dataset included 629 radiographs with 652 nodules/masses, and the test dataset included 151 radiographs with 159 nodules/masses. The DL-based model had a sensitivity of 0.73 with 0.13 mFPI in the test dataset. Sensitivity was lower for lung cancers that overlapped with blind spots such as the pulmonary apices, pulmonary hila, chest wall, heart, and sub-diaphragmatic space (0.50-0.64) than for those in non-overlapping locations (0.87). The Dice coefficient for the 159 malignant lesions averaged 0.52. The DL-based model was able to detect lung cancers on chest radiographs with a low mFPI.
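The Dice coefficient used to score the segmentation output measures mask overlap and can be computed from flattened binary masks. A minimal sketch, not the study's implementation:

```python
def dice_coefficient(pred_mask, true_mask):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for flattened 0/1 masks."""
    inter = sum(p & t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    # convention: two empty masks count as perfect agreement
    return 2 * inter / total if total else 1.0
```

A Dice of 1.0 means the predicted and ground-truth masks coincide exactly; 0.0 means they are disjoint.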


Subject(s)
Algorithms , Deep Learning , Lung Neoplasms/diagnostic imaging , Radiography, Thoracic/methods , Solitary Pulmonary Nodule/diagnostic imaging , Adult , Aged , Aged, 80 and over , Datasets as Topic , Female , Humans , Male , Middle Aged , Neural Networks, Computer , Retrospective Studies , Sensitivity and Specificity
5.
Eur Heart J Digit Health ; 3(1): 20-28, 2022 Mar.
Article in English | MEDLINE | ID: mdl-36713993

ABSTRACT

Aims: We aimed to develop models to detect aortic stenosis (AS) from chest radiographs, one of the most basic imaging tests, with artificial intelligence. Methods and results: We used 10 433 retrospectively collected digital chest radiographs from 5638 patients to train, validate, and test three deep learning models. Chest radiographs were collected from patients who had also undergone echocardiography at a single institution between July 2016 and May 2019. These were labelled from the corresponding echocardiography assessments as AS-positive or AS-negative. The radiographs were separated on a patient basis into training [8327 images from 4512 patients, mean age 65 ± (standard deviation) 15 years], validation (1041 images from 563 patients, mean age 65 ± 14 years), and test (1065 images from 563 patients, mean age 65 ± 14 years) datasets. The soft voting-based ensemble of the three developed models had the best overall performance for predicting AS, with an area under the receiver operating characteristic curve, sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of 0.83 (95% confidence interval 0.77-0.88), 0.78 (0.67-0.86), 0.71 (0.68-0.73), 0.71 (0.68-0.74), 0.18 (0.14-0.23), and 0.97 (0.96-0.98), respectively, in the validation dataset, and 0.83 (0.78-0.88), 0.83 (0.74-0.90), 0.69 (0.66-0.72), 0.71 (0.68-0.73), 0.23 (0.19-0.28), and 0.97 (0.96-0.98), respectively, in the test dataset. Conclusion: Deep learning models using chest radiographs have the potential to differentiate between radiographs of patients with and without AS. Lay Summary: We created artificial intelligence (AI) models using deep learning to identify aortic stenosis (AS) from chest radiographs. Three AI models were developed and evaluated with 10 433 retrospectively collected radiographs labelled from echocardiography reports.
The ensemble AI model could detect AS in a test dataset with an area under the receiver operating characteristic curve of 0.83 (95% confidence interval 0.78-0.88). Since chest radiography is a cost-effective and widely available imaging test, our model can provide an additive resource for the detection of AS.
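Soft voting, the ensembling strategy named above, averages each case's predicted probabilities across the models before thresholding. A minimal sketch of that step, with hypothetical probabilities rather than the study's code:

```python
def soft_vote(model_probs, threshold=0.5):
    """model_probs: one list of per-case probabilities per model.
    Average across models per case, then threshold the mean."""
    means = [sum(case) / len(model_probs) for case in zip(*model_probs)]
    labels = [1 if p >= threshold else 0 for p in means]
    return means, labels
```

Because the averaged probabilities are retained, a ROC curve (and its AUC) can still be computed from the ensemble output before any threshold is applied.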

6.
BMC Cancer ; 21(1): 1120, 2021 Oct 18.
Article in English | MEDLINE | ID: mdl-34663260

ABSTRACT

BACKGROUND: We investigated the performance improvement of physicians with varying levels of chest radiology experience when using commercially available artificial intelligence (AI)-based computer-assisted detection (CAD) software to detect lung cancer nodules on chest radiographs from multiple vendors. METHODS: Chest radiographs and their corresponding chest CT scans were retrospectively collected from one institution between July 2017 and June 2018. Two radiologists from among the authors annotated pathologically proven lung cancer nodules on the chest radiographs while referencing CT. Eighteen readers (nine general physicians and nine radiologists) from nine institutions interpreted the chest radiographs. The readers first interpreted the radiographs alone and then reinterpreted them referencing the CAD output. Suspected nodules were enclosed with a bounding box. These bounding boxes were judged correct if there was significant overlap with the ground truth, specifically, if the intersection over union was 0.3 or higher. The sensitivity, specificity, accuracy, PPV, and NPV of the readers' assessments were calculated. RESULTS: In total, 312 chest radiographs were collected as a test dataset, including 59 malignant images (59 lung cancer nodules) and 253 normal images. The model provided a modest boost to the readers' sensitivity, particularly helping general physicians. The performance of general physicians improved from 0.47 to 0.60 for sensitivity, from 0.96 to 0.97 for specificity, from 0.87 to 0.90 for accuracy, from 0.75 to 0.82 for PPV, and from 0.89 to 0.91 for NPV, while the performance of radiologists improved from 0.51 to 0.60 for sensitivity, remained at 0.96 for specificity, and improved from 0.87 to 0.90 for accuracy, from 0.76 to 0.80 for PPV, and from 0.89 to 0.91 for NPV.
With CAD, the overall improvement ratios for sensitivity, specificity, accuracy, PPV, and NPV were 1.22 (1.14-1.30), 1.00 (1.00-1.01), 1.03 (1.02-1.04), 1.07 (1.03-1.11), and 1.02 (1.01-1.03), respectively. CONCLUSION: The AI-based CAD improved the ability of physicians to detect lung cancer nodules on chest radiographs. A CAD model can indicate regions physicians may have overlooked during their initial assessment.
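The intersection-over-union (IoU) criterion used to score the readers' bounding boxes can be sketched as follows. This is an illustrative implementation, with boxes assumed to be given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def is_correct(pred_box, truth_box, threshold=0.3):
    """Apply the study's 0.3-overlap acceptance criterion."""
    return iou(pred_box, truth_box) >= threshold
```

The relatively lenient 0.3 threshold credits a reader's mark that lands on the nodule without requiring a tight fit, which suits a detection (rather than localization) task.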


Subject(s)
Lung Neoplasms/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Solitary Pulmonary Nodule/diagnostic imaging , Adult , Aged , Aged, 80 and over , Deep Learning , Female , General Practitioners , Humans , Lung/diagnostic imaging , Male , Middle Aged , Radiologists , Retrospective Studies , Sensitivity and Specificity
7.
Radiology ; 299(3): 675-681, 2021 06.
Article in English | MEDLINE | ID: mdl-33787336

ABSTRACT

Background Digital subtraction angiography (DSA) generates an image by subtracting a mask image from a dynamic angiogram. However, misregistration artifacts caused by patient movement can result in unclear DSA images that interrupt procedures. Purpose To train and to validate a deep learning (DL)-based model that produces DSA-like cerebral angiograms directly from dynamic angiograms, and to evaluate these angiograms quantitatively and visually for clinical usefulness. Materials and Methods A retrospective model development and validation study was conducted on dynamic and DSA image pairs consecutively collected from January 2019 through April 2019. Angiograms showing misregistration were first separated per patient by two radiologists and sorted into the misregistration test data set. Nonmisregistration angiograms were divided into development and external test data sets at a ratio of 8:1 per patient. The development data set was divided into training and validation data sets at a ratio of 3:1 per patient. The DL model was created by using the training data set, tuned with the validation data set, and then evaluated quantitatively with the external test data set and visually with the misregistration test data set. Quantitative evaluations used the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) with mixed linear models. Visual evaluation was conducted by using a numerical rating scale. Results The training, validation, nonmisregistration test, and misregistration test data sets included 10 751, 2784, 1346, and 711 paired images, respectively, collected from 40 patients (mean age, 62 years ± 11 [standard deviation]; 33 women). In the quantitative evaluation, DL-generated angiograms showed a mean PSNR value of 40.2 dB ± 4.05 and a mean SSIM value of 0.97 ± 0.02, indicating high coincidence with the paired DSA images.
In the visual evaluation, the median ratings of the DL-generated angiograms were similar to or better than those of the original DSA images for all 24 sequences. Conclusion The deep learning-based model provided clinically useful cerebral angiograms free from clinically significant artifacts directly from dynamic angiograms. Published under a CC BY 4.0 license. Supplemental material is available for this article.
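PSNR, one of the two quantitative measures used, compares a generated image against its reference via the mean squared error. A minimal sketch on flattened 8-bit pixel lists (not the study's evaluation code, which would operate on full image arrays):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in decibels between two images
    given as flat lists of pixel values on a 0..max_val scale."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

Higher is better: values around 40 dB, as reported here, correspond to a very small mean squared error relative to the pixel range. SSIM is more involved (it compares local luminance, contrast, and structure), so library implementations such as scikit-image's are typically used.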


Subject(s)
Cerebral Angiography , Deep Learning , Image Enhancement/methods , Adult , Aged , Aged, 80 and over , Angiography, Digital Subtraction , Artifacts , Female , Humans , Image Processing, Computer-Assisted/methods , Male , Middle Aged , Retrospective Studies , Signal-To-Noise Ratio
8.
JCO Precis Oncol ; 5: 543-551, 2021 11.
Article in English | MEDLINE | ID: mdl-34994603

ABSTRACT

PURPOSE: The molecular subtype of breast cancer is an important component of establishing the appropriate treatment strategy. In clinical practice, molecular subtypes are determined by receptor expressions. In this study, we developed a deep learning (DL)-based model to determine receptor expressions from mammograms. METHODS: A developing data set and a test data set were generated from mammograms of the affected side of patients who were pathologically diagnosed with breast cancer from January 2006 through December 2016 and from January 2017 through December 2017, respectively. The developing data set was used to train and validate the DL-based model with five-fold cross-validation for classifying expression of estrogen receptor (ER), progesterone receptor (PgR), and human epidermal growth factor receptor 2-neu (HER2). The area under the curve (AUC) for each receptor was evaluated with the independent test data set. RESULTS: The developing data set and the test data set included 1,448 images (997 ER-positive and 386 ER-negative, 641 PgR-positive and 695 PgR-negative, and 220 HER2-enriched and 1,109 non-HER2-enriched) and 225 images (176 ER-positive and 40 ER-negative, 101 PgR-positive and 117 PgR-negative, and 53 HER2-enriched and 165 non-HER2-enriched), respectively. The AUC for ER-positive versus -negative in the test data set was 0.67 (0.58-0.76), the AUC for PgR-positive versus -negative was 0.61 (0.53-0.68), and the AUC for HER2-enriched versus non-HER2-enriched was 0.75 (0.68-0.82). CONCLUSION: The DL-based model effectively classified the receptor expressions from the mammograms. Applying the DL-based model to predict breast cancer classification with a noninvasive approach would provide additive value to patients.
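The reported AUCs are areas under the ROC curve, which for binary labels equal the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch with hypothetical scores, not the study's code:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form is O(n_pos * n_neg); production implementations (e.g. scikit-learn's `roc_auc_score`) use a rank-based equivalent that scales to large test sets.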


Subject(s)
Breast Neoplasms/diagnosis , Deep Learning , Receptor, ErbB-2/metabolism , Receptors, Estrogen/metabolism , Receptors, Progesterone/metabolism , Aged , Datasets as Topic , Female , Gene Expression , Humans , Mammography , Middle Aged , Models, Biological
9.
Jpn J Radiol ; 39(4): 333-340, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33200356

ABSTRACT

PURPOSE: To demonstrate how artificial intelligence (AI) can expand radiologists' capacity, we visualized the features of invasive ductal carcinomas (IDCs) that our algorithm, developed and validated for basic pathological classification on mammograms, had focused on. MATERIALS AND METHODS: IDC datasets were built using mammograms from patients diagnosed with IDCs from January 2006 to December 2017. The developing dataset was used to train and validate a VGG-16 deep learning (DL) network. The true positives (TPs) and accuracy of the algorithm were externally evaluated using the test dataset. A visualization technique was applied to the algorithm to determine which malignant findings it had identified on the mammograms. RESULTS: The datasets were split into a developing dataset (988 images) and a test dataset (131 images). The proposed algorithm diagnosed 62 TPs with an accuracy of 0.61-0.70. Visualization of the features on the mammograms revealed that the tubule-forming, solid, and scirrhous types of IDCs exhibited visible features in the mass surroundings, at the corners of the masses, and as architectural distortions, respectively. CONCLUSION: We successfully showed that the features isolated by a DL-based algorithm trained to classify IDCs were indeed those known to be associated with each pathology. Using AI can thus expand the capacity of radiologists through the discovery of previously unknown findings.


Subject(s)
Algorithms , Breast Neoplasms/diagnostic imaging , Carcinoma, Ductal, Breast/diagnostic imaging , Deep Learning , Mammography/methods , Adult , Aged , Aged, 80 and over , Breast Neoplasms/classification , Breast Neoplasms/pathology , Carcinoma, Ductal, Breast/classification , Carcinoma, Ductal, Breast/pathology , Female , Humans , Middle Aged
10.
Jpn J Radiol ; 37(1): 15-33, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30506448

ABSTRACT

Deep learning has found clinical applications not only in radiology but also in all other areas of medicine. This review provides a technical and clinical overview of deep learning in radiology. To give a practical understanding, deep learning techniques are divided into five categories: classification, object detection, semantic segmentation, image processing, and natural language processing. After a brief overview of technical network evolution, clinical applications based on deep learning are introduced. The clinical applications are then summarized to reveal a key feature of deep learning: its performance is highly dependent on the training and test datasets. The core technology of deep learning was developed through image classification tasks, and in the medical field, radiologists are specialists in exactly such tasks. Clinical applications based on deep learning could therefore be expected to contribute substantial improvements to radiology. By gaining a better understanding of the features of deep learning, radiologists could be expected to lead medical development.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Radiology/methods , Humans
11.
Radiology ; 290(1): 187-194, 2019 01.
Article in English | MEDLINE | ID: mdl-30351253

ABSTRACT

Purpose To develop and evaluate a supportive algorithm using deep learning for detecting cerebral aneurysms at time-of-flight MR angiography to provide a second assessment of images already interpreted by radiologists. Materials and Methods MR images reported by radiologists to contain aneurysms were extracted from four institutions for the period from November 2006 through October 2017. The images were divided into three data sets: training data set, internal test data set, and external test data set. The algorithm was constructed by deep learning with the training data set, and its sensitivity to detect aneurysms in the test data sets was evaluated. To find aneurysms that had been overlooked in the initial reports, two radiologists independently performed a blinded interpretation of aneurysm candidates detected by the algorithm. When there was disagreement, the final diagnosis was made in consensus. The number of newly detected aneurysms was also evaluated. Results The training data set, which provided training and validation data, included 748 aneurysms (mean size, 3.1 mm ± 2.0 [standard deviation]) from 683 examinations; 318 of these examinations were on male patients (mean age, 63 years ± 13) and 365 were on female patients (mean age, 64 years ± 13). Test data were provided by the internal test data set (649 aneurysms [mean size, 4.1 mm ± 3.2] in 521 examinations, including 177 male patients and 344 female patients with mean age of 66 years ± 12 and 67 years ± 13, respectively) and the external test data set (80 aneurysms [mean size, 4.1 mm ± 2.1] in 67 examinations, including 19 male patients and 48 female patients with mean age of 63 years ± 12 and 68 years ± 12, respectively). The sensitivity was 91% (592 of 649) and 93% (74 of 80) for the internal and external test data sets, respectively. 
The algorithm improved aneurysm detection in the internal and external test data sets by 4.8% (31 of 649) and 13% (10 of 80), respectively, compared with the initial reports. Conclusion A deep learning algorithm detected cerebral aneurysms in previously reported MR angiography examinations with high sensitivity and improved aneurysm detection compared with the initial reports. © RSNA, 2018 See also the editorial by Flanders in this issue.


Subject(s)
Deep Learning , Image Interpretation, Computer-Assisted/methods , Intracranial Aneurysm/diagnostic imaging , Magnetic Resonance Angiography/methods , Aged , Algorithms , Brain/diagnostic imaging , Female , Humans , Male , Middle Aged , Retrospective Studies