[Evaluation of Radiograph Accuracy in Skull X-ray Images Using Deep Learning].
Mitsutake, Hideyoshi; Watanabe, Haruyuki; Sakaguchi, Aya; Uchiyama, Kiyoshi; Lee, Yongbum; Hayashi, Norio; Shimosegawa, Masayuki; Ogura, Toshihiro.
Affiliation
  • Mitsutake H; Department of Radiological Technology, Teikyo University Hospital.
  • Watanabe H; School of Radiological Technology, Gunma Prefectural College of Health Sciences.
  • Sakaguchi A; School of Radiological Technology, Gunma Prefectural College of Health Sciences (Current address: Department of Radiological Technology, Seikei-kai Chiba Medical Center).
  • Uchiyama K; Department of Radiological Technology, Teikyo University Hospital.
  • Lee Y; School of Health Sciences, Faculty of Medicine, Niigata University.
  • Hayashi N; School of Radiological Technology, Gunma Prefectural College of Health Sciences.
  • Shimosegawa M; School of Radiological Technology, Gunma Prefectural College of Health Sciences.
  • Ogura T; School of Radiological Technology, Gunma Prefectural College of Health Sciences.
Article in Ja | MEDLINE | ID: mdl-35046219
ABSTRACT

PURPOSE:

Accurate positioning is essential in radiography, and maintaining image reproducibility is especially important for follow-up observations. The decision to retake a radiograph is entrusted to the individual radiological technologist; the assessment is visual and qualitative, and acceptance criteria vary between individuals. In this study, we propose an image-evaluation method for skull X-ray images using a deep convolutional neural network (DCNN).

METHOD:

Radiographs were obtained from five skull phantoms and classified with a simple (shallow) network and VGG16. The discrimination ability of the DCNNs was verified by recognizing the X-ray projection angle and identifying radiographs that required retaking. The DCNN architectures were trained with different input image sizes and evaluated by 5-fold cross-validation and leave-one-out cross-validation.
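
A minimal sketch of the kind of comparison described above (a shallow CNN versus VGG16, evaluated with 5-fold cross-validation) follows. The input image sizes, layer configuration, and training settings are assumptions for illustration only, since the abstract does not specify them; this is not the authors' exact implementation.

    # Hypothetical sketch: shallow CNN vs. VGG16 for 4-class projection-angle
    # classification, evaluated with 5-fold cross-validation.
    # Image sizes, layer counts, and training settings are assumptions.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from sklearn.model_selection import StratifiedKFold

    NUM_CLASSES = 4            # four projection-angle categories (per the abstract)
    SMALL_SIZE = (64, 64)      # assumed "small" input size
    GENERAL_SIZE = (224, 224)  # assumed "general" input size (VGG16 default)

    def build_simple_cnn(input_shape):
        """A shallow DCNN: two conv blocks followed by a small dense head."""
        return models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(16, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    def build_vgg16(input_shape):
        """VGG16 backbone (trained from scratch) with a 4-class head."""
        base = tf.keras.applications.VGG16(
            include_top=False, weights=None, input_shape=input_shape)
        return models.Sequential([
            base,
            layers.GlobalAveragePooling2D(),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])

    def cross_validate(build_fn, x, y, input_shape, n_splits=5):
        """Return the mean n-fold classification accuracy for one architecture."""
        skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
        accuracies = []
        for train_idx, test_idx in skf.split(x, y):
            model = build_fn(input_shape)
            model.compile(optimizer="adam",
                          loss="sparse_categorical_crossentropy",
                          metrics=["accuracy"])
            model.fit(x[train_idx], y[train_idx],
                      epochs=20, batch_size=16, verbose=0)
            _, acc = model.evaluate(x[test_idx], y[test_idx], verbose=0)
            accuracies.append(acc)
        return float(np.mean(accuracies))

For example, cross_validate(build_simple_cnn, images_small, labels, SMALL_SIZE + (1,)) would give the mean fold accuracy of the shallow network on small single-channel inputs, where images_small and labels are hypothetical arrays of resized phantom radiographs and their projection-angle classes.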

RESULT:

With 5-fold cross-validation and the small input image size, the classification accuracy was 99.75% for the simple network and 80.00% for VGG16; with the general input image size, the simple network and VGG16 achieved 79.58% and 80.00%, respectively.

CONCLUSION:

The experimental results showed that the combination of a small input image size and a shallow DCNN architecture was suitable for the four-category classification of X-ray projection angles, with a classification accuracy of up to 99.75%. The proposed method has the potential to automatically recognize slight differences in projection angle and to judge whether an image meets the acceptance criteria or must be retaken. We consider that the proposed method can contribute to feedback on retake decisions and reduce the radiation dose caused by individual subjectivity.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Deep Learning Type of study: Diagnostic_studies / Prognostic_studies / Qualitative_research Language: Ja Journal: Nihon Hoshasen Gijutsu Gakkai Zasshi Year: 2022 Document type: Article
