Ensemble Deep Learning for Cervix Image Selection toward Improving Reliability in Automated Cervical Precancer Screening.
Guo, Peng; Xue, Zhiyun; Mtema, Zac; Yeates, Karen; Ginsburg, Ophira; Demarco, Maria; Long, L Rodney; Schiffman, Mark; Antani, Sameer.
Affiliation
  • Guo P; Communications Engineering Branch, Lister Hill National Center for Biomedical Communications, U.S. National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA.
  • Xue Z; Communications Engineering Branch, Lister Hill National Center for Biomedical Communications, U.S. National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA.
  • Mtema Z; Ifakara Health Institute, P.O. Box 53, Ifakara, Tanzania.
  • Yeates K; Department of Medicine, Queen's University, Kingston, ON K7L3N6, Canada.
  • Ginsburg O; School of Global Public Health, New York University, New York, NY 10012, USA.
  • Demarco M; Pamoja Tunaweza Research Centre, Moshi, Tanzania.
  • Long LR; Perlmutter Cancer Center, NYU Langone Health, New York, NY 10016, USA.
  • Schiffman M; Division of Cancer Epidemiology and Genetics (DCEG), National Cancer Institute, Bethesda, MD 20892, USA.
  • Antani S; Communications Engineering Branch, Lister Hill National Center for Biomedical Communications, U.S. National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA.
Diagnostics (Basel); 10(7), 2020 Jul 03.
Article in En | MEDLINE | ID: mdl-32635269
ABSTRACT
Automated Visual Examination (AVE) is a deep learning algorithm that aims to improve the effectiveness of cervical precancer screening, particularly in low- and medium-resource regions. It was trained on data from a large longitudinal study conducted by the National Cancer Institute (NCI) and has been shown to accurately identify cervices with early stages of cervical neoplasia for clinical evaluation and treatment. The algorithm processes images of the uterine cervix taken with a digital camera and alerts the user if the woman is a candidate for further evaluation. This requires that the algorithm be presented with images of the cervix, the object of interest, of acceptable quality: in sharp focus, well illuminated, free of shadows and other occlusions, and showing the entire squamo-columnar transformation zone. Our prior work addressed some of these constraints by helping discard images that do not meet these criteria. Non-cervix or otherwise inadequate images could lead to suboptimal or wrong results, and their manual removal is labor intensive and time-consuming, particularly when working with large retrospective collections acquired without adequate quality control. In this work, we present a novel ensemble deep learning method that determines whether an image shows the cervix to a sufficient extent, separating cervix from non-cervix images in a smartphone-acquired cervical image dataset. The ensemble combines the assessments of three deep learning architectures, RetinaNet, Deep SVDD, and a customized CNN (Convolutional Neural Network), each using a different strategy to arrive at its decision: object detection, one-class classification, and binary classification, respectively. We examined the performance of each individual architecture and of the ensemble of all three. On a separate test dataset of more than 30,000 smartphone-captured images, the ensemble achieved an average accuracy of 91.6% and an F1 score of 0.890.
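The abstract does not specify how the three architectures' decisions are fused. As a minimal sketch, assuming each model is reduced to a per-image binary decision (cervix / non-cervix) and that a simple majority vote combines them (an illustrative assumption, not the authors' published rule), the ensemble logic might look like the following Python; all function names and thresholds here are hypothetical stand-ins, not the paper's implementation.

```python
from typing import Callable, Sequence

# Each ensemble member reduces to a binary decision: does the image show an
# adequate cervix view? Per the abstract, RetinaNet frames this as object
# detection, Deep SVDD as one-class classification, and a custom CNN as
# binary classification. The majority-vote fusion below is an assumption;
# the abstract does not state the actual combination rule.

Decision = Callable[[object], bool]  # image -> is_cervix

def majority_vote(models: Sequence[Decision], image: object) -> bool:
    """Accept the image as 'cervix' if more than half of the models agree."""
    votes = sum(1 for m in models if m(image))
    return votes * 2 > len(models)

# Usage sketch with stand-in deciders (real deciders would wrap trained
# networks and apply score thresholds to their raw outputs):
if __name__ == "__main__":
    fake_image = object()
    retinanet = lambda img: True    # e.g., detector found a cervix box above a score threshold
    deep_svdd = lambda img: False   # e.g., distance to the learned hypersphere center too large
    custom_cnn = lambda img: True   # e.g., softmax probability of 'cervix' above 0.5
    print(majority_vote([retinanet, deep_svdd, custom_cnn], fake_image))  # True (2 of 3 votes)
```

A vote over heterogeneous decision strategies is one way such an ensemble can tolerate a single model's failure mode (e.g., a detector missing a partially occluded cervix) while keeping the final output a simple accept/reject flag.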
Full text: 1 Collection: 01-internacional Database: MEDLINE Type of study: Diagnostic_studies / Guideline / Observational_studies / Prognostic_studies / Screening_studies Language: En Journal: Diagnostics (Basel) Year: 2020 Type: Article Affiliation country: United States