Deep Learning for Classification and Selection of Cine CMR Images to Achieve Fully Automated Quality-Controlled CMR Analysis From Scanner to Report.
Vergani, Vittoria; Razavi, Reza; Puyol-Antón, Esther; Ruijsink, Bram.
Affiliations
  • Vergani V; School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom.
  • Razavi R; School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom.
  • Puyol-Antón E; Department of Adult and Paediatric Cardiology, Guy's and St. Thomas' NHS Foundation Trust, London, United Kingdom.
  • Ruijsink B; School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom.
Front Cardiovasc Med ; 8: 742640, 2021.
Article in En | MEDLINE | ID: mdl-34722674
Introduction: Deep learning demonstrates great promise for automated analysis of CMR. However, existing limitations, such as insufficient quality control and the lack of automated selection of target acquisitions from the full CMR exam, are holding back the introduction of deep learning tools in the clinical environment. This study aimed to develop a framework for automated detection and quality-controlled selection of standard cine sequence images from clinical CMR exams, prior to analysis of cardiac function.

Materials and Methods: Retrospective study of 3,827 subjects who underwent CMR imaging. We used a total of 119,285 CMR acquisitions, acquired with scanners of different magnetic field strengths and from different vendors (1.5T Siemens and 1.5T and 3.0T Philips). We developed a framework to select one good acquisition for each conventional cine class. The framework consisted of an initial pre-processing step to exclude still acquisitions; two sequential convolutional neural networks (CNNs), the first (CNNclass) to classify acquisitions into standard cine views (2/3/4-chamber and short axis), the second (CNNQC) to classify acquisitions according to image quality and orientation; and a final algorithm to select one good acquisition of each class. For each CNN component, 7 state-of-the-art architectures were trained for 200 epochs, with cross-entropy loss and data augmentation. Data were divided into 80% for training, 10% for validation, and 10% for testing.

Results: CNNclass selected cine CMR acquisitions with accuracy ranging from 0.989 to 0.998. Accuracy of CNNQC reached 0.861 for 2-chamber, 0.806 for 3-chamber, and 0.859 for 4-chamber. The complete framework was presented with 379 new full CMR studies, not used for CNN training/validation/testing, and selected one good 2-, 3-, and 4-chamber acquisition from each study, with sensitivity to detect erroneous cases of 89.7, 93.2, and 93.9%, respectively.

Conclusions: We developed an accurate quality-controlled framework for automated selection of cine acquisitions prior to image analysis. The framework is robust and generalizable, as it was developed on multivendor data, and could be used at the beginning of a pipeline for automated cine CMR analysis to achieve full automation from scanner to report.
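The abstract describes a two-stage pipeline (view classification followed by quality control, then selection of one good acquisition per view). The sketch below illustrates that flow under stated assumptions: the backbone (ResNet-18), label sets, and the 0.5 quality threshold are illustrative choices, not the authors' reported implementation, and the seven architectures compared in the paper are not named here.

```python
# Minimal sketch of the two-stage selection pipeline described in the abstract.
# Assumptions: ResNet-18 backbone, label sets, and quality threshold are
# illustrative only; still acquisitions are assumed already excluded upstream.
import torch
import torch.nn as nn
from torchvision.models import resnet18

VIEW_CLASSES = ["2ch", "3ch", "4ch", "sax", "other"]        # assumed label set
QC_CLASSES = ["good", "poor_quality_or_orientation"]         # assumed label set

def build_cnn(num_classes: int) -> nn.Module:
    """One candidate architecture; ResNet-18 is an assumption, not from the paper."""
    model = resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

cnn_class = build_cnn(len(VIEW_CLASSES))  # CNNclass: cine view classification
cnn_qc = build_cnn(len(QC_CLASSES))       # CNNQC: image quality / orientation check

@torch.no_grad()
def select_acquisitions(acquisitions):
    """Pick one good acquisition per standard cine view.

    `acquisitions` yields (acq_id, frame) pairs, where `frame` is a
    pre-processed tensor of shape (1, 3, H, W).
    """
    cnn_class.eval()
    cnn_qc.eval()
    selected = {}  # view -> (acq_id, confidence that it is "good")
    for acq_id, frame in acquisitions:
        view_probs = cnn_class(frame).softmax(dim=1)
        view = VIEW_CLASSES[int(view_probs.argmax())]
        if view == "other":
            continue  # not one of the target cine views
        qc_probs = cnn_qc(frame).softmax(dim=1)
        good_prob = float(qc_probs[0, QC_CLASSES.index("good")])
        # Keep the highest-confidence "good" acquisition seen for each view.
        if good_prob > 0.5 and good_prob > selected.get(view, (None, 0.0))[1]:
            selected[view] = (acq_id, good_prob)
    return {view: acq_id for view, (acq_id, _) in selected.items()}
```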
Full text: 1 | Database: MEDLINE | Study type: Observational study | Language: En | Year of publication: 2021 | Document type: Article