Multibeat echocardiographic phase detection using deep neural networks.
Lane, Elisabeth S; Azarmehr, Neda; Jevsikov, Jevgeni; Howard, James P; Shun-Shin, Matthew J; Cole, Graham D; Francis, Darrel P; Zolgharni, Massoud.
Affiliation
  • Lane ES; School of Computing and Engineering, University of West London, London, United Kingdom. Electronic address: Elisabeth.Lane@uwl.ac.uk.
  • Azarmehr N; National Heart and Lung Institute, Imperial College, London, United Kingdom.
  • Jevsikov J; School of Computing and Engineering, University of West London, London, United Kingdom.
  • Howard JP; National Heart and Lung Institute, Imperial College, London, United Kingdom.
  • Shun-Shin MJ; National Heart and Lung Institute, Imperial College, London, United Kingdom.
  • Cole GD; National Heart and Lung Institute, Imperial College, London, United Kingdom.
  • Francis DP; National Heart and Lung Institute, Imperial College, London, United Kingdom.
  • Zolgharni M; School of Computing and Engineering, University of West London, London, United Kingdom; National Heart and Lung Institute, Imperial College, London, United Kingdom.
Comput Biol Med; 133: 104373, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33857775
BACKGROUND: Accurate identification of end-diastolic (ED) and end-systolic (ES) frames in echocardiographic cine loops is important, yet challenging, even for human experts. Manual frame selection is subject to uncertainty, which affects crucial clinical measurements such as myocardial strain. The ability to detect frames of interest automatically is therefore highly desirable.

METHODS: We developed deep neural networks, trained and tested on multi-centre patient data, for the accurate identification of ED and ES frames in apical four-chamber 2D multibeat cine loop recordings of arbitrary length. Seven experienced cardiologists independently labelled the frames of interest, providing reliable reference annotations and allowing inter-observer variability to be measured.

RESULTS: Compared with the ground truth, our model shows an average frame difference of -0.09 ± 1.10 frames for ED and 0.11 ± 1.29 frames for ES. When applied to patient datasets from a different clinical site, to which the model was blind during its development, average frame differences of -1.34 ± 3.27 (ED) and -0.31 ± 3.37 (ES) frames were obtained. All detection errors fall within the range of inter-observer variability: [-0.87, -5.51] ± [2.29, 4.26] frames for ED and [-0.97, -3.46] ± [3.67, 4.68] frames for ES events, respectively.

CONCLUSIONS: The proposed automated model can identify multiple end-systolic and end-diastolic frames in echocardiographic videos of arbitrary length with performance indistinguishable from that of human experts, but with significantly shorter processing time.
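The frame-difference metric reported above (mean ± standard deviation of predicted minus expert-annotated frame indices) can be sketched as follows. This is purely illustrative, not the authors' code; the function name and the sample frame indices are hypothetical.

```python
# Illustrative sketch: the signed frame-difference metric, i.e.
# mean ± SD of (predicted frame index - annotated frame index) per beat.
import statistics

def frame_difference_stats(predicted, annotated):
    """Return (mean, sample standard deviation) of signed frame differences."""
    diffs = [p - a for p, a in zip(predicted, annotated)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical end-diastolic frame indices for five beats in one cine loop.
predicted_ed = [12, 45, 78, 110, 143]
annotated_ed = [12, 46, 77, 111, 143]

mean_diff, sd_diff = frame_difference_stats(predicted_ed, annotated_ed)
print(f"ED frame difference: {mean_diff:+.2f} ± {sd_diff:.2f} frames")
```

A negative mean indicates the model tends to pick frames slightly earlier than the experts; the SD captures the spread of disagreement, which the abstract compares against inter-observer variability.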
Subject(s)
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Echocardiography / Neural Networks, Computer Study type: Diagnostic_studies / Guideline Limits: Humans Language: En Journal: Comput Biol Med Year: 2021 Document type: Article Country of publication: United States
