Making Artificial Intelligence Lemonade Out of Data Lemons: Adaptation of a Public Apical Echo Database for Creation of a Subxiphoid Visual Estimation Automatic Ejection Fraction Machine Learning Algorithm.
Blaivas, Michael; Blaivas, Laura N; Campbell, Kendra; Thomas, Joseph; Shah, Sonia; Yadav, Kabir; Liu, Yiju Teresa.
Affiliation
  • Blaivas M; Department of Medicine, University of South Carolina School of Medicine, Columbia, SC, USA.
  • Blaivas LN; Department of Emergency Medicine, St. Francis Hospital, Columbus, GA, USA.
  • Campbell K; Michigan State University, East Lansing, MI, USA.
  • Thomas J; Department of Emergency Medicine, Harbor-UCLA Medical Center, Torrance, CA, USA.
  • Shah S; Department of Cardiology, Harbor-UCLA Medical Center, Torrance, CA, USA.
  • Yadav K; David Geffen School of Medicine at UCLA, Los Angeles, CA, USA.
  • Liu YT; Department of Cardiology, Harbor-UCLA Medical Center, Torrance, CA, USA.
J Ultrasound Med ; 41(8): 2059-2069, 2022 Aug.
Article in En | MEDLINE | ID: mdl-34820867
ABSTRACT

OBJECTIVES:

A paucity of point-of-care ultrasound (POCUS) databases limits machine learning (ML). To assess the feasibility of training ML algorithms to visually estimate left ventricular ejection fraction (EF) from a subxiphoid (SX) window using only apical 4-chamber (A4C) images.

METHODS:

Researchers used a long short-term memory (LSTM) algorithm for image analysis. Using the Stanford EchoNet-Dynamic database of 10,036 A4C videos with exact calculated EF, researchers tested three ML training permutations: first, training on unaltered Stanford A4C videos; then on unaltered and 90° clockwise (CW) rotated videos; and finally on unaltered, 90° CW rotated, and horizontally flipped videos. As a real-world test, researchers obtained 615 SX videos from Harbor-UCLA (HUCLA) with EF calculated in 5% ranges. To compensate for the mismatch between ML point estimates and HUCLA EF ranges, researchers performed 1000 randomizations of EF point estimation within the HUCLA ranges, obtained mean absolute error (MAE) values for comparison, and performed Bland-Altman analyses.
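The augmentation scheme described above (unaltered, 90° CW rotated, and horizontally flipped clips) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the function name and the (frames, height, width) array layout are assumptions.

```python
import numpy as np

def augment_frames(frames: np.ndarray):
    """Yield the three training variants named in the abstract for one
    echo clip: unaltered, rotated 90 degrees clockwise, and flipped
    horizontally. `frames` is assumed to be a (T, H, W) grayscale clip;
    this sketch is illustrative, not the authors' actual code."""
    # Unaltered A4C clip
    yield frames
    # np.rot90 rotates counter-clockwise by default, so k=-1 gives the
    # 90-degree clockwise rotation in the image plane (axes 1 and 2)
    yield np.rot90(frames, k=-1, axes=(1, 2))
    # Horizontal (left-right) flip along the width axis
    yield frames[:, :, ::-1]
```

Rotating and flipping the A4C view coarsely mimics the orientation of the heart as seen from a subxiphoid window, which is the adaptation the study tests.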

RESULTS:

The ML algorithm's mean MAE for EF was 23.0 (range 22.8-23.3) when trained on unaltered A4C video, 16.7 (range 16.5-16.9) when trained on unaltered and 90° CW rotated video, and 16.6 (range 16.3-16.8) when trained on unaltered, 90° CW rotated, and horizontally flipped video. Bland-Altman analysis showed the weakest agreement at 40-45% EF.
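Because the HUCLA reference EFs were reported only as 5%-wide ranges, the MAE comparison above rests on repeated random point estimation within each range. A sketch of that procedure, under the assumption of uniform sampling (the abstract does not state the distribution; the function name and signature are hypothetical):

```python
import numpy as np

def randomized_mae(pred_ef, ef_low, ef_high, n_draws=1000, seed=0):
    """For each of n_draws randomizations, draw a point EF uniformly
    within each clip's reported range, compute the mean absolute error
    (MAE) of the model's predictions against those points, and return
    the mean MAE plus its min-max range across draws. Illustrative
    sketch only; uniform sampling is an assumption."""
    rng = np.random.default_rng(seed)
    pred_ef = np.asarray(pred_ef, dtype=float)
    ef_low = np.asarray(ef_low, dtype=float)
    ef_high = np.asarray(ef_high, dtype=float)
    maes = np.empty(n_draws)
    for i in range(n_draws):
        truth = rng.uniform(ef_low, ef_high)  # one point per clip
        maes[i] = np.abs(pred_ef - truth).mean()
    return maes.mean(), (maes.min(), maes.max())
```

Reporting both the mean MAE and its range across the 1000 draws, as the abstract does, shows how sensitive the comparison is to where the true EF falls within each 5% bin.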

CONCLUSIONS:

Researchers successfully adapted data from an unrelated ultrasound window to train a POCUS ML algorithm with fair MAE, using data manipulation to simulate a different ultrasound examination. This approach may be important for future POCUS algorithm design, helping to overcome the paucity of POCUS databases.

Full text: 1 Collection: 01-internacional Database: MEDLINE Main subject: Artificial Intelligence / Ventricular Function, Left Type of study: Clinical_trials / Prognostic_studies Limits: Humans Language: En Journal: J Ultrasound Med Year: 2022 Type: Article Affiliation country: United States