1.
Article in English | MEDLINE | ID: mdl-39152959

ABSTRACT

BACKGROUND: Considering the high prevalence of mitral regurgitation (MR) and the highly subjective, variable MR severity reporting, an automated tool that could screen patients for clinically significant MR (≥ moderate) would streamline the diagnostic/therapeutic pathways and ultimately improve patient outcomes.
OBJECTIVES: The authors aimed to develop and validate a fully automated machine learning (ML)-based echocardiography workflow for grading MR severity.
METHODS: ML algorithms were trained on echocardiograms from 2 observational cohorts and validated in patients from 2 additional independent studies. Multiparametric echocardiography core laboratory MR assessment served as ground truth. The machine was trained to measure 16 MR-related parameters. Multiple ML models were developed to find the optimal parameters and preferred ML model for MR severity grading.
RESULTS: The preferred ML model used 9 parameters. Image analysis was feasible in 99.3% of cases and took 80 ± 5 seconds per case. The accuracy for grading MR severity (none to severe) was 0.80, and for significant (moderate or severe) vs nonsignificant MR was 0.97, with a sensitivity of 0.96 and specificity of 0.98. The model performed similarly in cases of eccentric and central MR. Patients graded as having severe MR had higher 1-year mortality (adjusted HR: 5.20 [95% CI: 1.24-21.9]; P = 0.025 compared with mild).
CONCLUSIONS: An automated multiparametric ML model for grading MR severity is feasible, fast, highly accurate, and predicts 1-year mortality. Its implementation in clinical practice could improve patient care by facilitating referral to specialized clinics and access to evidence-based therapies while improving quality and efficiency in the echocardiography laboratory.
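The abstract does not describe the model internals. As a rough illustration of how the reported screening statistics for significant (≥ moderate) vs nonsignificant MR relate to a binary confusion matrix, a minimal Python sketch with hypothetical labels and predictions follows; it is not the study's code.

```python
# Minimal sketch: accuracy, sensitivity, and specificity for a binary
# "significant MR (>= moderate) vs nonsignificant" screening task.
# Labels and predictions below are hypothetical, for illustration only.
import numpy as np

def screening_metrics(y_true, y_pred):
    """y_true / y_pred: 1 = significant MR, 0 = nonsignificant MR."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # reported as 0.96 in the study
        "specificity": tn / (tn + fp),  # reported as 0.98 in the study
    }

# Hypothetical example
print(screening_metrics([1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 0]))
```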

2.
Eur Heart J Digit Health ; 5(1): 60-68, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38264705

ABSTRACT

Aims: Echocardiographic strain imaging reflects myocardial deformation and is a sensitive measure of cardiac function and wall-motion abnormalities. Deep learning (DL) algorithms could automate the interpretation of echocardiographic strain imaging.
Methods and results: We developed and trained an automated DL-based algorithm for left ventricular (LV) strain measurements in an internal dataset. Global longitudinal strain (GLS) was validated externally in (i) a real-world Taiwanese cohort of participants with and without heart failure (HF), (ii) a core-lab measured dataset from the multinational prevalence of microvascular dysfunction-HF and preserved ejection fraction (PROMIS-HFpEF) study, and regional strain in (iii) the HMC-QU-MI study of patients with suspected myocardial infarction. Outcomes included measures of agreement [bias, mean absolute difference (MAD), root-mean-squared error (RMSE), and Pearson's correlation (R)] and area under the curve (AUC) to identify HF and regional wall-motion abnormalities. The DL workflow successfully analysed 3741 (89%) studies in the Taiwanese cohort, 176 (96%) in PROMIS-HFpEF, and 158 (98%) in HMC-QU-MI. Automated GLS showed good agreement with manual measurements (mean ± SD): -18.9 ± 4.5% vs. -18.2 ± 4.4%, respectively, bias 0.68 ± 2.52%, MAD 2.0 ± 1.67, RMSE = 2.61, R = 0.84 in the Taiwanese cohort; and -15.4 ± 4.1% vs. -15.9 ± 3.6%, respectively, bias -0.65 ± 2.71%, MAD 2.19 ± 1.71, RMSE = 2.78, R = 0.76 in PROMIS-HFpEF. In the Taiwanese cohort, automated GLS accurately identified patients with HF (AUC = 0.89 for total HF and AUC = 0.98 for HF with reduced ejection fraction). In HMC-QU-MI, automated regional strain identified regional wall-motion abnormalities with an average AUC = 0.80.
Conclusion: DL algorithms can interpret echocardiographic strain images with accuracy similar to conventional measurements. These results highlight the potential of DL algorithms to democratize the use of cardiac strain measurements and reduce time spent and costs for echo labs globally.
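As a rough illustration of the agreement statistics cited above (bias, MAD, RMSE, Pearson's R) for automated vs manual GLS, a minimal Python sketch with hypothetical GLS values follows; it is not part of the study's analysis code.

```python
# Minimal sketch of agreement metrics between automated and manual GLS.
# The GLS values below are hypothetical; they only show how the statistics
# (bias, MAD, RMSE, Pearson's R) are defined.
import numpy as np

def agreement_metrics(automated, manual):
    automated = np.asarray(automated, dtype=float)
    manual = np.asarray(manual, dtype=float)
    diff = automated - manual
    return {
        "bias": diff.mean(),                        # mean difference
        "bias_sd": diff.std(ddof=1),                # reported as bias +/- SD
        "mad": np.abs(diff).mean(),                 # mean absolute difference
        "rmse": np.sqrt((diff ** 2).mean()),        # root-mean-squared error
        "r": np.corrcoef(automated, manual)[0, 1],  # Pearson's correlation
    }

# Hypothetical GLS values (%) for a handful of studies
auto_gls = [-19.2, -17.5, -21.0, -14.8, -18.3]
manual_gls = [-18.6, -17.9, -20.1, -15.5, -18.0]
print(agreement_metrics(auto_gls, manual_gls))
```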

3.
J Am Soc Echocardiogr ; 36(7): 769-777, 2023 07.
Article in English | MEDLINE | ID: mdl-36958708

ABSTRACT

BACKGROUND: Aortic stenosis (AS) is a common form of valvular heart disease, present in over 12% of the population aged 75 years and above. Transthoracic echocardiography (TTE) is the first line of imaging in the adjudication of AS severity but is time-consuming and requires expert sonographic and interpretation capabilities to yield accurate results. Artificial intelligence (AI) technology has emerged as a useful tool to address these limitations but has not yet been applied in a fully hands-off manner to evaluate AS. Here, we correlate artificial neural network measurements of key hemodynamic AS parameters with experienced human reader assessment.
METHODS: Two-dimensional and Doppler echocardiographic images from patients with normal aortic valves and all degrees of AS were analyzed by an artificial neural network (Us2.ai) with no human input to measure key variables in AS assessment. Trained echocardiographers blinded to the AI data performed manual measurements of these variables, and correlation analyses were performed.
RESULTS: Our cohort included 256 patients with an average age of 67.6 ± 9.5 years. Across all AS severities, AI closely matched human measurement of aortic valve peak velocity (r = 0.97, P < .001), mean pressure gradient (r = 0.94, P < .001), aortic valve area by the continuity equation (r = 0.88, P < .001), stroke volume index (r = 0.79, P < .001), left ventricular outflow tract velocity-time integral (r = 0.89, P < .001), aortic valve velocity-time integral (r = 0.96, P < .001), and left ventricular outflow tract diameter (r = 0.76, P < .001).
CONCLUSIONS: Artificial neural networks can closely mimic human measurement of all relevant parameters in the adjudication of AS severity. Application of this AI technology may minimize interscan variability, improve interpretation and diagnosis of AS, and allow for precise and reproducible identification and management of patients with AS.
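The continuity-equation calculation of aortic valve area referenced above combines the LVOT diameter, LVOT velocity-time integral, and aortic valve velocity-time integral listed among the measured parameters. A minimal Python sketch with hypothetical measurements follows, purely to illustrate the standard formula; it is not the Us2.ai implementation.

```python
# Minimal sketch of the continuity equation for aortic valve area (AVA):
# AVA (cm^2) = LVOT cross-sectional area * LVOT VTI / AV VTI.
# Input values below are hypothetical, for illustration only.
import math

def aortic_valve_area(lvot_diameter_cm, lvot_vti_cm, av_vti_cm):
    """Return AVA in cm^2 from LVOT diameter (cm) and the two VTIs (cm)."""
    lvot_csa = math.pi * (lvot_diameter_cm / 2.0) ** 2  # circular LVOT assumed
    return lvot_csa * lvot_vti_cm / av_vti_cm

# Hypothetical measurements: 2.1 cm LVOT diameter, 20 cm LVOT VTI, 50 cm AV VTI
print(f"AVA = {aortic_valve_area(2.1, 20.0, 50.0):.2f} cm^2")
```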


Subject(s)
Aortic Valve Stenosis , Artificial Intelligence , Humans , Middle Aged , Aged , Aortic Valve Stenosis/diagnostic imaging , Echocardiography/methods , Doppler Echocardiography , Aortic Valve/diagnostic imaging
4.
Lancet Digit Health ; 4(1): e46-e54, 2022 01.
Article in English | MEDLINE | ID: mdl-34863649

ABSTRACT

BACKGROUND: Echocardiography is the diagnostic modality for assessing cardiac systolic and diastolic function to diagnose and manage heart failure. However, manual interpretation of echocardiograms can be time-consuming and subject to human error. Therefore, we developed a fully automated deep learning workflow to classify, segment, and annotate two-dimensional (2D) videos and Doppler modalities in echocardiograms.
METHODS: We developed the workflow using a training dataset of 1145 echocardiograms and an internal test set of 406 echocardiograms from the prospective heart failure research platform (Asian Network for Translational Research and Cardiovascular Trials; ATTRaCT) in Asia, with previous manual tracings by expert sonographers. We validated the workflow against manual measurements in a curated dataset from Canada (Alberta Heart Failure Etiology and Analysis Research Team; HEART; n=1029 echocardiograms), a real-world dataset from Taiwan (n=31 241), the US-based EchoNet-Dynamic dataset (n=10 030), and in an independent prospective assessment of the Asian (ATTRaCT) and Canadian (Alberta HEART) datasets (n=142) with repeated independent measurements by two expert sonographers.
FINDINGS: In the ATTRaCT test set, the automated workflow classified 2D videos and Doppler modalities with accuracies (number of correct predictions divided by the total number of predictions) ranging from 0·91 to 0·99. Segmentations of the left ventricle and left atrium were accurate, with a mean Dice similarity coefficient greater than 93% for all. In the external datasets (n=1029 to 10 030 echocardiograms used as input), automated measurements showed good agreement with locally measured values, with a mean absolute error range of 9-25 mL for left ventricular volumes, 6-10% for left ventricular ejection fraction (LVEF), and 1·8-2·2 for the ratio of the mitral inflow E wave to the tissue Doppler e' wave (E/e' ratio); and reliably classified systolic dysfunction (LVEF <40%, area under the receiver operating characteristic curve [AUC] range 0·90-0·92) and diastolic dysfunction (E/e' ratio ≥13, AUC range 0·91-0·91), with narrow 95% CIs for AUC values. Independent prospective evaluation confirmed less variance of automated compared with human expert measurements, with individual equivalence coefficients less than 0 for all measurements.
INTERPRETATION: Deep learning algorithms can automatically annotate 2D videos and Doppler modalities with accuracy similar to manual measurements by expert sonographers. Use of an automated workflow might accelerate access, improve quality, and reduce costs in diagnosing and managing heart failure globally.
FUNDING: A*STAR Biomedical Research Council and A*STAR Exploit Technologies.
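As a rough illustration of the Dice similarity coefficient used above to score the left ventricle and left atrium segmentations (reported mean greater than 93%), a minimal Python sketch with small hypothetical masks follows; it does not reproduce the study's segmentation pipeline.

```python
# Minimal sketch of the Dice similarity coefficient for binary segmentation
# masks: Dice = 2 * |A intersect B| / (|A| + |B|). Masks are hypothetical.
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Return the Dice coefficient between two binary masks of equal shape."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Hypothetical 4x4 predicted and reference masks
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
true = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 1],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
print(f"Dice = {dice_coefficient(pred, true):.3f}")
```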


Subject(s)
Cardiovascular Diseases/diagnostic imaging , Deep Learning , Echocardiography/methods , Heart/diagnostic imaging , Computer-Assisted Image Interpretation/methods , Cohort Studies , Humans