Results 1 - 3 of 3
1.
Diseases ; 12(2)2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38391782

ABSTRACT

BACKGROUND: Automated rhythm detection on echocardiography through artificial intelligence (AI) has yet to be fully realized. We propose an AI model trained to identify atrial fibrillation (AF) using apical 4-chamber (AP4) cines without requiring electrocardiogram (ECG) data. METHODS: Transthoracic echocardiography studies of consecutive patients ≥ 18 years old at our tertiary care centre were retrospectively reviewed for AF and sinus rhythm. Each study was first interpreted by level III-trained echocardiography cardiologists, whose rhythm diagnosis based on the ECG rhythm strip and imaging assessment served as the gold standard and was also verified with a 12-lead ECG around the time of the study. AP4 cines with three cardiac cycles were then extracted from these studies, with the rhythm strip and Doppler information removed, and introduced to the deep learning model ResNet(2+1)D with an 80:10:10 training-validation-test split. RESULTS: 634 patient studies (1205 cines) were included. After training, the AI model achieved high accuracy on validation for detection of both AF and sinus rhythm (mean F1-score = 0.92; AUROC = 0.95). Performance was consistent on the test dataset (mean F1-score = 0.94; AUROC = 0.98) against the same gold standard; the interpreting cardiologists had access to the full study and external ECG data, while the AI model did not. CONCLUSIONS: AF detection by AI on echocardiography without ECG appears accurate when compared with an echocardiography cardiologist's assessment of the ECG rhythm strip as the gold standard. This has potential clinical implications for point-of-care ultrasound and stroke risk stratification.
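The two metrics reported above, F1-score and AUROC, can be computed directly from a classifier's labels and scores. A minimal pure-Python sketch of both definitions, using invented toy labels and scores (not the study's data):

```python
def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for one class."""
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def auroc(y_true, y_score):
    """AUROC via the rank (Mann-Whitney) formulation: the probability that
    a randomly chosen positive is scored above a randomly chosen negative."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = AF, 0 = sinus rhythm (illustrative values only)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.2, 0.1]
preds = [1 if s >= 0.5 else 0 for s in scores]
f1 = f1_score(labels, preds)    # 2/3 for this toy example
auc = auroc(labels, scores)     # 8/9 for this toy example
```

The "mean F1-score" in the abstract presumably averages per-class F1 over the AF and sinus-rhythm classes; the sketch shows the single-class computation.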

2.
Article in English | MEDLINE | ID: mdl-39126604

ABSTRACT

Left ventricular (LV) geometric patterns aid clinicians in the diagnosis and prognostication of various cardiomyopathies. The aim of this study was to assess the accuracy and reproducibility of LV dimension and wall thickness measurements using deep learning (DL) models. A total of 30,080 unique studies were included; 24,013 studies were used to train a convolutional neural network model to automatically assess, at end-diastole, LV internal diameter (LVID), interventricular septal wall thickness (IVS), posterior wall thickness (PWT), and LV mass. The model was trained to select end-diastolic frames with the largest LVID and to identify four landmarks marking the dimensions of LVID, IVS, and PWT, using manually labeled landmarks as reference. The model was validated with 3,014 echocardiographic cines, and its accuracy was evaluated with a test set of 3,053 echocardiographic cines. The model accurately measured LVID, IVS, PWT, and LV mass compared with study report values, with mean relative errors of 5.40%, 11.73%, 12.76%, and 13.93%, respectively. The R² of the model for LVID, IVS, PWT, and LV mass was 0.88, 0.63, 0.50, and 0.87, respectively. The novel DL model developed in this study was accurate for LV dimension assessment without the need to select end-diastolic frames manually. DL automated measurements of IVS and PWT were less accurate at greater wall thicknesses. Validation studies in larger and more diverse populations are ongoing.
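Mean relative error and R² (coefficient of determination), the two accuracy measures reported above, have simple standard definitions. A minimal sketch with invented toy LVID values (the abstract does not publish per-study measurements):

```python
def mean_relative_error(y_true, y_pred):
    """Mean of |prediction - reference| / |reference|, as a fraction."""
    return sum(abs(p - t) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Toy LVID values in cm (illustrative only, not study data)
reported = [4.2, 5.1, 4.8, 6.0]
predicted = [4.0, 5.3, 4.7, 5.8]
mre = mean_relative_error(reported, predicted)  # ~0.035, i.e. ~3.5%
r2 = r_squared(reported, predicted)
```

Note that a low mean relative error can coexist with a modest R² (as for IVS and PWT above) when the measured quantity has a narrow dynamic range relative to the error.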

3.
Echo Res Pract ; 11(1): 9, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38539236

ABSTRACT

BACKGROUND: Machine learning (ML) algorithms can accurately estimate left ventricular ejection fraction (LVEF) from echocardiography, but their performance on cardiac point-of-care ultrasound (POCUS) is not well understood. OBJECTIVES: We evaluated the performance of an ML model for estimating LVEF on cardiac POCUS compared with Level III echocardiographers' interpretation and formal echo-reported LVEF. METHODS: Clinicians at a tertiary care heart failure clinic prospectively scanned 138 participants using hand-carried devices. Video data were analyzed offline by an ML model for LVEF. We compared the ML model's performance with Level III echocardiographers' interpretation and echo-reported LVEF. RESULTS: The 138 participants scanned yielded 1257 videos, and the ML model generated LVEF predictions on 341 of them. We observed good intraclass correlation (ICC) between the ML model's predictions and the reference standards (ICC = 0.77-0.84). When comparing LVEF estimates for randomized single POCUS videos, the ICC between the ML model and Level III echocardiographers' estimates was 0.772, and it was 0.778 for videos where quantitative LVEF was feasible. When the Level III echocardiographer reviewed all POCUS videos for a participant, the ICC improved to 0.794, and to 0.843 when only accounting for studies that could be segmented. The ML model's LVEF estimates also correlated well with LVEF derived from formal echocardiogram reports (ICC = 0.798). CONCLUSION: Our results suggest that clinician-driven cardiac POCUS produces ML model LVEF estimates that correlate well with expert interpretation and echo-reported LVEF.
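The abstract does not state which ICC form was used; one common choice for agreement between a model and a human rater is ICC(2,1) (two-way random effects, absolute agreement, single rater). A pure-Python sketch of that form, with invented toy LVEF values:

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater,
    computed from an n-subjects x k-raters table via the ANOVA mean squares."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Toy LVEF estimates (%) per participant: [ML model, echocardiographer]
# (illustrative values only, not study data)
ratings = [[55, 57], [40, 38], [62, 60], [30, 33]]
icc = icc_2_1(ratings)
```

In practice a validated implementation (e.g. `pingouin.intraclass_corr`) would be preferred over hand-rolled ANOVA code, and the choice of ICC form should match the study design.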
