Results 1 - 6 of 6
1.
Echo Res Pract ; 11(1): 9, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38539236

ABSTRACT

BACKGROUND: Machine learning (ML) algorithms can accurately estimate left ventricular ejection fraction (LVEF) from echocardiography, but their performance on cardiac point-of-care ultrasound (POCUS) is not well understood. OBJECTIVES: We evaluated the performance of an ML model for LVEF estimation on cardiac POCUS against Level III echocardiographers' interpretation and formal echo-reported LVEF. METHODS: Clinicians at a tertiary care heart failure clinic prospectively scanned 138 participants using hand-carried devices. Video data were analyzed offline by an ML model for LVEF. RESULTS: The 138 participants scanned yielded 1,257 videos, on 341 of which the ML model generated LVEF predictions. We observed good intraclass correlation (ICC) between the ML model's predictions and the reference standards (ICC = 0.77-0.84). When comparing LVEF estimates for randomized single POCUS videos, the ICC between the ML model and the Level III echocardiographers' estimates was 0.772, and 0.778 for videos where quantitative LVEF was feasible. When the Level III echocardiographer reviewed all POCUS videos for a participant, the ICC improved to 0.794, and to 0.843 when only accounting for studies that could be segmented. The ML model's LVEF estimates also correlated well with LVEF derived from formal echocardiogram reports (ICC = 0.798). CONCLUSION: Our results suggest that clinician-driven cardiac POCUS produces ML-model LVEF estimates that correlate well with expert interpretation and echo-reported LVEF.
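The agreement metric reported throughout this abstract, the intraclass correlation coefficient, can be sketched in a few lines. Below is a minimal NumPy implementation of ICC(2,1) (two-way random effects, absolute agreement, single measurement); the function name and the ratings layout are illustrative, not taken from the paper.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: array of shape (n_targets, k_raters), e.g. one row per
    participant, one column per LVEF source (ML model, expert, report)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-target means
    col_means = ratings.mean(axis=0)  # per-rater means
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # targets
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Two columns of perfectly agreeing LVEF values give an ICC of exactly 1.0, and measurement noise pulls the value down toward 0.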

2.
Int J Cardiovasc Imaging ; 39(7): 1313-1321, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37150757

ABSTRACT

We sought to determine the cardiac ultrasound view of greatest quality, using a machine learning (ML) approach, on a cohort of transthoracic echocardiograms (TTE) with abnormal left ventricular (LV) systolic function. A random sample of TTEs with reported LV dysfunction from 09/25/2017 to 01/15/2019 was downloaded from the regional database. Component video files were analyzed using ML models that jointly classified view and image quality. The model consisted of convolutional layers for extracting spatial features and Long Short-Term Memory units to temporally aggregate the frame-wise spatial embeddings. We report the view-specific quality scores for each TTE; pair-wise comparisons among views were performed with the Wilcoxon signed-rank test. Of 1,145 TTEs analyzed by the ML model, 74.5% were from males, and the mean LV ejection fraction was 43.1 ± 9.9%. The maximum quality score was best for the apical 4-chamber (AP4) view (70.6 ± 13.9%, p < 0.001 compared with all other views) and worst for the apical 2-chamber (AP2) view (60.4 ± 15.4%, p < 0.001 for all views except the parasternal short-axis view at the mitral/papillary muscle level, PSAX M/PM). In TTEs scanned by professional sonographers, the view with the greatest ML-derived quality was the AP4 view.
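The pair-wise view comparisons above use the Wilcoxon signed-rank test. A self-contained normal-approximation version (adequate at these sample sizes; statistical packages also offer exact variants) might look like the following. The data layout, one paired quality score per study for two views, is illustrative.

```python
import numpy as np
from math import erf, sqrt

def _average_ranks(a):
    """Rank values from 1..n, giving tied values their average rank."""
    order = np.argsort(a)
    ranks = np.empty(len(a))
    sorted_a = a[order]
    i = 0
    while i < len(a):
        j = i
        while j + 1 < len(a) and sorted_a[j + 1] == sorted_a[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1  # average of positions i..j
        i = j + 1
    return ranks

def wilcoxon_signed_rank(x, y):
    """Two-sided paired Wilcoxon signed-rank test, normal approximation.
    Zero differences are dropped; returns (W_plus, p_value)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]
    n = len(d)
    if n == 0:
        return 0.0, 1.0
    ranks = _average_ranks(np.abs(d))
    w_plus = ranks[d > 0].sum()          # sum of ranks of positive differences
    mean = n * (n + 1) / 4.0
    sd = sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mean) / sd
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return w_plus, p
```

Calling `wilcoxon_signed_rank(ap4_scores, ap2_scores)` on per-study paired quality scores would yield the kind of p-value quoted in the abstract.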


Subject(s)
Echocardiography, Left Ventricular Dysfunction, Male, Humans, Predictive Value of Tests, Echocardiography/methods, Left Ventricular Dysfunction/diagnostic imaging, Left Ventricular Function/physiology, Stroke Volume, Machine Learning
3.
IEEE Trans Med Imaging ; 41(4): 793-804, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34705639

ABSTRACT

This paper presents U-LanD, a framework for automatic detection of landmarks on key frames of videos by leveraging the uncertainty of landmark prediction. We tackle a specifically challenging problem where training labels are noisy and highly sparse. U-LanD builds upon a pivotal observation: a deep Bayesian landmark detector trained solely on key video frames has significantly lower predictive uncertainty on those frames than on other frames. We use this observation as an unsupervised signal to automatically recognize the key frames on which we detect landmarks. As a test bed for our framework, we use ultrasound imaging videos of the heart, where sparse and noisy clinical labels are available for only a single frame in each video. Using data from 4,493 patients, we demonstrate that U-LanD outperforms the state-of-the-art non-Bayesian counterpart by an absolute margin of 42% in R² score, with almost no overhead imposed on the model size.
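The core idea, keeping only frames on which a Bayesian detector is confident, can be sketched with stochastic forward passes (e.g. MC-dropout runs). This is a minimal sketch under that assumption; the median threshold heuristic is illustrative, not the paper's.

```python
import numpy as np

def select_key_frames(mc_samples, threshold=None):
    """Pick key frames by low predictive uncertainty.

    mc_samples: array of shape (S, F, 2) holding S stochastic forward
    passes (e.g. MC-dropout runs) of one (x, y) landmark for each of
    F frames. Returns (indices of low-uncertainty frames, per-frame
    total variance)."""
    var = mc_samples.var(axis=0).sum(axis=-1)  # total (x, y) variance per frame
    if threshold is None:
        threshold = np.median(var)             # illustrative default cutoff
    return np.flatnonzero(var <= threshold), var
```

Frames whose landmark predictions barely move across stochastic passes are selected; frames with scattered predictions are rejected.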


Subject(s)
Uncertainty, Bayes Theorem, Humans, Ultrasonography, Video Recording/methods
4.
Int J Comput Assist Radiol Surg ; 15(5): 877-886, 2020 May.
Article in English | MEDLINE | ID: mdl-32314226

ABSTRACT

PURPOSE: The emerging market for cardiac handheld ultrasound (US) is on the rise. Despite the advantages in ease of access and lower cost, a gap in image quality remains between echocardiography (echo) data captured by point-of-care ultrasound (POCUS) and conventional cart-based US, which limits the further adoption of POCUS. In this work, we present a machine learning solution based on recent advances in adversarial training to investigate the feasibility of translating POCUS echo images to the quality level of high-end cart-based US systems. METHODS: We propose a constrained cycle-consistent generative adversarial architecture for unpaired translation of cardiac POCUS to cart-based US data. We impose a structured shape-wise regularization via a critic segmentation network to preserve the underlying shape of the heart during quality translation. The proposed deep transfer model is constrained to the anatomy of the left ventricle (LV) in apical two-chamber (AP2) echo views. RESULTS: A total of 1,089 echo studies from 841 patients are used in this study. The AP2 frames were captured by POCUS (Philips Lumify and Clarius) and cart-based (Philips iE33 and Vivid E9) US machines. The quality-translation dataset comprises 441 echo studies from 395 patients; data from both POCUS and cart-based systems of the same patient were available in 122 cases. The deep quality-transfer model is integrated into a pipeline for an automated cardiac evaluation task, namely segmentation of the LV in the AP2 view. By transferring the low-quality POCUS data to the cart-based US domain, significant average improvements of 30% in the LV segmentation Dice score and 34 mm in the Hausdorff distance are obtained. CONCLUSION: This paper demonstrates the feasibility of a machine learning solution for transforming the image quality of POCUS data to that of high-end cart-based systems. The experiments show that, by leveraging quality translation through the proposed constrained adversarial training, the accuracy of automatic segmentation with POCUS data can be improved.
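The training objective described in METHODS combines cycle consistency with a shape-wise segmentation critic. The two non-adversarial terms can be sketched as plain array functions; the names and weighting are illustrative, and the full model would also include the standard GAN losses.

```python
import numpy as np

def cycle_consistency_loss(x, x_cycled):
    """L1 penalty: translating POCUS -> cart-based -> POCUS should recover x."""
    return float(np.mean(np.abs(x - x_cycled)))

def shape_critic_loss(pred_mask, ref_mask, eps=1e-6):
    """Soft-Dice penalty from a critic segmentation network, discouraging
    translations that deform the LV shape (0 = identical masks)."""
    inter = np.sum(pred_mask * ref_mask)
    dice = (2.0 * inter + eps) / (np.sum(pred_mask) + np.sum(ref_mask) + eps)
    return float(1.0 - dice)
```

A weighted sum of these terms with the adversarial losses would form the generator objective; the Dice term is what constrains the translation to preserve LV anatomy.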


Subject(s)
Echocardiography/methods, Heart/diagnostic imaging, Point-of-Care Systems, Humans, Machine Learning
5.
Int J Comput Assist Radiol Surg ; 14(6): 1027-1037, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30941679

ABSTRACT

PURPOSE: Left ventricular ejection fraction (LVEF) is one of the key metrics for assessing heart function, and cardiac ultrasound (echo) is a standard imaging modality for EF measurement. There is emerging interest in exploiting point-of-care ultrasound (POCUS) due to its low cost and ease of access. In this work, we present a computationally efficient mobile application for accurate LVEF estimation. METHODS: Our mobile application for LVEF estimation runs in real time on Android devices with either a wired or wireless connection to a cardiac POCUS device. We propose a pipeline for biplane ejection fraction estimation using apical two-chamber (AP2) and apical four-chamber (AP4) echo views. A computationally efficient multi-task deep fully convolutional network performs simultaneous LV segmentation and landmark detection in these views and is integrated into the LVEF estimation pipeline. An adversarial critic model is used during training to impose a shape prior on the LV segmentation output. RESULTS: The system is evaluated on a dataset of 427 patients, each with a pair of captured AP2 and AP4 echo studies, for a total of more than 40,000 echo frames. The mobile system reaches a high average Dice score of 92% for LV segmentation, an average Euclidean distance error of 2.85 pixels for detection of the anatomical landmarks used in LVEF calculation, and a median absolute error of 6.2% for LVEF estimation compared with the expert cardiologist's annotations and measurements. CONCLUSION: The proposed system runs in real time on mobile devices. The experiments show the effectiveness of the proposed system for automatic LVEF estimation, demonstrating adequate correlation with the cardiologist's examination.
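Biplane LVEF estimation from AP2 and AP4 views conventionally uses the modified Simpson (biplane method of disks) calculation, which the segmented contours would feed. A minimal sketch of that calculation follows; the disk diameters are assumed to come from the two views' segmentations, and all names are illustrative.

```python
import numpy as np

def biplane_disk_volume(diam_ap2, diam_ap4, long_axis_length):
    """Modified Simpson's rule: slice the LV along its long axis into N
    elliptical disks whose two diameters come from the AP2 and AP4 views.

    Volume = (pi / 4) * (L / N) * sum_i d2_i * d4_i
    """
    d2 = np.asarray(diam_ap2, dtype=float)
    d4 = np.asarray(diam_ap4, dtype=float)
    n = len(d2)
    return np.pi / 4.0 * (long_axis_length / n) * float(np.sum(d2 * d4))

def lvef_percent(edv, esv):
    """Ejection fraction from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv
```

A sanity check: with constant, equal diameters the formula reduces exactly to the volume of a cylinder, pi/4 * d^2 * L.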


Subject(s)
Echocardiography/methods, Point-of-Care Systems, Stroke Volume/physiology, Left Ventricular Function/physiology, Deep Learning, Humans, Software
6.
Int J Comput Assist Radiol Surg ; 13(8): 1201-1209, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29589258

ABSTRACT

PURPOSE: We have previously proposed temporal enhanced ultrasound (TeUS) as a new paradigm for tissue characterization. TeUS is based on analyzing a sequence of ultrasound data with deep learning and has been demonstrated to be successful for detection of cancer in ultrasound-guided prostate biopsy. Our aim is to enable dissemination of this technology to the community for large-scale clinical validation. METHODS: In this paper, we present a unified software framework for near-real-time analysis of an ultrasound data stream using a deep learning solution. The system integrates ultrasound imaging hardware, visualization, and a deep learning back end to build an accessible, flexible, and robust platform. A client-server approach is used to run computationally expensive algorithms in parallel. We demonstrate the efficacy of the framework using two applications as case studies. First, we show that prostate cancer detection using near-real-time analysis of RF and B-mode TeUS data and deep learning is feasible. Second, we present real-time segmentation of ultrasound prostate data using an integrated deep learning solution. RESULTS: The system is evaluated for cancer detection accuracy on ultrasound data obtained from a large clinical study with 255 biopsy cores from 157 subjects, and is further assessed with an independent dataset of 21 biopsy targets from six subjects. In the first study, we achieve an area under the curve (AUC), sensitivity, specificity, and accuracy of 0.94, 0.77, 0.94, and 0.92, respectively, for the detection of prostate cancer. In the second study, we achieve an AUC of 0.85. CONCLUSION: Our results suggest that TeUS-guided biopsy can be potentially effective for the detection of prostate cancer.
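The headline metric in these results, area under the ROC curve, equals the Mann-Whitney probability that a randomly chosen cancerous core scores higher than a randomly chosen benign one. A minimal sketch of that rank interpretation (quadratic in the number of cores, which is fine at this scale; variable names are illustrative):

```python
def auc_from_scores(labels, scores):
    """AUC as the Mann-Whitney probability that a random positive
    outranks a random negative; ties count as half a win.

    labels: 1 for cancerous, 0 for benign; scores: classifier outputs."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.94, as reported for the first study, means a cancerous core outranks a benign one 94% of the time under this interpretation.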


Subject(s)
Image-Guided Biopsy/methods, Prostatic Neoplasms/diagnosis, Interventional Ultrasonography/methods, Algorithms, Core Needle Biopsy, Computer Systems, Humans, Male, Sensitivity and Specificity