1.
J Ultrasound Med ; 41(4): 855-863, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34133034

ABSTRACT

OBJECTIVES: To test how introducing novel ultrasound equipment into a clinical setting affects deep learning (DL) algorithm performance. METHODS: Researchers challenged a previously validated DL algorithm for assessing inferior vena cava (IVC) collapse (trained on a common point-of-care ultrasound [POCUS] machine) with prospectively obtained IVC videos from a similar patient population acquired on novel ultrasound equipment. Twenty-one new videos were obtained for each novel ultrasound machine. The videos were analyzed for complete collapse by the algorithm and by 2 blinded POCUS experts. Cohen's kappa was calculated for agreement between the 2 POCUS experts and the DL algorithm. Previous testing had shown substantial agreement between the algorithm and experts, with Cohen's kappa of 0.78 (95% CI 0.49-1.0) and 0.66 (95% CI 0.31-1.0) on new patient data using the same ultrasound equipment. RESULTS: Challenged with higher image quality (IQ) POCUS cart ultrasound videos, algorithm performance declined, with kappa values of 0.31 (95% CI 0.19-0.81) and 0.39 (95% CI 0.11-0.89), showing fair agreement. Algorithm performance plummeted on a lower-IQ smartphone device, with kappa values of -0.09 (95% CI -0.95 to 0.76) and 0.09 (95% CI -0.65 to 0.82), respectively, showing less agreement than would be expected by chance. The 2 POCUS experts had near-perfect agreement regarding IVC collapse, with a kappa value of 0.88 (95% CI 0.64-1.0). CONCLUSIONS: Performance of this previously validated DL algorithm worsened when faced with ultrasound studies from 2 novel ultrasound machines. Performance was much worse on images from a lower-IQ handheld device than from a superior cart-based device.
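
The agreement statistic used throughout this study is Cohen's kappa between one rater's reads and the algorithm's outputs. Below is a minimal sketch of that calculation using scikit-learn, assuming binary complete-collapse labels; the label arrays are hypothetical placeholders, not the study's data.

from sklearn.metrics import cohen_kappa_score

# 1 = complete IVC collapse, 0 = no complete collapse; one entry per video.
# These labels are illustrative placeholders, not the study's data.
expert_reads      = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # blinded POCUS expert
algorithm_outputs = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # DL algorithm

kappa = cohen_kappa_score(expert_reads, algorithm_outputs)
print(f"Cohen's kappa (expert vs. algorithm): {kappa:.2f}")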


Subjects
Deep Learning , Algorithms , Humans , Point-of-Care Systems , Ultrasonography/methods , Vena Cava, Inferior/diagnostic imaging
2.
J Ultrasound Med ; 41(8): 2059-2069, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34820867

ABSTRACT

OBJECTIVES: A paucity of point-of-care ultrasound (POCUS) databases limits machine learning (ML). We assessed the feasibility of training ML algorithms to visually estimate left ventricular ejection fraction (EF) from a subxiphoid (SX) window using only apical 4-chamber (A4C) images. METHODS: Researchers used a long short-term memory (LSTM) algorithm for image analysis. Using the Stanford EchoNet-Dynamic database of 10,036 A4C videos with exact calculated EF, researchers tested 3 ML training permutations: first, training on unaltered Stanford A4C videos; then on unaltered and 90° clockwise (CW) rotated videos; and finally on unaltered, 90° CW rotated, and horizontally flipped videos. As a real-world test, we obtained 615 SX videos from Harbor-UCLA (HUCLA) with EF calculations in 5% ranges. Researchers performed 1000 randomizations of EF point estimation within the HUCLA EF ranges to compensate for the mismatch between exact ML estimates and the HUCLA ranges, obtaining mean absolute error (MAE) values for comparison, and performed Bland-Altman analyses. RESULTS: The ML algorithm's mean MAE was 23.0 (range 22.8-23.3) with unaltered A4C video training, 16.7 (range 16.5-16.9) with unaltered and 90° CW rotated video training, and 16.6 (range 16.3-16.8) with unaltered, 90° CW rotated, and horizontally flipped video training. Bland-Altman analysis showed the weakest agreement at 40-45% EF. CONCLUSIONS: Researchers successfully adapted data from an unrelated ultrasound window to train a POCUS ML algorithm with fair MAE, using data manipulation to simulate a different ultrasound examination. This may be important for future POCUS algorithm design to help overcome the paucity of POCUS databases.
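
Two method details lend themselves to a short illustration: the rotation/flip augmentation used to make A4C training data resemble a different probe orientation, and the 1000-fold randomization of point estimates within the 5%-wide HUCLA EF ranges used to compute a comparable MAE. The sketch below is a minimal, hypothetical rendering in NumPy, assuming frames are arrays and the EF ranges are given as lower/upper bounds; it is not the study's code.

import numpy as np

def augment_frame(frame):
    """Return the unaltered frame plus the 90-degree clockwise rotation and the
    horizontal flip used as the three training permutations."""
    rotated = np.rot90(frame, k=-1)   # k=-1 rotates clockwise
    flipped = np.fliplr(frame)        # horizontal flip
    return [frame, rotated, flipped]

def randomized_mae(predicted_ef, range_low, range_high, n_draws=1000, seed=0):
    """Compare exact model EF predictions against EFs reported only as 5%-wide
    ranges: draw a random point estimate inside each range, compute the mean
    absolute error, repeat n_draws times, and summarize."""
    rng = np.random.default_rng(seed)
    predicted_ef = np.asarray(predicted_ef, dtype=float)
    maes = []
    for _ in range(n_draws):
        sampled_ef = rng.uniform(range_low, range_high)   # one draw per video
        maes.append(float(np.mean(np.abs(predicted_ef - sampled_ef))))
    return np.mean(maes), (min(maes), max(maes))

# Hypothetical usage: EF predictions in percent vs. reported 5% ranges.
preds = [52.0, 61.0, 34.0]
lo, hi = np.array([50.0, 60.0, 30.0]), np.array([55.0, 65.0, 35.0])
mean_mae, mae_range = randomized_mae(preds, lo, hi)
print(f"mean MAE {mean_mae:.1f}, range {mae_range[0]:.1f}-{mae_range[1]:.1f}")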


Subjects
Artificial Intelligence , Ventricular Function, Left , Algorithms , Echocardiography/methods , Humans , Machine Learning , Stroke Volume
4.
J Am Coll Emerg Physicians Open ; 1(5): 857-864, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33145532

ABSTRACT

OBJECTIVES: We sought to create a deep learning algorithm to determine the degree of inferior vena cava (IVC) collapsibility in critically ill patients, to assist novice point-of-care ultrasound (POCUS) providers. METHODS: We used a publicly available long short-term memory (LSTM) deep learning architecture, which can track temporal changes and relationships in real-time video, to create an algorithm for ultrasound video analysis. The algorithm was trained on public-domain IVC ultrasound videos to improve its ability to recognize changes in varied ultrasound video. A total of 220 IVC videos were used; 10% of the data was randomly set aside for cross validation during training. Data were augmented through video rotation and manipulation to multiply the effective training data quantity. After training, the algorithm was tested on 50 new IVC ultrasound videos obtained from public-domain sources that were not part of the data set used in training or cross validation. Fleiss' κ was calculated to compare the level of agreement among the 3 POCUS experts and between the deep learning algorithm and the POCUS experts. RESULTS: There was substantial agreement among the 3 POCUS experts, with κ = 0.65 (95% CI = 0.49-0.81). Agreement between the experts and the algorithm was moderate, with κ = 0.45 (95% CI = 0.33-0.56). CONCLUSIONS: Our algorithm showed good agreement with POCUS experts in visually estimating the degree of IVC collapsibility, which has been shown in previously published studies to differentiate fluid-responsive from fluid-unresponsive septic shock patients. Such an algorithm could be adopted to run in real time on any ultrasound machine with a video output, easing the burden on novice POCUS users by limiting their task to obtaining and maintaining a sagittal proximal IVC view and allowing the artificial intelligence to make real-time determinations.
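
Expert-versus-expert agreement here is measured with Fleiss' kappa rather than Cohen's, since three raters are involved. Below is a minimal sketch using statsmodels; the collapsibility categories and ratings are hypothetical placeholders, not the study's data.

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = IVC videos, columns = the 3 expert raters; values are hypothetical
# collapsibility categories (e.g., 0 = <50% collapse, 1 = >=50%, 2 = complete).
ratings = np.array([
    [2, 2, 2],
    [0, 0, 1],
    [1, 1, 1],
    [2, 1, 2],
    [0, 0, 0],
    [1, 2, 1],
])

table, _ = aggregate_raters(ratings)   # raters-per-category counts, one row per video
print(f"Fleiss' kappa among experts: {fleiss_kappa(table, method='fleiss'):.2f}")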
