1.
Crit Care Med. 2023 Feb 1;51(2):301-309.
Article in English | MEDLINE | ID: mdl-36661454

ABSTRACT

OBJECTIVES: To evaluate the accuracy of a bedside, real-time deployment of a deep learning (DL) model capable of distinguishing between normal (A line pattern) and abnormal (B line pattern) lung parenchyma on lung ultrasound (LUS) in critically ill patients. DESIGN: Prospective, observational study evaluating the performance of a previously trained LUS DL model. Enrolled patients received an LUS examination with simultaneous DL model predictions using a portable device. Clip-level model predictions were analyzed and compared with blinded expert review for A versus B line pattern. Four prediction thresholding approaches were applied to maximize model sensitivity and specificity at the bedside. SETTING: Academic ICU. PATIENTS: One hundred critically ill patients admitted to the ICU, receiving oxygen therapy, and eligible for respiratory imaging were included. Patients who were unstable or could not undergo an LUS examination were excluded. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: A total of 100 unique ICU patients (400 clips) were enrolled from two tertiary-care sites. Fifty-six patients were mechanically ventilated. When compared with gold-standard expert annotation, real-time inference yielded an accuracy of 95%, sensitivity of 93%, and specificity of 96% for identification of the B line pattern. Varying prediction thresholds showed that real-time modification of sensitivity and specificity according to clinical priorities is possible. CONCLUSIONS: A previously validated DL classification model performs equally well in real time at the bedside when deployed on a portable device. As the first study to test the feasibility and performance of a DL classification model for LUS in a dedicated ICU environment, our results justify further inquiry into the impact of integrating real-time automation of medical imaging into the care of the critically ill.
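
The adjustable thresholding described above can be made concrete with a short sketch. This is a minimal illustration, not the authors' implementation: the aggregation rule (fraction of positive frames), function names, and threshold values are assumptions.

```python
import numpy as np

def classify_clip(frame_probs: np.ndarray,
                  frame_threshold: float = 0.5,
                  clip_threshold: float = 0.5) -> str:
    """Aggregate per-frame B line probabilities into a clip-level call.

    frame_probs: model outputs in [0, 1] for each frame of one LUS clip.
    frame_threshold: probability above which a frame counts as B line.
    clip_threshold: fraction of positive frames needed to call the clip B line.
    Lowering either threshold raises sensitivity at the cost of specificity;
    raising them does the opposite, per the clinical priority at hand.
    """
    positive_fraction = np.mean(frame_probs > frame_threshold)
    return "B line" if positive_fraction >= clip_threshold else "A line"

# Example: favour sensitivity for a screening use case (values made up).
probs = np.array([0.2, 0.7, 0.8, 0.6, 0.4])
print(classify_clip(probs, frame_threshold=0.4, clip_threshold=0.3))  # "B line"
```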


Subjects
Critical Illness, Deep Learning, Humans, Prospective Studies, Critical Illness/therapy, Lung/diagnostic imaging, Ultrasonography/methods, Intensive Care Units
2.
Diagnostics (Basel). 2024 May 22;14(11).
Article in English | MEDLINE | ID: mdl-38893608

ABSTRACT

Deep learning (DL) models for medical image classification frequently struggle to generalize to data from outside institutions. Clinical data are also rarely collected alongside imaging, limiting comprehensive assessment of model performance across subgroups. Following the development of a single-center model to identify the lung sliding artifact on lung ultrasound (LUS), we pursued a validation strategy using external LUS data. As annotated LUS data are relatively scarce compared to other medical imaging data, we adopted a novel technique to optimize the use of limited external data and improve model generalizability. Externally acquired LUS data from three tertiary care centers, totaling 641 clips from 238 patients, were used to assess the baseline generalizability of our lung sliding model. We then employed our novel Threshold-Aware Accumulative Fine-Tuning (TAAFT) method to fine-tune the baseline model and determine the minimum amount of data required to achieve predefined performance goals. A subgroup analysis was also performed, and Grad-CAM++ explanations were examined. The final model was fine-tuned on one-third of the external dataset to achieve 0.917 sensitivity, 0.817 specificity, and 0.920 area under the receiver operating characteristic curve (AUC) on the external validation dataset, exceeding our predefined performance goals. Subgroup analyses identified the LUS characteristics that most challenged the model's performance. Grad-CAM++ saliency maps highlighted clinically relevant regions on M-mode images. We report a multicenter study that exploits limited available external data to improve the generalizability and performance of our lung sliding model while identifying poorly performing subgroups to inform future iterative improvements. This approach may contribute to efficiencies for DL researchers working with smaller quantities of external validation data.
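
An accumulative fine-tuning loop in the spirit of TAAFT can be sketched as follows. This is a hedged illustration of the general idea, not the published algorithm: the step size, the goal values, and the `fine_tune`/`evaluate` callables are assumptions supplied by the caller.

```python
from typing import Callable, Sequence

def accumulative_fine_tune(model,
                           external_data: Sequence,
                           holdout,
                           fine_tune: Callable,
                           evaluate: Callable,
                           goals: dict,
                           step: float = 1 / 6):
    """Fine-tune on growing slices of external data; stop at the smallest
    slice whose holdout metrics meet every predefined goal.

    fine_tune(model, subset) -> model   (one fine-tuning pass on subset)
    evaluate(model, holdout) -> dict with 'sensitivity', 'specificity', 'auc'
    goals: e.g. {'sensitivity': 0.90, 'specificity': 0.80, 'auc': 0.90}
           (assumed values for illustration)
    """
    n = len(external_data)
    fraction = step
    while True:
        subset = external_data[: int(round(min(fraction, 1.0) * n))]
        model = fine_tune(model, subset)
        metrics = evaluate(model, holdout)
        # Return as soon as every goal is met, or once all data are used.
        if all(metrics[k] >= v for k, v in goals.items()) or fraction >= 1.0:
            return model, min(fraction, 1.0), metrics
        fraction += step
```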

3.
Comput Biol Med. 2022 Sep;148:105953.
Article in English | MEDLINE | ID: mdl-35985186

ABSTRACT

Pneumothorax is a potentially life-threatening condition that can be rapidly and accurately assessed via the lung sliding artefact generated using lung ultrasound (LUS). Access to LUS is limited by user dependence and a shortage of training. Image classification using deep learning methods can automate LUS interpretation but has not been thoroughly studied for lung sliding. Using a labelled LUS dataset from two academic hospitals, clinical B-mode (also known as brightness or two-dimensional mode) videos featuring both the presence and absence of lung sliding were transformed into motion-mode (M-mode) images. These images were subsequently used to train a deep neural network binary classifier that was evaluated using a holdout set comprising 15% of the total data. Grad-CAM explanations were examined. Our binary classifier, using the EfficientNetB0 architecture, was trained on 2535 LUS clips from 614 patients. When evaluated on a test set of data uninvolved in training (540 clips from 124 patients), the model performed with a sensitivity of 93.5%, a specificity of 87.3%, and an area under the receiver operating characteristic curve (AUC) of 0.973. Grad-CAM explanations confirmed the model's focus on relevant regions of M-mode images. Our solution accurately distinguishes between the presence and absence of the lung sliding artefact on LUS.
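
The B-mode-to-M-mode transformation lends itself to a short sketch: a fixed scan line is sampled from every frame and the columns are stacked over time. The choice of a single central column, the array shapes, and the function name are illustrative assumptions, not the paper's exact preprocessing.

```python
from typing import Optional
import numpy as np

def bmode_to_mmode(clip: np.ndarray, column: Optional[int] = None) -> np.ndarray:
    """Build an M-mode image from a grayscale B-mode clip.

    clip: array of shape (frames, height, width).
    column: index of the scan line to sample; defaults to the central
            column (an assumption for illustration).
    Returns an array of shape (height, frames): each frame contributes one
    column, so pleural motion over time becomes visible as texture (lung
    sliding) or horizontal stratification (its absence).
    """
    n_frames, height, width = clip.shape
    if column is None:
        column = width // 2
    # Sample the chosen pixel column from every frame, then stack over time.
    return clip[:, :, column].T

# Example with synthetic data: a 60-frame, 224x224 clip.
video = np.random.rand(60, 224, 224)
print(bmode_to_mmode(video).shape)  # (224, 60)
```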


Subjects
Deep Learning, Pneumothorax, Artifacts, Humans, Lung, Ultrasonography
4.
Diagnostics (Basel). 2022 Sep 28;12(10).
Article in English | MEDLINE | ID: mdl-36292042

ABSTRACT

BACKGROUND: Annotating large medical imaging datasets is an arduous and expensive task, especially when the datasets in question are not organized according to deep learning goals. Here, we propose a method that exploits the hierarchical organization of annotation tasks to optimize efficiency. METHODS: We trained a machine learning model to accurately distinguish between two classes of lung ultrasound (LUS) views using 2908 clips from a larger dataset. Partitioning the remaining dataset by view would reduce downstream labelling effort by enabling annotators to focus on annotating pathological features specific to each view. RESULTS: In a sample view-specific annotation task, we found that automatically partitioning a 780-clip dataset by view saved 42 min of manual annotation time and resulted in 55 ± 6 additional relevant labels per hour. CONCLUSIONS: Automatic partitioning of an LUS dataset by view significantly increases annotator efficiency, resulting in higher throughput on the annotation task at hand. The strategy described in this work can be applied to other hierarchical annotation schemes.
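
A view-based partitioning step of this kind might look like the following sketch. The class names, the `view_model.predict` interface, `load_clip`, and the directory layout are assumptions for illustration, not the paper's pipeline.

```python
import shutil
from pathlib import Path

def partition_by_view(clip_paths, view_model, load_clip, out_dir: Path,
                      classes=("view_a", "view_b")):
    """Copy each clip into a per-view folder chosen by a trained classifier.

    load_clip(path) -> array the model understands (decoding is assumed).
    view_model.predict(clip) -> 0 or 1, indexing into `classes`.
    Annotators can then label only the folder relevant to their task,
    skipping clips from the irrelevant view entirely.
    """
    for name in classes:
        (out_dir / name).mkdir(parents=True, exist_ok=True)
    for path in map(Path, clip_paths):
        clip = load_clip(path)
        view = classes[int(view_model.predict(clip))]
        shutil.copy(path, out_dir / view / path.name)
```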

5.
Diagnostics (Basel). 2021 Nov 4;11(11).
Article in English | MEDLINE | ID: mdl-34829396

ABSTRACT

Lung ultrasound (LUS) is an accurate thoracic imaging technique distinguished by its handheld size, low cost, and lack of radiation. User dependence and poor access to training have limited the impact and dissemination of LUS outside of acute care hospital environments. Automated interpretation of LUS using deep learning can overcome these barriers by increasing accuracy while allowing point-of-care use by non-experts. In this multicenter study, we sought to automate the clinically vital distinction between A line (normal parenchyma) and B line (abnormal parenchyma) patterns on LUS by training a customized neural network using 272,891 labelled LUS images. After external validation on 23,393 frames, pragmatic clinical application at the clip level was performed on 1162 videos. The trained classifier demonstrated an area under the receiver operating characteristic curve (AUC) of 0.96 (±0.02) through 10-fold cross-validation on local frames and an AUC of 0.93 on the external validation dataset. Clip-level inference yielded sensitivities and specificities of 90% and 92% (local) and 83% and 82% (external), respectively, for detecting the B line pattern. This study demonstrates accurate deep-learning-enabled discrimination between normal and abnormal lung parenchyma on ultrasound frames while rendering diagnostically important sensitivity and specificity at the video clip level.
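
Clip-level inference on top of frame-level outputs, and the sensitivity/specificity figures reported above, can be sketched as follows. The mean-probability aggregation rule, function names, and example values are assumptions, not the paper's exact procedure.

```python
import numpy as np

def clip_prediction(frame_probs: np.ndarray, threshold: float = 0.5) -> int:
    """Aggregate per-frame B line probabilities for one clip.

    Returns 1 (B line pattern) or 0 (A line pattern). Averaging frame
    probabilities is an assumed aggregation rule for illustration.
    """
    return int(frame_probs.mean() >= threshold)

def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity and specificity against expert clip labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Example: four clips, each with per-frame model outputs (values made up).
clips = [np.array([0.9, 0.8, 0.7]), np.array([0.1, 0.2, 0.3]),
         np.array([0.6, 0.4, 0.7]), np.array([0.2, 0.6, 0.3])]
preds = [clip_prediction(c) for c in clips]    # [1, 0, 1, 0]
labels = [1, 0, 1, 1]                          # hypothetical expert annotations
print(sensitivity_specificity(labels, preds))  # (0.666..., 1.0)
```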
