1.
Crit Care Med ; 51(2): 301-309, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36661454

ABSTRACT

OBJECTIVES: To evaluate the accuracy of a bedside, real-time deployment of a deep learning (DL) model capable of distinguishing between normal (A line pattern) and abnormal (B line pattern) lung parenchyma on lung ultrasound (LUS) in critically ill patients.

DESIGN: Prospective, observational study evaluating the performance of a previously trained LUS DL model. Enrolled patients received an LUS examination with simultaneous DL model predictions using a portable device. Clip-level model predictions were analyzed and compared with blinded expert review for A versus B line pattern. Four prediction thresholding approaches were applied to maximize model sensitivity and specificity at the bedside.

SETTING: Academic ICU.

PATIENTS: One hundred critically ill patients admitted to the ICU, receiving oxygen therapy, and eligible for respiratory imaging were included. Patients who were unstable or could not undergo an LUS examination were excluded.

INTERVENTIONS: None.

MEASUREMENTS AND MAIN RESULTS: A total of 100 unique ICU patients (400 clips) were enrolled from two tertiary-care sites. Fifty-six patients were mechanically ventilated. Compared with gold-standard expert annotation, real-time inference yielded an accuracy of 95%, sensitivity of 93%, and specificity of 96% for identification of the B line pattern. Varying prediction thresholds showed that real-time modification of sensitivity and specificity according to clinical priorities is possible.

CONCLUSIONS: A previously validated DL classification model performs equally well in real time at the bedside when deployed on a portable device. As the first study to test the feasibility and performance of a DL classification model for LUS in a dedicated ICU environment, our results justify further inquiry into the impact of employing real-time automation of medical imaging in the care of the critically ill.
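The thresholding idea in the abstract (trading sensitivity against specificity by moving the decision cut-off on clip-level probabilities) can be sketched as follows. This is a minimal illustration with hypothetical model outputs and labels, not the study's actual thresholding procedures, which are not detailed in the abstract.

```python
import numpy as np

def sens_spec_at_threshold(probs, labels, threshold):
    """Binarize clip-level B-line probabilities at a threshold and
    compute sensitivity and specificity against expert labels."""
    preds = probs >= threshold
    tp = np.sum(preds & (labels == 1))   # B line correctly flagged
    fn = np.sum(~preds & (labels == 1))  # B line missed
    tn = np.sum(~preds & (labels == 0))  # A line correctly cleared
    fp = np.sum(preds & (labels == 0))   # A line falsely flagged
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical clip-level model outputs P(B line) and expert labels.
probs = np.array([0.95, 0.10, 0.80, 0.30, 0.60, 0.05])
labels = np.array([1, 0, 1, 0, 1, 0])

for t in (0.3, 0.5, 0.7):
    sens, spec = sens_spec_at_threshold(probs, labels, t)
    print(f"threshold={t:.1f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```

Lowering the threshold raises sensitivity at the cost of specificity, and vice versa, which is how a bedside operator could re-weight the model toward ruling out versus ruling in a B line pattern.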


Subject(s)
Critical Illness, Deep Learning, Humans, Prospective Studies, Critical Illness/therapy, Lung/diagnostic imaging, Ultrasonography/methods, Intensive Care Units
2.
Comput Biol Med ; 148: 105953, 2022 09.
Article in English | MEDLINE | ID: mdl-35985186

ABSTRACT

Pneumothorax is a potentially life-threatening condition that can be rapidly and accurately assessed via the lung sliding artefact generated using lung ultrasound (LUS). Access to LUS is limited by user dependence and a shortage of training. Image classification using deep learning methods can automate LUS interpretation but has not been thoroughly studied for lung sliding. Using a labelled LUS dataset from 2 academic hospitals, clinical B-mode (also known as brightness or two-dimensional mode) videos featuring both the presence and absence of lung sliding were transformed into motion (M) mode images. These images were subsequently used to train a deep neural network binary classifier that was evaluated using a holdout set comprising 15% of the total data. Grad-CAM explanations were examined. Our binary classifier, using the EfficientNetB0 architecture, was trained on 2535 LUS clips from 614 patients. When evaluated on a test set uninvolved in training (540 clips from 124 patients), the model performed with a sensitivity of 93.5%, specificity of 87.3%, and an area under the receiver operating characteristic curve (AUC) of 0.973. Grad-CAM explanations confirmed the model's focus on relevant regions of the M-mode images. Our solution accurately distinguishes between the presence and absence of lung sliding artefacts on LUS.
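The B-mode-to-M-mode transformation described above amounts to sampling one scan line from every frame of the clip and stacking the samples over time. The sketch below shows the basic operation on a synthetic clip; the midline-column default and array shapes are illustrative assumptions, not the paper's exact extraction strategy.

```python
import numpy as np

def bmode_to_mmode(clip, column=None):
    """Build an M-mode image from a grayscale B-mode clip.

    clip: array of shape (frames, height, width).
    One image column (scan line) is taken from every frame and the
    samples are stacked side by side, so tissue motion over time shows
    up as horizontal texture: depth on the y-axis, time on the x-axis.
    """
    frames, height, width = clip.shape
    if column is None:
        column = width // 2  # assumption: midline scan line
    # clip[:, :, column] has shape (frames, height); transpose so each
    # frame contributes one column of the resulting (height, frames) image.
    return clip[:, :, column].T

# Hypothetical 30-frame, 64x64 clip.
clip = np.random.rand(30, 64, 64)
mmode = bmode_to_mmode(clip)
print(mmode.shape)  # (64, 30)
```

In an intact pleura, the moving pleural line produces the granular "seashore" texture below the pleura in such an image, whereas absent lung sliding yields parallel horizontal lines; this is the visual difference the classifier learns to detect.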


Subject(s)
Deep Learning, Pneumothorax, Artifacts, Humans, Lung, Ultrasonography