Results 1 - 8 of 8
1.
Diagnostics (Basel) ; 14(11)2024 May 22.
Article in English | MEDLINE | ID: mdl-38893608

ABSTRACT

Deep learning (DL) models for medical image classification frequently struggle to generalize to data from outside institutions. Additional clinical data are also rarely collected to comprehensively assess and understand model performance among subgroups. Following the development of a single-center model to identify the lung sliding artifact on lung ultrasound (LUS), we pursued a validation strategy using external LUS data. As annotated LUS data are relatively scarce compared to other medical imaging data, we adopted a novel technique to optimize the use of limited external data to improve model generalizability. Externally acquired LUS data from three tertiary care centers, totaling 641 clips from 238 patients, were used to assess the baseline generalizability of our lung sliding model. We then employed our novel Threshold-Aware Accumulative Fine-Tuning (TAAFT) method to fine-tune the baseline model and determine the minimum amount of data required to achieve predefined performance goals. A subgroup analysis was also performed, and Grad-CAM++ explanations were examined. The final model was fine-tuned on one-third of the external dataset to achieve 0.917 sensitivity, 0.817 specificity, and 0.920 area under the receiver operating characteristic curve (AUC) on the external validation dataset, exceeding our predefined performance goals. Subgroup analyses identified the LUS characteristics that most challenged the model's performance. Grad-CAM++ saliency maps highlighted clinically relevant regions on M-mode images. We report a multicenter study that exploits limited available external data to improve the generalizability and performance of our lung sliding model while identifying poorly performing subgroups to inform future iterative improvements. This approach may contribute to efficiencies for DL researchers working with smaller quantities of external validation data.
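The accumulative idea behind Threshold-Aware Accumulative Fine-Tuning, growing the fine-tuning subset in fixed increments until predefined performance goals are met, can be sketched as a simple search loop. The step size, metric names, and the mock evaluator below are illustrative assumptions, not the study's implementation:

```python
def taaft_minimum_fraction(evaluate, goals, step=1/6, max_fraction=1.0):
    """Accumulatively grow the fine-tuning set until predefined goals are met.

    evaluate(fraction) -> dict of metrics after fine-tuning on that fraction
    goals              -> dict mapping metric name to minimum acceptable value
    Returns the smallest data fraction meeting every goal, or None.
    """
    fraction = step
    while fraction <= max_fraction + 1e-9:
        metrics = evaluate(fraction)
        if all(metrics.get(name, 0.0) >= floor for name, floor in goals.items()):
            return round(fraction, 6)
        fraction += step
    return None

# Toy stand-in for "fine-tune on this fraction of external data, then
# evaluate on the external validation set"; numbers are made up.
def mock_evaluate(fraction):
    return {"sensitivity": 0.80 + 0.3 * fraction,
            "specificity": 0.70 + 0.3 * fraction}

needed = taaft_minimum_fraction(mock_evaluate,
                                {"sensitivity": 0.89, "specificity": 0.79})
```

With the toy evaluator above, one-sixth of the data falls short and one-third suffices, so `needed` comes back as roughly 0.333.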

2.
Lab Chip ; 23(14): 3245-3257, 2023 07 12.
Article in English | MEDLINE | ID: mdl-37350658

ABSTRACT

The requirement for rapid, in-field detection of cyanotoxins in water resources necessitates the development of an easy-to-use, miniaturized system for their detection. We present a novel bead-based, competitive fluorescence assay for multiplexed detection of two types of toxins: microcystin-LR (MC-LR) and okadaic acid (OA). To automate the detection process, a reusable microfluidic device, termed the toxin-chip, was designed and validated. The toxin-chip consists of a micromixer, where the target toxins are efficiently mixed with a reagent solution, and a detection chamber for magnetic retention of beads for downstream analysis. Quantum dots (QDs) were used as the reporter molecules to enhance the sensitivity of the assay; the fluorescence signal emitted by the QDs is inversely proportional to the amount of toxin in the solution. An image analysis program was also developed to further automate the detection and analysis steps. The two toxins were simultaneously analyzed on a single microfluidic chip, and the device exhibited low detection limits of 10⁻⁴ µg mL⁻¹ for MC-LR and 4 × 10⁻⁵ µg mL⁻¹ for OA. The bead-based, competitive assay also showed remarkable chemical specificity against potentially interfering toxins. We further validated the device performance using natural lake water samples from Sunfish Lake in Waterloo. The toxin-chip holds promise as a versatile and simple quantification tool for cyanotoxin detection, with the potential to detect additional toxins.
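The competitive readout described above, where QD fluorescence falls as toxin concentration rises, implies a calibration-curve inversion step somewhere in the analysis pipeline. A minimal sketch, assuming hypothetical calibration points and simple linear interpolation in log concentration rather than whatever fit the authors actually used:

```python
import math

# Hypothetical calibration points: fluorescence (a.u.) at known MC-LR
# concentrations (µg/mL). In a competitive assay, signal drops as toxin rises.
calibration = [(1e-5, 950.0), (1e-4, 700.0), (1e-3, 400.0), (1e-2, 150.0)]

def concentration_from_signal(signal, points=calibration):
    """Invert the calibration curve by linear interpolation in log10(conc)."""
    pts = sorted(points)                       # ascending concentration
    logs = [math.log10(c) for c, _ in pts]
    sigs = [s for _, s in pts]                 # descending signal
    if signal >= sigs[0]:
        return pts[0][0]                       # clamp below the curve's range
    if signal <= sigs[-1]:
        return pts[-1][0]                      # clamp above the curve's range
    for i in range(len(sigs) - 1):
        hi, lo = sigs[i], sigs[i + 1]
        if lo <= signal <= hi:
            t = (hi - signal) / (hi - lo)      # fractional position in segment
            return 10 ** (logs[i] + t * (logs[i + 1] - logs[i]))

conc = concentration_from_signal(550.0)        # between the middle two points
```

A signal of 550 a.u. sits halfway between the 10⁻⁴ and 10⁻³ µg mL⁻¹ points, so the interpolated estimate is 10⁻³·⁵ µg mL⁻¹.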


Subject(s)
Marine Toxins; Microfluidics; Water Pollutants, Chemical; Water Pollutants, Chemical/analysis; Okadaic Acid/analysis; Marine Toxins/analysis
3.
Crit Care Med ; 51(2): 301-309, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36661454

ABSTRACT

OBJECTIVES: To evaluate the accuracy of a bedside, real-time deployment of a deep learning (DL) model capable of distinguishing between normal (A line pattern) and abnormal (B line pattern) lung parenchyma on lung ultrasound (LUS) in critically ill patients. DESIGN: Prospective, observational study evaluating the performance of a previously trained LUS DL model. Enrolled patients received an LUS examination with simultaneous DL model predictions using a portable device. Clip-level model predictions were analyzed and compared with blinded expert review for the A versus B line pattern. Four prediction thresholding approaches were applied to maximize model sensitivity and specificity at the bedside. SETTING: Academic ICU. PATIENTS: One hundred critically ill patients admitted to the ICU, receiving oxygen therapy, and eligible for respiratory imaging were included. Patients who were unstable or could not undergo an LUS examination were excluded. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: A total of 100 unique ICU patients (400 clips) were enrolled from two tertiary care sites. Fifty-six patients were mechanically ventilated. When compared with gold-standard expert annotation, real-time inference yielded an accuracy of 95%, sensitivity of 93%, and specificity of 96% for identification of the B line pattern. Varying the prediction threshold showed that real-time modification of sensitivity and specificity according to clinical priorities is possible. CONCLUSIONS: A previously validated DL classification model performs equally well in real time at the bedside when deployed on a portable device. As the first study to test the feasibility and performance of a DL classification model for LUS in a dedicated ICU environment, our results justify further inquiry into the impact of incorporating real-time automation of medical imaging into the care of the critically ill.
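The prediction-thresholding step can be illustrated by choosing the decision threshold on the model's output probabilities that best serves a clinical priority, for example the highest threshold that still meets a sensitivity floor. The selection rule and toy data below are illustrative; the study's four thresholding approaches are not specified in the abstract:

```python
def sens_spec(probs, labels, threshold):
    """Sensitivity and specificity of a probability classifier at a threshold."""
    tp = sum(p >= threshold and y == 1 for p, y in zip(probs, labels))
    fn = sum(p < threshold and y == 1 for p, y in zip(probs, labels))
    tn = sum(p < threshold and y == 0 for p, y in zip(probs, labels))
    fp = sum(p >= threshold and y == 0 for p, y in zip(probs, labels))
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

def pick_threshold(probs, labels, min_sensitivity):
    """Highest threshold whose sensitivity still meets the clinical floor
    (higher thresholds generally favor specificity)."""
    for t in sorted(set(probs), reverse=True):
        s, _ = sens_spec(probs, labels, t)
        if s >= min_sensitivity:
            return t
    return None

# Toy clip-level probabilities and expert labels (1 = B line pattern).
probs = [0.9, 0.8, 0.7, 0.4, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0]
chosen = pick_threshold(probs, labels, min_sensitivity=0.9)
```

On this toy data the floor forces the threshold down to 0.4, which captures all three positives at the cost of one false positive.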


Subject(s)
Critical Illness; Deep Learning; Humans; Prospective Studies; Critical Illness/therapy; Lung/diagnostic imaging; Ultrasonography/methods; Intensive Care Units
4.
Diagnostics (Basel) ; 12(10)2022 Sep 28.
Article in English | MEDLINE | ID: mdl-36292042

ABSTRACT

BACKGROUND: Annotating large medical imaging datasets is an arduous and expensive task, especially when the datasets in question are not organized according to deep learning goals. Here, we propose a method that exploits the hierarchical organization of annotation tasks to optimize efficiency. METHODS: We trained a machine learning model to accurately distinguish between two classes of lung ultrasound (LUS) views using 2908 clips from a larger dataset. Partitioning the remaining dataset by view would reduce downstream labelling efforts by enabling annotators to focus on annotating the pathological features specific to each view. RESULTS: In a sample view-specific annotation task, we found that automatically partitioning a 780-clip dataset by view saved 42 minutes of manual annotation time and resulted in 55 ± 6 additional relevant labels per hour. CONCLUSIONS: Automatic partitioning of an LUS dataset by view significantly increases annotator efficiency, resulting in higher throughput of labels relevant to the annotation task at hand. The strategy described in this work can be applied to other hierarchical annotation schemes.
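The partitioning step amounts to routing each clip to a view-specific annotation queue based on the view classifier's confidence. A minimal sketch; the queue names, confidence bands, and stub scores are illustrative assumptions, not the study's labels or thresholds:

```python
from collections import defaultdict

def partition_by_view(clips, predict_view, high=0.8, low=0.2):
    """Route clips to view-specific annotation queues.

    predict_view(clip) -> probability the clip shows one view class
    (named 'parenchymal' vs. 'pleural' here purely for illustration).
    Low-confidence clips land in a manual-review queue instead.
    """
    queues = defaultdict(list)
    for clip in clips:
        p = predict_view(clip)
        if p >= high:
            queues["parenchymal"].append(clip)
        elif p <= low:
            queues["pleural"].append(clip)
        else:
            queues["manual_review"].append(clip)
    return queues

# Stub classifier scores standing in for a trained view model.
scores = {"clip01": 0.95, "clip02": 0.05, "clip03": 0.55}
queues = partition_by_view(scores, scores.get)
```

Annotators then pull only from the queue matching their current task, which is where the reported throughput gain comes from.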

5.
Comput Biol Med ; 148: 105953, 2022 09.
Article in English | MEDLINE | ID: mdl-35985186

ABSTRACT

Pneumothorax is a potentially life-threatening condition that can be rapidly and accurately assessed via the lung sliding artefact generated on lung ultrasound (LUS). Access to LUS is limited by user dependence and a shortage of training. Image classification using deep learning can automate LUS interpretation but has not been thoroughly studied for lung sliding. Using a labelled LUS dataset from two academic hospitals, clinical B-mode (also known as brightness or two-dimensional mode) videos featuring both the presence and absence of lung sliding were transformed into motion (M) mode images. These images were subsequently used to train a deep neural network binary classifier that was evaluated using a holdout set comprising 15% of the total data, and Grad-CAM explanations were examined. Our binary classifier, based on the EfficientNetB0 architecture, was trained using 2535 LUS clips from 614 patients. When evaluated on a test set uninvolved in training (540 clips from 124 patients), the model achieved a sensitivity of 93.5%, a specificity of 87.3%, and an area under the receiver operating characteristic curve (AUC) of 0.973. Grad-CAM explanations confirmed the model's focus on relevant regions of the M-mode images. Our solution accurately distinguishes between the presence and absence of the lung sliding artefact on LUS.
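The B-mode-to-M-mode transformation amounts to sampling a fixed scanline in every frame of the clip and stacking the samples over time. A minimal sketch, assuming a grayscale clip array and a single center column; the paper's actual scanline selection strategy is not described in the abstract:

```python
import numpy as np

def mmode_from_bmode(frames, column=None):
    """Build an M-mode image from a B-mode clip.

    frames : array of shape (time, height, width), grayscale B-mode frames
    column : index of the scanline to sample (defaults to the center column)
    Returns an array of shape (height, time): depth on the vertical axis,
    time on the horizontal axis, as in a conventional M-mode display.
    """
    frames = np.asarray(frames)
    t, h, w = frames.shape
    if column is None:
        column = w // 2
    return frames[:, :, column].T   # (time, height) -> (height, time)

# Synthetic 30-frame clip of 64x48 pixels standing in for a real LUS clip.
rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(30, 64, 48))
mmode = mmode_from_bmode(clip)
```

Lung sliding then appears (or fails to appear) as a characteristic texture change below the pleural line in the resulting depth-versus-time image.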


Subject(s)
Deep Learning; Pneumothorax; Artifacts; Humans; Lung; Ultrasonography
6.
Diagnostics (Basel) ; 11(11)2021 Nov 04.
Article in English | MEDLINE | ID: mdl-34829396

ABSTRACT

Lung ultrasound (LUS) is an accurate thoracic imaging technique distinguished by its handheld size, low cost, and lack of radiation. User dependence and poor access to training have limited the impact and dissemination of LUS outside of acute care hospital environments. Automated interpretation of LUS using deep learning can overcome these barriers by increasing accuracy while allowing point-of-care use by non-experts. In this multicenter study, we seek to automate the clinically vital distinction between the A line (normal parenchyma) and B line (abnormal parenchyma) patterns on LUS by training a customized neural network on 272,891 labelled LUS images. After external validation on 23,393 frames, pragmatic clinical application at the clip level was performed on 1162 videos. The trained classifier demonstrated an area under the receiver operating characteristic curve (AUC) of 0.96 (±0.02) through 10-fold cross-validation on local frames and an AUC of 0.93 on the external validation dataset. Clip-level inference yielded sensitivities and specificities of 90% and 92% (local) and 83% and 82% (external), respectively, for detecting the B line pattern. This study demonstrates accurate deep-learning-enabled discrimination between normal and abnormal lung parenchyma on ultrasound frames while delivering diagnostically important sensitivity and specificity at the video clip level.
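Clip-level inference from a frame-level classifier requires an aggregation rule over per-frame predictions. One common choice, sketched below, calls a clip positive when enough of its frames are individually positive; both thresholds are illustrative, and the study's exact clip-level rule is not specified in the abstract:

```python
def clip_prediction(frame_probs, frame_threshold=0.5, clip_fraction=0.5):
    """Aggregate frame-level B line probabilities into one clip-level call.

    A clip is labelled 'B line' when at least clip_fraction of its frames
    individually exceed frame_threshold; otherwise it is labelled 'A line'.
    """
    if not frame_probs:
        raise ValueError("empty clip")
    positive = sum(p >= frame_threshold for p in frame_probs)
    return "B line" if positive / len(frame_probs) >= clip_fraction else "A line"
```

Tuning `frame_threshold` and `clip_fraction` independently is one way clip-level sensitivity and specificity can end up differing from the frame-level figures.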

7.
Sci Rep ; 8(1): 9055, 2018 06 13.
Article in English | MEDLINE | ID: mdl-29899430

ABSTRACT

We present a novel imaging-driven technique with an integrated fluorescence signature that enables automated enumeration of two species of cyanobacteria and an alga of somewhat similar morphology to one of the cyanobacteria. It demonstrates proof of concept that highly accurate, imaging-based, rapid water quality analysis can be performed with conventional equipment available in typical water quality laboratories, a capability that is not currently available. The results presented herein demonstrate that the developed method identifies and enumerates cyanobacterial cells at a level equivalent to or better than that achieved using standard manual microscopic enumeration techniques, in less time and with significantly fewer resources. When compared with indirect measurement methods, the proposed method provides better accuracy at both low and high cell concentrations; it extends the detection range for cell enumeration while maintaining accuracy and increasing enumeration speed. The developed method not only accurately estimates cell concentrations but also reliably distinguishes between cells of Anabaena flos-aquae, Microcystis aeruginosa, and Ankistrodesmus in mixed cultures by taking advantage of the additional contrast between the target cell and complex background gained under fluorescent light. Thus, the proposed image-driven approach offers promise as a robust and cost-effective tool for identifying and enumerating microscopic cells based on their unique morphological features.
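At its core, automated enumeration of segmented cells reduces to counting connected components in a binary mask of the micrograph. A deliberately minimal, pure-Python stand-in; the actual method also exploits morphology and fluorescence contrast for species identification, which this sketch omits:

```python
def count_cells(mask):
    """Count connected components (4-connectivity) in a binary mask,
    a minimal stand-in for enumerating segmented cells in a micrograph."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1                      # new, unvisited component
                stack = [(i, j)]
                seen[i][j] = True
                while stack:                    # flood-fill the component
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

# Toy mask with two separate "cells".
mask = [
    [1, 1, 0, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0],
]
```

Dividing the resulting count by the imaged sample volume yields the cell concentration estimate.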


Subject(s)
Anabaena/cytology; Chlorophyceae/cytology; Fluorescence; Microcystis/cytology; Anabaena/chemistry; Anabaena/growth & development; Chlorophyceae/chemistry; Chlorophyceae/growth & development; Cost-Benefit Analysis; Microbiological Techniques/economics; Microbiological Techniques/methods; Microcystis/chemistry; Microcystis/growth & development; Reproducibility of Results
8.
Sci Rep ; 6: 28665, 2016 06 27.
Article in English | MEDLINE | ID: mdl-27346434

ABSTRACT

The simultaneous capture of imaging data at multiple wavelengths across the electromagnetic spectrum is highly challenging, requiring complex and costly multispectral imaging devices. In this study, we investigate the feasibility of simultaneous multispectral imaging using conventional image sensors with color filter arrays via a novel comprehensive framework for numerical demultiplexing of the color image sensor measurements. A numerical forward model characterizing the formation of sensor measurements from the light spectra hitting the sensor is constructed from a comprehensive spectral characterization of the sensor. A numerical demultiplexer is then learned via non-linear random forest modeling based on the forward model. Given the learned numerical demultiplexer, one can demultiplex simultaneously acquired color image sensor measurements into reflectance intensities at discrete, selectable wavelengths, yielding a higher-resolution reflectance spectrum. Experimental results demonstrate the feasibility of such a method for simultaneous multispectral imaging.
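The forward-model-then-inverse structure can be sketched end to end: simulate sensor measurements from spectra via assumed spectral sensitivities, then learn the inverse mapping with a random forest. Everything below is an assumption for illustration (Gaussian R/G/B sensitivities, a 16-band grid, randomly generated smooth spectra); the study built its forward model from a measured spectral characterization of a real sensor:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 700, 16)   # nm, discrete reconstruction grid

# Hypothetical forward model: Gaussian spectral sensitivities for R, G, B.
def sensor_response(spectra):
    centers, widths = [610, 540, 460], [40, 40, 40]
    bands = np.stack([np.exp(-((wavelengths - c) / s) ** 2)
                      for c, s in zip(centers, widths)])
    return spectra @ bands.T              # (n, 16) spectra -> (n, 3) RGB

# Smooth random training spectra and their simulated sensor measurements.
spectra = np.clip(rng.normal(0.5, 0.2, (2000, 16)), 0, 1)
kernel = np.array([0.25, 0.5, 0.25])      # light smoothing along wavelength
spectra = np.apply_along_axis(
    lambda s: np.convolve(s, kernel, mode="same"), 1, spectra)
X, Y = sensor_response(spectra), spectra

# Learn the numerical demultiplexer: RGB measurements -> reflectance spectrum.
demux = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, Y)
recovered = demux.predict(sensor_response(spectra[:5]))
```

Note that mapping three measurements to sixteen bands is underdetermined, so the learned demultiplexer leans on the statistical structure of the training spectra; this toy fit is only a shape-level illustration of the pipeline.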


Subject(s)
Color; Image Processing, Computer-Assisted; Models, Theoretical