Results 1 - 6 of 6
1.
Ultrasound Med Biol; 50(6): 825-832, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38423896

ABSTRACT

OBJECTIVE: B-lines assessed by lung ultrasound (LUS) outperform physical examination, chest radiography, and biomarkers for the diagnosis of acute heart failure (AHF) in the emergency setting. However, the use of LUS is limited to trained professionals and suffers from interpretation variability. Our objective was to use transfer learning to create AI-enabled software that can help novice users by automating LUS B-line interpretation. METHODS: An observational AHF LUS study provided standardized cine clips for AI model development and evaluation. A total of 49,952 LUS frames from 30 patients were hand scored and used to train a convolutional neural network (CNN) to interpret B-lines at the frame level. A random, independent evaluation set of 476 LUS clips from 60 unique patients was used to assess model performance. The AI models scored the clips on both a binary scale and an ordinal 0-4 multiclass scale. RESULTS: A multiclassification AI algorithm had the best performance at the binary level when applied to the independent evaluation set, with an AUC of 0.967 (95% CI 0.965-0.970) for detecting pathologic conditions. Compared with a blinded expert reviewer, the 0-4 multiclassification AI algorithm had a linear weighted kappa of 0.839 (95% CI 0.804-0.871). CONCLUSIONS: The multiclassification AI algorithm is a robust, well-performing model for both binary and ordinal multiclass B-line evaluation. It has the potential to be integrated into clinical workflows to assist users with quantitative and objective B-line assessment in the evaluation of AHF.
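
The abstract describes frame-level CNN scoring that is aggregated into binary and ordinal 0-4 clip-level assessments. A minimal sketch of such a pipeline is given below; the ResNet-18 backbone, the worst-frame aggregation rule, and the binary cutoff are illustrative assumptions, not details reported in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

class FrameScorer(nn.Module):
    """Scores each LUS frame on a 0-4 B-line severity scale."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        backbone = models.resnet18(weights=None)  # backbone choice is an assumption
        backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
        self.backbone = backbone

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (num_frames, 3, H, W) -> per-frame class logits
        return self.backbone(frames)

def score_clip(model: FrameScorer, frames: torch.Tensor) -> tuple[int, int]:
    """Return (0-4 clip score, binary pathologic flag) for one cine clip."""
    model.eval()
    with torch.no_grad():
        frame_scores = model(frames).softmax(dim=1).argmax(dim=1)  # (num_frames,)
        clip_score = int(frame_scores.max())  # assumed rule: worst frame sets the clip score
        pathologic = int(clip_score >= 1)     # assumed binary cutoff, not from the paper
    return clip_score, pathologic
```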


Subject(s)
Heart Failure, Lung, Ultrasonography, Humans, Heart Failure/diagnostic imaging, Lung/diagnostic imaging, Ultrasonography/methods, Acute Disease, Male, Female, Aged, Middle Aged, Image Interpretation, Computer-Assisted/methods, Machine Learning
2.
Science; 384(6701): eadh9979, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38870291

ABSTRACT

Understanding cellular architectures and their connectivity is essential for interrogating system function and dysfunction. However, we lack technologies for mapping the multiscale details of individual cells and their connectivity in the human organ-scale system. We developed a platform that simultaneously extracts spatial, molecular, morphological, and connectivity information for individual cells from the same human brain. The platform includes three core elements: a vibrating microtome for ultraprecision slicing of large-scale tissues without losing cellular connectivity (MEGAtome), a polymer hydrogel-based tissue processing technology for multiplexed multiscale imaging of human organ-scale tissues (mELAST), and a computational pipeline for reconstructing three-dimensional connectivity across multiple brain slabs (UNSLICE). We applied this platform to analyze human Alzheimer's disease pathology at multiple scales and to demonstrate scalable neural connectivity mapping in the human brain.


Subject(s)
Alzheimer Disease, Brain, Molecular Imaging, Humans, Alzheimer Disease/diagnostic imaging, Brain/diagnostic imaging, Molecular Imaging/methods, Phenotype, Hydrogels/chemistry, Connectome
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 238-242, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085649

ABSTRACT

As advances in microscopy imaging provide an ever clearer window into the human brain, accurate reconstruction of neural connectivity can yield valuable insight into the relationship between brain structure and function. However, manual tracing by humans is slow and laborious and requires domain expertise. Automated methods are thus needed to enable rapid and accurate analysis at scale. In this paper, we explored deep neural networks for dense axon tracing and incorporated axon topological information into the loss function, with the goal of improving performance on both voxel-based segmentation and axon centerline detection. We evaluated three approaches using a modified 3D U-Net architecture trained on a mouse brain dataset imaged with light-sheet microscopy and achieved a 10% increase in axon tracing accuracy over previous methods. Furthermore, adding centerline awareness to the loss function outperformed the baseline approach across all metrics, including an 8% increase in Rand index.
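
The abstract reports that adding centerline awareness to the loss improved every metric, but does not give the loss itself. Below is a minimal sketch of one plausible formulation, assuming a soft-Dice voxel term combined with a Dice-style penalty restricted to precomputed centerline voxels; the blending weight alpha and the exact centerline term are illustrative assumptions, not the authors' formulation.

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss over a probability volume."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def centerline_aware_loss(logits: torch.Tensor,
                          target_mask: torch.Tensor,
                          centerline_mask: torch.Tensor,
                          alpha: float = 0.5) -> torch.Tensor:
    """Voxel segmentation loss plus a penalty for missing axon centerlines.

    logits, target_mask, centerline_mask: (B, 1, D, H, W) tensors, where
    centerline_mask marks skeletonized axon centerlines (assumed precomputed).
    """
    prob = torch.sigmoid(logits)
    seg_term = dice_loss(prob, target_mask)
    # Penalize low predicted probability along annotated centerline voxels.
    cl_term = dice_loss(prob * centerline_mask, centerline_mask)
    return (1.0 - alpha) * seg_term + alpha * cl_term
```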


Subject(s)
Algorithms, Imaging, Three-Dimensional, Animals, Axons, Brain/diagnostic imaging, Humans, Imaging, Three-Dimensional/methods, Mice, Neural Networks, Computer
4.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 1675-1681, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086232

ABSTRACT

Lung ultrasound (LUS) is gaining support as a diagnostic tool for the diagnosis and management of COVID-19 and a number of other lung pathologies. B-lines are a predominant feature in COVID-19; however, LUS requires a skilled clinician to interpret the findings. To facilitate interpretation, our main objective was to develop automated methods to classify B-lines as pathologic vs. normal. We developed transfer learning models based on ResNet networks to classify B-lines as pathologic (at least 3 B-lines per lung field) vs. normal using COVID-19 LUS data. Assessment of B-line severity on a 0-4 multi-class scale was also explored. For binary B-line classification at the frame level, all ResNet models pretrained on ImageNet yielded higher performance than the baseline non-pretrained ResNet-18; the pretrained ResNet-18 had the best equal error rate (EER) of 9.1%, versus 11.9% for the baseline. At the clip level, all pretrained network models resulted in better Cohen's kappa agreement (linear-weighted) and clip score accuracy, with the pretrained ResNet-18 having the best Cohen's kappa of 0.815 [95% CI: 0.804-0.826] and ResNet-101 the best clip scoring accuracy of 93.6%. Similar results were observed for multi-class scoring, where pretrained network models outperformed the baseline model. A class activation map is also presented to guide clinicians in interpreting LUS findings. Future work aims to further improve multi-class assessment of B-line severity with a more diverse LUS dataset.
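
A minimal sketch of the ImageNet-pretrained ResNet-18 transfer-learning setup described above for frame-level binary classification is shown below; the replaced classification head, optimizer, learning rate, and input size are assumptions rather than the authors' training configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_bline_classifier(pretrained: bool = True, num_classes: int = 2) -> nn.Module:
    """ResNet-18 with its final layer replaced for B-line classification."""
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.resnet18(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classification head
    return model

model = build_bline_classifier(pretrained=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer settings
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of LUS frames.
frames = torch.randn(8, 3, 224, 224)   # (batch, channels, H, W)
labels = torch.randint(0, 2, (8,))     # 0 = normal, 1 = pathologic
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
```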


Subject(s)
COVID-19, Deep Learning, COVID-19/diagnostic imaging, Humans, Lung/diagnostic imaging, Thorax, Ultrasonography
5.
Biosensors (Basel); 11(12), 2021 Dec 18.
Article in English | MEDLINE | ID: mdl-34940279

ABSTRACT

Hemorrhage is a leading cause of trauma death, particularly in prehospital environments when evacuation is delayed. Obtaining central vascular access to a deep artery or vein is important for administration of emergency drugs and analgesics, and rapid replacement of blood volume, as well as invasive sensing and emerging life-saving interventions. However, central access is normally performed by highly experienced critical care physicians in a hospital setting. We developed a handheld AI-enabled interventional device, AI-GUIDE (Artificial Intelligence Guided Ultrasound Interventional Device), capable of directing users with no ultrasound or interventional expertise to catheterize a deep blood vessel, with an initial focus on the femoral vein. AI-GUIDE integrates with widely available commercial portable ultrasound systems and guides a user in ultrasound probe localization, venous puncture-point localization, and needle insertion. The system performs vascular puncture robotically and incorporates a preloaded guidewire to facilitate the Seldinger technique of catheter insertion. Results from tissue-mimicking phantom and porcine studies under normotensive and hypotensive conditions provide evidence of the technique's robustness, with key performance metrics in a live porcine model including: a mean time to acquire femoral vein insertion point of 53 ± 36 s (5 users with varying experience, in 20 trials), a total time to insert catheter of 80 ± 30 s (1 user, in 6 trials), and a mean number of 1.1 (normotensive, 39 trials) and 1.3 (hypotensive, 55 trials) needle insertion attempts (1 user). These performance metrics in a porcine model are consistent with those for experienced medical providers performing central vascular access on humans in a hospital.


Subject(s)
Catheterization, Central Venous, Robotic Surgical Procedures, Ultrasonography, Interventional, Animals, Artificial Intelligence, Femoral Vein/diagnostic imaging, Humans, Swine
6.
Med Phys; 46(11): 4803-4815, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31408539

ABSTRACT

PURPOSE: In computed tomography (CT), miscalibrated or imperfect detector elements produce stripe artifacts in the sinogram. These stripe artifacts in Radon space are responsible for concentric ring artifacts in the reconstructed images. In this work, a novel optimization model is proposed to remove ring artifacts within an iterative reconstruction procedure. METHOD: In the proposed optimization model, a novel ring total variation (RTV) regularization is developed to penalize ring artifacts in the image domain. Moreover, to correct the sinogram, a new correcting vector is proposed to compensate for malfunctioning detectors in the projection domain. The optimization problem is solved using an alternating minimization scheme (AMS). In each iteration, the fidelity term together with the RTV regularization is solved using the alternating direction method of multipliers (ADMM) to find the image, and the correcting coefficient vector is then updated for certain detectors according to the obtained image. Because the sinogram and the image are updated simultaneously, the proposed method operates in both the image and sinogram domains. RESULTS: The proposed method is evaluated using both simulated and physical phantom datasets containing different ring artifact patterns. In the simulated datasets, the Shepp-Logan phantom, a real chest scan image, and a noisy low-contrast phantom are considered for performance evaluation. We compare the root mean square error (RMSE) and structural similarity (SSIM) results of our algorithm with the wavelet-Fourier sinogram filtering method of Munch et al., the ring artifact reduction method of Brun et al., and the TV-based ring correction method of Paleo and Mirone. The proposed method is also evaluated using a physical phantom dataset in which strong ring artifacts are manifest due to the miscalibration of a large number of detectors. The proposed method outperforms the competing methods in both qualitative and quantitative evaluations. CONCLUSION: The experimental results on both simulated and physical phantom datasets show that the proposed method achieves state-of-the-art ring artifact reduction performance in terms of RMSE, SSIM, and subjective visual quality.
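
The abstract does not give the objective function explicitly. Assuming the correcting vector acts multiplicatively on the measured sinogram, one plausible form of the joint image/sinogram model, with x the image, A the forward projector, p the measured sinogram, c the per-detector correcting vector, and lambda a regularization weight, is:

```latex
\min_{x,\,c}\;\; \frac{1}{2}\,\bigl\|\operatorname{diag}(c)\,p - A x\bigr\|_2^2 \;+\; \lambda\,\mathrm{RTV}(x)
```

The AMS described above would then alternate between an ADMM solve for x with c fixed and an update of c for the flagged detector channels with x fixed; whether the correction is multiplicative or additive is an assumption here.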


Subject(s)
Algorithms, Artifacts, Image Processing, Computer-Assisted/methods, Tomography, X-Ray Computed, Fourier Analysis, Phantoms, Imaging