Results 1 - 3 of 3
1.
Microcirculation; 29(6-7): e12770, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35611457

ABSTRACT

OBJECTIVE: Monitoring microcirculation and visualizing microvasculature are critical for providing diagnoses to medical professionals and for guiding clinical interventions. Ultrasound provides a medium for monitoring and visualization; however, the complex microscale geometry of the vasculature and the difficulty of quantifying perfusion pose challenges. Here, we studied established and state-of-the-art ultrasonic modalities (using six probes) to compare their detection of slow flow in small microvasculature. METHODS: Five ultrasonic modalities were studied: grayscale, color Doppler, power Doppler, superb microvascular imaging (SMI), and microflow imaging (MFI), using six linear probes across two ultrasound scanners. Image readability was blindly scored by radiologists and quantified for evaluation. Vasculature visualization was investigated both in vitro (resolution and flow characterization) and in vivo (fingertip microvasculature detection). RESULTS: SMI and MFI provided superior images compared with conventional ultrasound imaging modalities both in vitro and in vivo. The choice of probe made a significant difference in detectability. The slowest flow detected in vitro was 0.1885 ml/s, and small microvessels of the fingertip were visualized. CONCLUSIONS: Our data demonstrate that SMI and MFI used with vascular probes operating at higher frequencies provide resolutions acceptable for microvasculature visualization, paving the path for future development of ultrasound devices for microcirculation monitoring.


Subject(s)
Microvessels, Ultrasonography, Doppler, Microcirculation, Ultrasonography/methods, Microvessels/diagnostic imaging, Ultrasonography, Doppler/methods
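
A note on the reported detection limit in this abstract: the slowest flow detected in vitro is given as a volumetric rate (0.1885 ml/s). Converting that to a mean flow velocity only requires the cross-sectional area of the flow channel. The sketch below is a minimal illustration assuming a cylindrical phantom tube; the 2 mm diameter in the example is a hypothetical value, not taken from the abstract.

import math

def mean_velocity_cm_per_s(flow_ml_per_s: float, tube_diameter_mm: float) -> float:
    """Convert a volumetric flow rate (ml/s) to mean velocity (cm/s)
    for a cylindrical tube of the given inner diameter (mm).
    Since 1 ml = 1 cm^3, velocity = Q / A with A in cm^2."""
    radius_cm = (tube_diameter_mm / 10.0) / 2.0   # mm -> cm, diameter -> radius
    area_cm2 = math.pi * radius_cm ** 2           # cross-sectional area of the tube
    return flow_ml_per_s / area_cm2

# Reported slowest detected flow, with an assumed 2 mm phantom tube diameter.
print(f"{mean_velocity_cm_per_s(0.1885, 2.0):.1f} cm/s")   # ~6.0 cm/s under this assumption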
2.
Cardiovasc Digit Health J; 3(1): 2-13, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35265930

ABSTRACT

Background: Visualizing fibrosis on cardiac magnetic resonance (CMR) imaging with contrast enhancement (late gadolinium enhancement; LGE) is paramount in characterizing disease progression and identifying arrhythmia substrates. Segmentation and fibrosis quantification from LGE-CMR are labor-intensive, manual, and prone to interobserver variability. There is an unmet need for automated LGE-CMR image segmentation that ensures anatomical accuracy and seamless extraction of clinical features. Objective: This study aimed to develop a novel deep learning solution for analysis of contrast-enhanced CMR images that produces anatomically accurate myocardium and scar/fibrosis segmentations and uses these to calculate features of clinical interest. Methods: Data sources were 155 2-dimensional LGE-CMR patient scans (1124 slices) and 246 synthetic "LGE-like" scans (1360 slices) obtained from cine CMR using a novel style-transfer algorithm. We trained and tested a 3-stage neural network that identified the left ventricular (LV) region of interest (ROI), segmented the ROI into viable myocardium and regions of enhancement, and postprocessed the segmentation results to enforce anatomical constraints. The segmentations were used to directly compute clinical features, such as LV volume and scar burden. Results: Predicted LV and scar segmentations achieved 96% and 75% balanced accuracy, respectively, and Dice coefficients of 0.93 and 0.57 when compared with trained expert segmentations. The mean scar burden difference between manual and predicted segmentations was 2%. Conclusion: We developed and validated a deep neural network for automatic, anatomically accurate, expert-level LGE-CMR myocardium and scar/fibrosis segmentation, allowing direct calculation of clinical measures. Given the heterogeneity of the training set, our approach could be extended to multiple imaging modalities and patient pathologies.
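
The evaluation metrics cited in this abstract (Dice coefficient, balanced accuracy) and the scar-burden feature are simple functions of binary masks, so a short sketch can make the definitions concrete. This is an illustrative NumPy implementation under the standard definitions, not code from the study; the function and variable names are placeholders.

import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return float(2.0 * np.logical_and(pred, truth).sum() / denom) if denom else 1.0

def balanced_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Mean of sensitivity and specificity for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    sensitivity = tp / truth.sum()
    specificity = tn / (~truth).sum()
    return float(0.5 * (sensitivity + specificity))

def scar_burden(scar_mask: np.ndarray, myocardium_mask: np.ndarray) -> float:
    """Scar burden as the fraction of myocardial pixels labeled as scar."""
    return float(scar_mask.astype(bool).sum() / myocardium_mask.astype(bool).sum())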

3.
Front Surg; 9: 1040066, 2022.
Article in English | MEDLINE | ID: mdl-36532130

ABSTRACT

Objects accidentally left behind in the brain following neurosurgical procedures may lead to life-threatening health complications and invasive reoperation. One of the most commonly retained surgical items is the cotton ball, which absorbs blood to clear the surgeon's field of view yet in the process becomes visually indistinguishable from the brain parenchyma. With ultrasound imaging, however, the different acoustic properties of cotton and brain tissue make the two materials discernible. In this study, we created a fully automated foreign body object tracking algorithm that integrates into the clinical workflow to detect and localize retained cotton balls in the brain. This deep learning algorithm uses a custom convolutional neural network, achieves 99% accuracy, sensitivity, and specificity, and surpasses other comparable algorithms. Furthermore, the trained algorithm was implemented in web and smartphone applications able to detect a cotton ball in an uploaded ultrasound image in under half a second. This study also presents the first use of a foreign body object detection algorithm on real in-human datasets, showing its ability to prevent accidental foreign body retention in a translational setting.
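
The abstract above describes a custom convolutional neural network that flags a retained cotton ball in an ultrasound image, but the architecture itself is not specified there. The sketch below is only a generic binary image classifier in PyTorch to illustrate the kind of model and inference step involved; the layer sizes, 256x256 grayscale input, and 0.5 decision threshold are assumptions, not the authors' design.

import torch
import torch.nn as nn

class CottonBallClassifier(nn.Module):
    """Generic CNN for binary classification of single-channel ultrasound
    frames (cotton ball present vs. absent). Illustrative only; not the
    custom architecture described in the study."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)          # raw logit; apply sigmoid at inference

# Assumed inference step on a single preprocessed 256x256 grayscale frame.
model = CottonBallClassifier().eval()
with torch.no_grad():
    frame = torch.rand(1, 1, 256, 256)     # placeholder for an uploaded ultrasound image
    prob = torch.sigmoid(model(frame)).item()
    print("cotton ball detected" if prob > 0.5 else "no cotton ball detected")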
