Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38656835

ABSTRACT

Automated cardiac segmentation from two-dimensional (2D) echocardiographic images is a crucial step toward improving clinical diagnosis. Anatomical heterogeneity and inherent noise, however, present technical challenges and lower segmentation accuracy. The objective of this study is to propose a method for the automatic segmentation of the ventricular endocardium, the myocardium, and the left atrium, in order to accurately determine clinical indices. Specifically, we use the recently introduced pixel-to-pixel Generative Adversarial Network (Pix2Pix GAN) model, built from a PatchGAN backbone as the discriminator and a U-Net as the generator. The resulting model produces precisely segmented images, thanks to the U-Net's capacity for precise segmentation and the PatchGAN's fine-grained discrimination. For experimental validation, we use the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, which consists of echocardiographic images from 500 patients in 2-chamber (2CH) and 4-chamber (4CH) views at the end-diastolic (ED) and end-systolic (ES) phases, and we follow the same train-test splits as state-of-the-art studies on this dataset. Our results demonstrate that the proposed GAN-based technique improves segmentation performance for clinical and geometrical parameters compared to state-of-the-art methods. Across the ED and ES phases, the mean Dice values for the left ventricular endocardium reached 0.961 and 0.930 for 2CH, and 0.959 and 0.950 for 4CH, respectively. Furthermore, the average ejection fraction correlation and mean absolute error were 0.95 and 3.2 ml for 2CH, and 0.98 and 2.1 ml for 4CH, outperforming state-of-the-art results.
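To make the described architecture concrete, below is a minimal PyTorch sketch of a Pix2Pix-style training step with a PatchGAN discriminator. Several details are assumptions not stated in the abstract: a four-channel segmentation map (endocardium, myocardium, left atrium, background), an externally supplied U-Net generator gen, and the standard Pix2Pix L1 weighting (lambda = 100). It illustrates the general technique, not the authors' implementation.

import torch
import torch.nn as nn

class PatchGAN(nn.Module):
    """PatchGAN discriminator: judges (image, mask) pairs patch by patch."""
    def __init__(self, in_ch=1 + 4):  # echo frame + 4 assumed segmentation channels
        super().__init__()
        def block(i, o, norm=True):
            layers = [nn.Conv2d(i, o, 4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(o))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.net = nn.Sequential(
            *block(in_ch, 64, norm=False), *block(64, 128), *block(128, 256),
            nn.Conv2d(256, 1, 4, padding=1),  # grid of patch-wise real/fake logits
        )

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

def pix2pix_step(gen, disc, opt_g, opt_d, image, target_mask, lam=100.0):
    """One adversarial training step; gen is an externally defined U-Net."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake_mask = gen(image)
    # Discriminator update: real pairs -> 1, generated pairs -> 0.
    opt_d.zero_grad()
    real_logits = disc(image, target_mask)
    fake_logits = disc(image, fake_mask.detach())
    d_loss = 0.5 * (bce(real_logits, torch.ones_like(real_logits)) +
                    bce(fake_logits, torch.zeros_like(fake_logits)))
    d_loss.backward()
    opt_d.step()
    # Generator update: fool the discriminator while staying close to the ground truth.
    opt_g.zero_grad()
    fake_logits = disc(image, fake_mask)
    g_loss = bce(fake_logits, torch.ones_like(fake_logits)) + lam * l1(fake_mask, target_mask)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

The PatchGAN outputs a grid of patch-wise logits rather than a single scalar, which is what provides the fine-grained discrimination mentioned in the abstract.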

2.
IEEE Trans Med Imaging ; 43(5): 1690-1701, 2024 May.
Article in English | MEDLINE | ID: mdl-38145542

ABSTRACT

Ultrasound localization microscopy (ULM) allows for the generation of super-resolved (SR) images of the vasculature by precisely localizing intravenously injected microbubbles. Although SR images may be useful for diagnosing and treating patients, their use in the clinical context is limited by the need for prolonged acquisition times and high frame rates. The primary goal of our study is to relax the requirement of high frame rates to obtain SR images. To this end, we propose a new time-efficient ULM (TEULM) pipeline built on a cutting-edge interpolation method. More specifically, we employ Radial Basis Functions (RBFs) as interpolators to estimate the missing values in the two-dimensional (2D) spatio-temporal structures. To evaluate this strategy, we first mimic data acquisition at a reduced frame rate by applying a down-sampling factor (DS = 2, 4, 8, and 10) to high frame rate ULM data. Then, we up-sample the data to the original frame rate using the suggested interpolation to reconstruct the missing frames. Finally, using both the original high frame rate data and the interpolated data, we reconstruct SR images using the ULM framework steps. We evaluate the proposed TEULM on four in vivo datasets, a rat brain (dataset A), a rat kidney (dataset B), a rat tumor (dataset C), and a rat brain bolus (dataset D), interpolating at the in-phase and quadrature (IQ) level. Results demonstrate the effectiveness of TEULM in recovering vascular structures, even at a DS rate of 10 (corresponding to a frame rate below 100 Hz). In conclusion, the proposed technique successfully reconstructs accurate SR images while requiring frame rates one order of magnitude lower than standard ULM.
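As an illustration of the interpolation step, the sketch below up-samples a temporally down-sampled IQ stack with SciPy's RBFInterpolator, interpolating each pixel's real and imaginary parts along time. The thin-plate-spline kernel, the pixel-wise 1D interpolation, and the synthetic data are assumptions made for illustration; the paper's exact RBF configuration and 2D spatio-temporal treatment may differ.

import numpy as np
from scipy.interpolate import RBFInterpolator

def rbf_upsample_time(iq_coarse, t_coarse, t_full, kernel="thin_plate_spline"):
    """iq_coarse: (T_coarse, H, W) complex IQ frames kept after down-sampling.
    Returns (T_full, H, W) frames estimated on the original time grid."""
    T, H, W = iq_coarse.shape
    flat = iq_coarse.reshape(T, -1)  # one column of temporal samples per pixel
    interp_re = RBFInterpolator(t_coarse[:, None], flat.real, kernel=kernel)
    interp_im = RBFInterpolator(t_coarse[:, None], flat.imag, kernel=kernel)
    full = interp_re(t_full[:, None]) + 1j * interp_im(t_full[:, None])
    return full.reshape(len(t_full), H, W)

# Usage on synthetic data: mimic a down-sampling factor DS = 4 on a 1 kHz acquisition.
rng = np.random.default_rng(0)
t_full = np.arange(200) / 1000.0  # 200 frames acquired at 1 kHz
iq_full = rng.standard_normal((200, 32, 32)) + 1j * rng.standard_normal((200, 32, 32))
DS = 4
iq_hat = rbf_upsample_time(iq_full[::DS], t_full[::DS], t_full)  # (200, 32, 32)

On real ULM data, the interpolated stack would then feed the usual localization and tracking steps of the ULM framework.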


Subject(s)
Algorithms; Image Processing, Computer-Assisted; Animals; Rats; Image Processing, Computer-Assisted/methods; Microscopy, Acoustic/methods; Kidney/diagnostic imaging; Brain/diagnostic imaging; Brain/blood supply; Microbubbles; Microscopy/methods; Ultrasonography/methods
3.
Vasc Endovascular Surg ; 58(6): 645-650, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38335135

ABSTRACT

OBJECTIVE: Static 3-dimensional (3D) printing has become attractive for operative planning in cases involving difficult anatomy. An interactive (low-cost, fast) 3D print that allows deliberate surgical practice can be used to improve interventional simulation and planning. BACKGROUND: Endovascular treatment of complex aortic aneurysms is technically challenging, especially in cases of a narrow aortic lumen or significant aortic angulation (hostile anatomy). The risk of complications such as graft kinking and target vessel occlusion is difficult to assess based solely on traditional software measurement methods and remains highly dependent on surgeon skill and expertise. METHODS: For a patient with a juxtarenal AAA and hostile anatomy, a 3D printed model was constructed preoperatively from computed tomography images. Endovascular graft implantation in the 3D printed aorta with a standard T-Branch graft (Cook® Medical, Bloomington, IN, USA) was performed preoperatively in the simulation laboratory, enabling assessment of feasibility, optimized surgical planning, and intraoperative decision making. RESULTS: The 3D printed aortic model proved to be radio-opaque and allowed simulation of branched endovascular aortic repair (BEVAR). The assessment of intervention feasibility, as well as of optimal branch position and orientation, was found to be useful for surgeon confidence and for the actual intervention in the patient. There was remarkable agreement between the 3D printed model and both the CT and X-ray angiographic images. Although technical success was achieved as planned, a previously deployed renal stent caused unexpected difficulty in advancing the new renal stent, which was not observed in the 3D model simulation. CONCLUSION: 3D printed aortic models can be useful for determining feasibility, optimizing planning, and supporting intraoperative decision making in hostile anatomy, thereby improving outcomes. Although the models already offer satisfactory accuracy, further advancements could enhance their capability to replicate minor anatomical deformities and variations in tissue density.


Subject(s)
Aortic Aneurysm, Abdominal; Blood Vessel Prosthesis Implantation; Endovascular Procedures; Printing, Three-Dimensional; Humans; Aortic Aneurysm, Abdominal/surgery; Aortic Aneurysm, Abdominal/diagnostic imaging; Aortography; Blood Vessel Prosthesis; Blood Vessel Prosthesis Implantation/instrumentation; Clinical Decision-Making; Computed Tomography Angiography; Endovascular Procedures/instrumentation; Models, Cardiovascular; Patient-Specific Modeling; Predictive Value of Tests; Prosthesis Design; Stents; Surgery, Computer-Assisted; Treatment Outcome
4.
Comput Biol Med ; 169: 107885, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38141447

ABSTRACT

Since the outbreak of COVID-19, efforts have been made towards semi-quantitative analysis of lung ultrasound (LUS) data to assess the patient's condition. Several methods have been proposed in this regard, with a focus on frame-level analysis, which was then used to assess the condition at the video and prognostic levels. However, no extensive work has been done to analyze lung conditions directly at the video level. This study proposes a novel method for video-level scoring based on compressing LUS video data into a single image and automatically classifying it to assess the patient's condition. The method uses maximum-, mean-, and minimum-intensity-projection-based compression of the LUS video data over time. This preserves hyper- and hypo-echoic regions while compressing the video down to at most three images. The resulting images are then classified using a convolutional neural network (CNN). Finally, the worst score predicted among the images is assigned to the corresponding video. The results show that this compression technique achieves promising agreement at the prognostic level (81.62%), while the video-level agreement remains comparable with the state of the art (46.19%). In conclusion, the suggested method lays the foundation for LUS video compression, shifting from frame-level to direct video-level analysis of LUS data.
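A minimal sketch of the described compression-and-scoring idea follows. It assumes grayscale LUS frames in a (T, H, W) array, a 4-level severity scale (0-3), a stand-in CNN (the paper's classifier is not specified in the abstract), and reads "worst" as the highest predicted class.

import numpy as np
import torch

def project_video(video):
    """video: (T, H, W) grayscale LUS frames -> (3, H, W) max/mean/min projections over time."""
    return np.stack([video.max(axis=0), video.mean(axis=0), video.min(axis=0)])

# Stand-in classifier; the paper's CNN architecture is not specified here.
cnn = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(16, 4),  # assumed 4-level severity score (0-3)
)

def score_video(video, cnn):
    projections = torch.from_numpy(project_video(video)).float().unsqueeze(1)  # (3, 1, H, W)
    with torch.no_grad():
        scores = cnn(projections).argmax(dim=1)  # predicted score per projection image
    return int(scores.max())  # assign the worst (highest) score to the video

video = np.random.rand(60, 224, 224).astype(np.float32)  # synthetic 60-frame clip
print(score_video(video, cnn))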


Subject(s)
COVID-19; Data Compression; Humans; Lung/diagnostic imaging; Ultrasonography/methods; Neural Networks, Computer
5.
Radiol Artif Intell ; 6(2): e230147, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38381039

ABSTRACT

See also the commentary by Sitek in this issue. Supplemental material is available for this article.


Subject(s)
Pneumonia; Child; Humans; Zambia; Lung; Thorax
6.
Comput Biol Med ; 180: 109014, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39163826

ABSTRACT

Pneumonia is the leading cause of death among children around the world. According to the WHO, 740,180 children under the age of five died of pneumonia in 2019. Lung ultrasound (LUS) has been shown to be particularly useful for supporting the diagnosis of pneumonia in children and reducing mortality in resource-limited settings. The wide application of point-of-care ultrasound at the bedside is limited mainly by a lack of training for data acquisition and interpretation. Artificial intelligence can serve as a potential tool to automate and improve the LUS data interpretation process, which mainly involves analysis of hyper-echoic horizontal and vertical artifacts and hypo-echoic small to large consolidations. This paper presents the Fused Lung Ultrasound Encoding-based Transformer (FLUEnT), a novel pediatric LUS video scoring framework for detecting lung consolidations using fused LUS encodings. Frame-level embeddings from a variational autoencoder, features from a spatially attentive ResNet-18, and encoded patient information as metadata together form the fused encodings. These encodings are then passed to a transformer for binary classification of the presence or absence of consolidations in the video. Video-level analysis using the fused encodings resulted in a mean balanced accuracy of 89.3%, an average improvement of 4.7 percentage points over using these encodings individually. In conclusion, outperforming state-of-the-art models by an average margin of 8 percentage points, the proposed FLUEnT framework serves as a benchmark for detecting lung consolidations in LUS videos from pediatric pneumonia patients.
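The sketch below shows one plausible reading of the fusion and classification stage, with assumed feature widths (128-dimensional VAE embeddings, 512-dimensional ResNet-18 features, 8 metadata values), fusion by per-frame concatenation, and a CLS-token transformer encoder; it is not the published FLUEnT implementation.

import torch
import torch.nn as nn

class FusedVideoClassifier(nn.Module):
    def __init__(self, vae_dim=128, resnet_dim=512, meta_dim=8, d_model=256, n_layers=4):
        super().__init__()
        self.proj = nn.Linear(vae_dim + resnet_dim + meta_dim, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # classification token
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # binary: consolidation present or absent

    def forward(self, vae_emb, resnet_feat, metadata):
        # vae_emb: (B, T, vae_dim), resnet_feat: (B, T, resnet_dim), metadata: (B, meta_dim)
        meta = metadata.unsqueeze(1).expand(-1, vae_emb.size(1), -1)  # repeat for every frame
        tokens = self.proj(torch.cat([vae_emb, resnet_feat, meta], dim=-1))
        tokens = torch.cat([self.cls.expand(tokens.size(0), -1, -1), tokens], dim=1)
        return self.head(self.encoder(tokens)[:, 0])  # logit taken from the CLS token

model = FusedVideoClassifier()
logit = model(torch.randn(2, 60, 128), torch.randn(2, 60, 512), torch.randn(2, 8))  # shape (2, 1)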
