Results 1 - 2 of 2
1.
J Cardiovasc Magn Reson; 25(1): 15, 2023 Feb 27.
Article in English | MEDLINE | ID: mdl-36849960

ABSTRACT

BACKGROUND: Cardiac shape modeling is a useful computational tool that has provided quantitative insights into the mechanisms underlying dysfunction in heart disease. The manual input and time required to build cardiac shape models, however, limit their clinical utility. Here we present an end-to-end pipeline that uses deep learning for automated view classification, slice selection, phase selection, anatomical landmark localization, and myocardial image segmentation to automatically generate three-dimensional, biventricular shape models. With this approach, we aim to make cardiac shape modeling a more robust and broadly applicable tool with processing times consistent with clinical workflows.

METHODS: Cardiovascular magnetic resonance (CMR) images from a cohort of 123 patients with repaired tetralogy of Fallot (rTOF) from two internal sites were used to train and validate each step in the automated pipeline. The complete pipeline was tested on CMR images from 12 rTOF patients from an internal site and 18 rTOF patients from an external site. Manually and automatically generated shape models from the test set were compared using Euclidean projection distances, global ventricular measurements, and atlas-based shape mode scores.

RESULTS: The mean absolute error (MAE) between manually and automatically generated shape models in the test set was similar to the voxel resolution of the original CMR images for both end-diastolic models (MAE = 1.9 ± 0.5 mm) and end-systolic models (MAE = 2.1 ± 0.7 mm). Global ventricular measurements computed from automated models were in good agreement with those computed from manual models. The average mean absolute difference in shape mode Z-score between manually and automatically generated models was 0.5 standard deviations over the first 20 modes of a reference statistical shape atlas.

CONCLUSIONS: Using deep learning, accurate three-dimensional, biventricular shape models can be created reliably. This fully automated end-to-end approach dramatically reduces the manual input required to create shape models, enabling rapid analysis of large-scale datasets and the potential to deploy statistical atlas-based analyses in point-of-care clinical settings. Training data and networks are available from cardiacatlas.org.
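A minimal PyTorch sketch of how such an end-to-end pipeline can be organized follows: classification stages that select the relevant views and cardiac phases, then a segmentation network whose labels would feed downstream landmark localization and shape-model fitting. The architectures, class sets, and tensor shapes here are illustrative assumptions, not the authors' released networks (those are stated to be available from cardiacatlas.org).

```python
# Illustrative pipeline skeleton; toy architectures, not the published networks.
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    """Toy stand-in for the view/slice/phase classification stages."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )
    def forward(self, x):
        return self.net(x)

class SegmentationNet(nn.Module):
    """Toy stand-in for myocardial segmentation (e.g. background/LV/RV/myocardium)."""
    def __init__(self, n_labels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_labels, 1),
        )
    def forward(self, x):
        return self.net(x)

def run_pipeline(cine_stack: torch.Tensor):
    """cine_stack: (slices, frames, H, W) short-axis CMR cine series."""
    view_net = SliceClassifier(n_classes=3)   # assumed classes: SAX / LAX / other
    phase_net = SliceClassifier(n_classes=2)  # assumed classes: ED / ES
    seg_net = SegmentationNet()
    s, f, h, w = cine_stack.shape
    imgs = cine_stack.reshape(s * f, 1, h, w)
    views = view_net(imgs).argmax(dim=1)    # view classification
    phases = phase_net(imgs).argmax(dim=1)  # ED/ES phase selection
    labels = seg_net(imgs).argmax(dim=1)    # per-pixel tissue labels
    # Downstream steps (not shown): anatomical landmark localization and
    # fitting a biventricular template mesh to the segmented contours.
    return views, phases, labels

if __name__ == "__main__":
    stack = torch.randn(8, 25, 128, 128)  # synthetic stand-in for one CMR study
    v, p, m = run_pipeline(stack)
    print(v.shape, p.shape, m.shape)
```

In practice each stage would be trained separately on labeled CMR data, as the paper describes for its 123-patient training cohort; the sketch only shows how the stages chain together.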


Subject(s)
Deep Learning, Tetralogy of Fallot, Humans, Tetralogy of Fallot/diagnostic imaging, Tetralogy of Fallot/surgery, Predictive Value of Tests, Heart Ventricles, Diastole
2.
Phys Med Biol; 67(9), 2022 Apr 27.
Article in English | MEDLINE | ID: mdl-35395657

ABSTRACT

Objective. In clinical positron emission tomography (PET) imaging, quantification of radiotracer uptake in tumours is often performed using semi-quantitative measurements such as the standardised uptake value (SUV). For small objects, the accuracy of SUV estimates is limited by the noise properties of PET images and the partial volume effect. There is a need for methods that provide more accurate and reproducible quantification of radiotracer uptake.

Approach. In this work, we present a deep learning approach aimed at improving the quantification of lung tumour radiotracer uptake and tumour shape definition. Ground-truth images are generated by placing simulated tumours of different sizes and activity distributions in the left lung of an anthropomorphic phantom; these images are passed to an analytical simulator to produce realistic raw PET data, which are then reconstructed into PET images. The reconstructed PET images and the corresponding ground-truth images are used to train a 3D convolutional neural network.

Results. When tested on an unseen set of reconstructed PET phantom images, the network yields improved estimates of the corresponding ground truth. The same network is then applied to reconstructed PET data generated with different point spread functions. Overall, the network recovers better-defined tumour shapes and improved estimates of tumour maximum and median activities.

Significance. Our results suggest that the proposed approach, trained on data simulated with one scanner geometry, has the potential to restore PET data acquired with different scanners.
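As an illustration of this supervised restoration setup, here is a minimal PyTorch sketch: a small residual 3D CNN trained with a voxel-wise loss to map reconstructed PET volumes to their simulated ground-truth activity images. The architecture, loss, and training details are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative 3D CNN restoration sketch; hyperparameters are assumptions.
import torch
import torch.nn as nn

class Restore3DCNN(nn.Module):
    """Small residual 3D CNN; input and output are single-channel volumes."""
    def __init__(self, width: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv3d(width, 1, 3, padding=1),
        )
    def forward(self, x):
        # Residual formulation: predict a correction to the reconstructed
        # volume rather than regressing the whole volume from scratch.
        return x + self.body(x)

def train_step(model, optimiser, recon, truth):
    """One supervised step: reconstructed PET volume -> ground-truth activity."""
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(recon), truth)
    loss.backward()
    optimiser.step()
    return loss.item()

if __name__ == "__main__":
    model = Restore3DCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Synthetic stand-ins for (reconstructed, ground-truth) phantom pairs.
    recon = torch.rand(2, 1, 32, 32, 32)
    truth = torch.rand(2, 1, 32, 32, 32)
    print(train_step(model, opt, recon, truth))
```

The residual design is one common choice for restoration tasks; the paper does not specify its network in this abstract, so any resemblance to the authors' architecture is coincidental.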


Subject(s)
Deep Learning, Lung Neoplasms, Humans, Image Processing, Computer-Assisted/methods, Lung Neoplasms/diagnostic imaging, Phantoms, Imaging, Positron-Emission Tomography