Results 1 - 4 of 4
1.
Curr Cardiol Rep; 25(6): 535-542, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37115434

ABSTRACT

PURPOSE OF REVIEW: Imaging plays a crucial role in the therapy of ventricular tachycardia (VT). We offer an overview of the different methods and provide information on their use in a clinical setting.

RECENT FINDINGS: The use of imaging in VT has progressed recently. Intracardiac echocardiography facilitates catheter navigation and the targeting of moving intracardiac structures. Integration of pre-procedural CT or MRI allows the VT substrate to be targeted, with a major expected impact on the efficacy and efficiency of VT ablation. Advances in computational modeling may further enhance the performance of imaging, giving access to pre-operative simulation of VT. These advances in non-invasive diagnosis are increasingly being coupled with non-invasive approaches to therapy delivery. This review highlights the latest research on the use of imaging in VT procedures. Image-based strategies are progressively shifting from using images as an adjunct to electrophysiological techniques toward integrating imaging as a central element of the treatment strategy.


Subject(s)
Catheter Ablation; Tachycardia, Ventricular; Humans; Tachycardia, Ventricular/diagnostic imaging; Tachycardia, Ventricular/surgery; Arrhythmias, Cardiac; Heart; Heart Rate; Catheter Ablation/methods; Treatment Outcome
2.
Med Image Anal; 83: 102628, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36283200

ABSTRACT

Domain Adaptation (DA) has recently attracted strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets have mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large, multi-class benchmark for unsupervised cross-modality domain adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS itself and the cochleas. Currently, diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired, non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on the hrT2 scans provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; cochleas: 87.7%). All top-performing methods used an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained on these generated images together with the manual annotations provided for the source images.
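
The two-stage recipe used by the top teams lends itself to a compact illustration. Below is a minimal Python/PyTorch sketch, assuming a translator network already trained on the unpaired ceT1/hrT2 scans; the toy architectures, tensor shapes, and variable names are illustrative stand-ins, not any participant's actual code.

import torch
import torch.nn as nn

# Stage 1 stand-in: an image-to-image translator (e.g. CycleGAN-style)
# mapping source-domain ceT1 images to pseudo-hrT2 images. Assumed to
# be pre-trained on the unpaired training scans, hence frozen below.
class Translator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

# Stage 2 stand-in: a segmentation network predicting three classes
# (background, VS, cochlea), trained on pseudo-hrT2 images paired with
# the manual ceT1 annotations.
class Segmenter(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)

translator = Translator()
segmenter = Segmenter()
optimizer = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One training step on a (ceT1 image, label) batch from the annotated set.
cet1_batch = torch.randn(4, 1, 64, 64)     # placeholder scans
labels = torch.randint(0, 3, (4, 64, 64))  # placeholder annotations
with torch.no_grad():
    pseudo_hrt2 = translator(cet1_batch)   # ceT1 -> hrT2-like images
optimizer.zero_grad()
loss = loss_fn(segmenter(pseudo_hrt2), labels)
loss.backward()
optimizer.step()

Training the segmenter on translated images rather than on raw ceT1 scans is what bridges the intensity-distribution gap between the modalities at test time.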


Subject(s)
Neuroma, Acoustic; Humans; Neuroma, Acoustic/diagnostic imaging
3.
Med Image Anal; 81: 102528, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35834896

ABSTRACT

Accurate computation, analysis, and modeling of the ventricles and myocardium from medical images are important, especially for the diagnosis and treatment management of patients suffering from myocardial infarction (MI). Late gadolinium enhancement (LGE) cardiac magnetic resonance (CMR) provides an important protocol to visualize MI. However, compared with the other sequences, LGE CMR images with gold-standard labels are particularly limited. This paper presents selected results from the Multi-Sequence Cardiac MR (MS-CMR) Segmentation challenge, organized in conjunction with MICCAI 2019. The challenge offered a dataset of paired MS-CMR images, including auxiliary CMR sequences as well as LGE CMR, from 45 patients with cardiomyopathy. The aim was to develop new algorithms, as well as to benchmark existing ones, for LGE CMR segmentation, focusing on the myocardial wall of the left ventricle and the blood cavities of both ventricles. In addition, the paired MS-CMR images could enable algorithms to combine complementary information from the other sequences for ventricle segmentation of LGE CMR. Nine representative works were selected for evaluation and comparison, of which three are unsupervised domain adaptation (UDA) methods and the other six are supervised. The results showed that the average performance of the nine methods was comparable to the inter-observer variation. In particular, the top-ranking algorithms among both the supervised and UDA methods could generate reliable and robust segmentation results. The success of these methods was mainly attributed to the inclusion of the auxiliary sequences from the MS-CMR images, which provide important label information for the training of deep neural networks. The challenge continues as an ongoing resource; the gold-standard segmentations as well as the MS-CMR images of both the training and test data are available upon registration via its homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/mscmrseg/).
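
The attribution of the methods' success to the auxiliary sequences suggests a simple illustration: stack co-registered auxiliary sequences (e.g. bSSFP and T2) with the LGE image as input channels of a single segmentation network. The Python/PyTorch sketch below shows this fusion under the assumption that the sequences have already been spatially aligned; the toy network and shapes are illustrative, not any specific challenge entry.

import torch
import torch.nn as nn

# Toy multi-sequence segmenter: 3 input channels (LGE, bSSFP, T2) and
# 4 output classes (background, LV myocardium, LV blood pool, RV blood pool).
class MultiSequenceSegmenter(nn.Module):
    def __init__(self, n_sequences=3, n_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_sequences, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)

lge = torch.randn(2, 1, 96, 96)    # placeholder LGE slices
bssfp = torch.randn(2, 1, 96, 96)  # placeholder aligned bSSFP slices
t2 = torch.randn(2, 1, 96, 96)     # placeholder aligned T2 slices

# Channel-wise stacking lets the network draw label-relevant contrast
# from the auxiliary sequences where LGE alone is ambiguous.
x = torch.cat([lge, bssfp, t2], dim=1)  # (2, 3, 96, 96)
logits = MultiSequenceSegmenter()(x)    # (2, 4, 96, 96) class scores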


Subject(s)
Gadolinium; Myocardial Infarction; Benchmarking; Contrast Media; Heart; Humans; Magnetic Resonance Imaging/methods; Myocardial Infarction/diagnostic imaging; Myocardium/pathology
4.
Europace; 23(23 Suppl 1): i55-i62, 2021 Mar 4.
Article in English | MEDLINE | ID: mdl-33751073

ABSTRACT

AIMS: Electrocardiographic imaging (ECGI) is a promising tool for mapping the electrical activity of the heart non-invasively from body surface potentials (BSP). However, it remains challenging because the inverse problem to be solved is mathematically ill-posed. Novel approaches leveraging progress in artificial intelligence could alleviate these difficulties.

METHODS AND RESULTS: We propose a deep learning (DL) formulation of ECGI that learns the statistical relation between BSP and cardiac activation. The presented method is based on conditional variational autoencoders, a class of deep generative neural networks. To quantify the accuracy of this method, we simulated activation maps and BSP data on six cardiac anatomies. We evaluated the model by training it on five of the cardiac anatomies (5000 activation maps) and testing it on the remaining patient anatomy over 200 activation maps. Because the method is probabilistic, we predicted 10 distinct activation maps for each BSP input. The proposed method generates volumetric activation maps with good accuracy on the simulated data: the mean absolute error on this testing set is 9.40 ms, with a standard deviation of 2.16 ms.

CONCLUSION: The proposed formulation of ECGI makes it possible to naturally include imaging information in the estimation of cardiac electrical activity from BSP, and it takes into account the spatio-temporal correlations present in the data. We believe these features can help improve ECGI results.
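
To make the conditional-variational-autoencoder formulation concrete, here is a minimal Python/PyTorch sketch: the encoder compresses an activation map, conditioned on its BSP signal, into a latent Gaussian, and the decoder maps latent samples plus the same BSP condition back to activation maps, so several plausible maps can be drawn for one recording. All dimensions, layer sizes, and the flattened representation are illustrative assumptions, not the authors' architecture.

import torch
import torch.nn as nn

BSP_DIM, MAP_DIM, LATENT = 256, 512, 16  # hypothetical flattened sizes

class CVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(MAP_DIM + BSP_DIM, 128), nn.ReLU(),
        )
        self.to_mu = nn.Linear(128, LATENT)
        self.to_logvar = nn.Linear(128, LATENT)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT + BSP_DIM, 128), nn.ReLU(),
            nn.Linear(128, MAP_DIM),
        )

    def forward(self, act_map, bsp):
        # Encode the activation map conditioned on the BSP signal.
        h = self.encoder(torch.cat([act_map, bsp], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Decode the latent sample conditioned on the same BSP signal.
        return self.decoder(torch.cat([z, bsp], dim=-1)), mu, logvar

model = CVAE()
bsp = torch.randn(1, BSP_DIM)  # one body-surface potential recording

# At inference, draw several latent samples to obtain distinct plausible
# activation maps for the same BSP input (the paper draws 10 per recording).
with torch.no_grad():
    maps = [model.decoder(torch.cat([torch.randn(1, LATENT), bsp], dim=-1))
            for _ in range(10)]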


Subject(s)
Deep Learning; Artificial Intelligence; Body Surface Potential Mapping; Electrocardiography; Heart/diagnostic imaging; Humans