Results 1 - 2 of 2
1.
Interface Focus; 13(6): 20230043, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38106918

ABSTRACT

Modelling complex systems, like the human heart, has made great progress over the last decades. Patient-specific models, called 'digital twins', can aid in diagnosing arrhythmias and personalizing treatments. However, building highly accurate predictive heart models requires a delicate balance between mathematical complexity, parameterization from measurements and validation of predictions. Cardiac electrophysiology (EP) models range from complex biophysical models to simplified phenomenological models. Complex models are accurate but computationally intensive and challenging to parameterize, while simplified models are computationally efficient but less realistic. In this paper, we propose a hybrid approach that leverages deep learning to complete a simplified cardiac model from data. Our novel framework has two components, decomposing the dynamics into a physics-based term and a data-driven term. This construction allows our framework to learn from data of different complexity, while simultaneously estimating model parameters. First, using in silico data, we demonstrate that this framework can reproduce the complex dynamics of cardiac transmembrane potential even in the presence of noise in the data. Second, using ex vivo optical data of action potentials (APs), we demonstrate that our framework can identify key physical parameters for anatomical zones with different electrical properties, and can reproduce the AP wave characteristics obtained from various pacing locations. Our physics-based, data-driven approach may improve cardiac EP modelling by providing a robust biophysical tool for predictions.
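The decomposition described in the abstract — a physics-based term plus a learned data-driven correction — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the Aliev-Panfilov phenomenological model stands in for the simplified physics term (the paper does not specify this choice here), and a cubic polynomial in the transmembrane potential stands in for the learned neural correction. All parameter values are illustrative.

```python
import numpy as np

def aliev_panfilov_rhs(u, v, k=8.0, a=0.15, eps0=0.002, mu1=0.2, mu2=0.3):
    # Physics-based term: a simplified two-variable phenomenological model
    # of the cardiac action potential (u: transmembrane potential, v: recovery).
    du = k * u * (1 - u) * (u - a) - u * v
    eps = eps0 + mu1 * v / (u + mu2)
    dv = eps * (-v - k * u * (u - a - 1))
    return du, dv

def data_driven_term(u, weights):
    # Hypothetical learned correction: a fitted cubic in u as a stand-in
    # for the neural-network term described in the abstract.
    return weights[0] * u + weights[1] * u**2 + weights[2] * u**3

def simulate(u0=0.2, v0=0.0, dt=0.01, steps=5000, weights=(0.0, 0.0, 0.0)):
    # Forward-Euler integration of the hybrid dynamics:
    # du/dt = f_physics(u, v) + f_data(u)
    u, v = u0, v0
    trace = []
    for _ in range(steps):
        du, dv = aliev_panfilov_rhs(u, v)
        du += data_driven_term(u, weights)  # hybrid decomposition
        u += dt * du
        v += dt * dv
        trace.append(u)
    return trace
```

With zero correction weights the model reduces to the pure physics term; in the paper's framework the correction would be trained jointly with the physical parameters against measured or simulated potentials.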

2.
Med Image Anal; 83: 102628, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36283200

ABSTRACT

Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; Cochleas: 87.7%). 
All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
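The translate-then-segment recipe used by the top teams can be sketched on synthetic data. This is a deliberately minimal stand-in, not the challenge's actual networks: global moment matching plays the role of the image-to-image translation model, and a single intensity threshold plays the role of the segmentation network. The domains, shapes, and intensity values are all synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_domain(n, fg_mean, bg_mean, noise=0.05):
    # Synthetic single-channel "scans": a bright square structure on background.
    imgs, masks = [], []
    for _ in range(n):
        img = rng.normal(bg_mean, noise, (32, 32))
        mask = np.zeros((32, 32), bool)
        mask[8:24, 8:24] = True
        img[mask] = rng.normal(fg_mean, noise, mask.sum())
        imgs.append(img)
        masks.append(mask)
    return np.stack(imgs), np.stack(masks)

def translate(src_imgs, tgt_imgs):
    # Stand-in for the image-to-image translation network: match the
    # global mean/std of the unannotated target domain.
    s_mu, s_sd = src_imgs.mean(), src_imgs.std()
    t_mu, t_sd = tgt_imgs.mean(), tgt_imgs.std()
    return (src_imgs - s_mu) / s_sd * t_sd + t_mu

def fit_threshold(imgs, masks):
    # Stand-in "segmentation network": threshold midway between the
    # foreground and background intensity statistics of the training set.
    return (imgs[masks].mean() + imgs[~masks].mean()) / 2

def dice(pred, gt):
    return 2 * (pred & gt).sum() / (pred.sum() + gt.sum())

# Annotated source domain (ceT1-like) and unannotated target domain (hrT2-like)
src_imgs, src_masks = make_domain(20, fg_mean=1.0, bg_mean=0.2)
tgt_imgs, tgt_masks = make_domain(20, fg_mean=1.8, bg_mean=0.6)

# 1) Translate source images into pseudo-target-domain images.
pseudo_tgt = translate(src_imgs, tgt_imgs)
# 2) Train the segmenter on pseudo-target images with the *source* labels.
thr_adapted = fit_threshold(pseudo_tgt, src_masks)
# Baseline: train directly on source images, no adaptation.
thr_naive = fit_threshold(src_imgs, src_masks)

adapted_dice = dice(tgt_imgs > thr_adapted, tgt_masks)
naive_dice = dice(tgt_imgs > thr_naive, tgt_masks)
```

Even in this toy setting, training on translated images closes most of the domain gap, which mirrors why the image-to-image translation step dominated the challenge leaderboard.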


Subject(s)
Neuroma, Acoustic; Humans; Neuroma, Acoustic/diagnostic imaging