Results 1 - 4 of 4
1.
Med Image Comput Comput Assist Interv ; 2023: 448-458, 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-38655383

ABSTRACT

We introduce MHVAE, a deep hierarchical variational autoencoder (VAE) that synthesizes missing images from various modalities. Extending multi-modal VAEs with a hierarchical latent structure, we introduce a probabilistic formulation for fusing multi-modal images in a common latent representation while having the flexibility to handle incomplete image sets as input. Moreover, adversarial learning is employed to generate sharper images. Extensive experiments are performed on the challenging problem of joint intra-operative ultrasound (iUS) and Magnetic Resonance (MR) synthesis. Our model outperformed multi-modal VAEs, conditional GANs, and the current state-of-the-art unified method (ResViT) for synthesizing missing images, demonstrating the advantage of using a hierarchical latent representation and a principled probabilistic fusion operation. Our code is publicly available.
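As a rough illustration of the fusion idea the abstract describes, the sketch below encodes each available modality to a Gaussian posterior and combines them with a product-of-experts operation, one common principled fusion choice in multi-modal VAEs, so that missing modalities can simply be omitted. It is not the authors' released code: the hierarchical latent levels, image decoders, and adversarial loss are left out, and all layer sizes and names (ModalityEncoder, product_of_experts) are illustrative assumptions.

```python
# Single-level sketch of product-of-experts fusion for a multi-modal VAE.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Encodes one modality to the mean and log-variance of a Gaussian posterior."""
    def __init__(self, in_ch=1, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

def product_of_experts(mus, logvars):
    # Precision-weighted fusion of Gaussian experts plus a standard-normal
    # prior expert; any subset of modalities can be passed in.
    prior_mu = torch.zeros_like(mus[0])
    prior_logvar = torch.zeros_like(logvars[0])
    mu = torch.stack([prior_mu] + list(mus))            # (K+1, B, D)
    logvar = torch.stack([prior_logvar] + list(logvars))
    precision = torch.exp(-logvar)
    fused_var = 1.0 / precision.sum(0)
    fused_mu = (mu * precision).sum(0) * fused_var
    return fused_mu, torch.log(fused_var)

# Usage: encode whichever modalities are present (here only MR), fuse, sample.
enc_mr, enc_ius = ModalityEncoder(), ModalityEncoder()
mr = torch.randn(2, 1, 64, 64)                          # iUS assumed missing
mu_mr, logvar_mr = enc_mr(mr)
mu, logvar = product_of_experts([mu_mr], [logvar_mr])
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
print(z.shape)                                          # torch.Size([2, 64])
```

In a full model, modality-specific decoders would reconstruct both iUS and MR images from the shared latent z, and a discriminator loss would sharpen the outputs.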

2.
Med Image Anal ; 83: 102628, 2023 01.
Article in English | MEDLINE | ID: mdl-36283200

ABSTRACT

Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large, multi-class benchmark for unsupervised cross-modality domain adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance of patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithms for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; Cochleas: 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source images.
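The two-stage recipe the abstract attributes to the top-performing teams can be sketched as follows: translate annotated ceT1 scans into pseudo-hrT2 images with an unpaired image-to-image model, then train a supervised segmentation network on the translated images using the original ceT1 labels. The translator and segmenter below are placeholder modules standing in for models such as CycleGAN-style generators and a U-Net; this is an assumption-laden sketch, not any team's released code.

```python
# Sketch of cross-modality DA via image translation + supervised segmentation.
import torch
import torch.nn as nn

def train_cross_modality_segmenter(translator, segmenter, cet1_loader, epochs=1):
    """cet1_loader yields (ceT1 image, VS+cochlea label) pairs."""
    translator.eval()                               # stage 1 assumed already trained
    opt = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()                 # a Dice loss is also common
    for _ in range(epochs):
        for cet1, label in cet1_loader:
            with torch.no_grad():
                pseudo_hrt2 = translator(cet1)      # ceT1 -> pseudo-hrT2
            opt.zero_grad()
            pred = segmenter(pseudo_hrt2)           # labels transfer unchanged
            loss = loss_fn(pred, label)
            loss.backward()
            opt.step()
    return segmenter

# Toy usage with stand-in modules and random data.
translator = nn.Conv2d(1, 1, 3, padding=1)
segmenter = nn.Conv2d(1, 3, 3, padding=1)           # background / VS / cochlea
data = [(torch.randn(2, 1, 32, 32), torch.randint(0, 3, (2, 32, 32)))]
train_cross_modality_segmenter(translator, segmenter, data)
```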


Subject(s)
Acoustic Neuroma, Humans, Acoustic Neuroma/diagnostic imaging
3.
IEEE Trans Med Imaging ; 42(3): 697-712, 2023 03.
Article in English | MEDLINE | ID: mdl-36264729

ABSTRACT

Image registration is a fundamental medical image analysis task, and a wide variety of approaches have been proposed. However, only a few studies have comprehensively compared medical image registration approaches on a wide range of clinically relevant tasks. This limits the development of registration methods, the adoption of research advances into practice, and fair benchmarking of competing approaches. The Learn2Reg challenge addresses these limitations by providing a multi-task medical image registration dataset for comprehensive characterisation of deformable registration algorithms. Continuous evaluation remains possible at https://learn2reg.grand-challenge.org. Learn2Reg covers a wide range of anatomies (brain, abdomen, and thorax), modalities (ultrasound, CT, MR), availability of annotations, as well as intra- and inter-patient registration evaluation. We established an easily accessible framework for training and validation of 3D registration methods, which enabled the compilation of results from over 65 individual method submissions by more than 20 unique teams. We used a complementary set of metrics, including robustness, accuracy, plausibility, and runtime, enabling unique insight into the current state of the art in medical image registration. This paper describes the datasets, tasks, evaluation methods and results of the challenge, as well as results of further analysis of transferability to new datasets, the importance of label supervision, and resulting bias. While no single approach worked best across all tasks, many methodological aspects could be identified that push medical image registration to a new state of the art. Furthermore, we dispelled the common belief that conventional registration methods have to be much slower than deep-learning-based methods.
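To make the complementary metrics concrete, here is a hedged sketch of two of the metric families mentioned: accuracy via Dice overlap of propagated labels, and plausibility via the spread of the log Jacobian determinant of the displacement field, a commonly used proxy for folding and implausible deformations. Function names and the toy data are illustrative assumptions, not the challenge's official evaluation code.

```python
# Sketch of per-case registration scoring: label overlap and deformation plausibility.
import numpy as np

def dice(seg_a, seg_b, label):
    """Dice overlap of one label between two segmentations."""
    a, b = seg_a == label, seg_b == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan

def log_jacobian_std(disp):
    """disp: displacement field of shape (3, D, H, W) in voxel units."""
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]
    jac = np.stack([np.stack(g, 0) for g in grads], 0)   # (3, 3, D, H, W)
    jac = jac + np.eye(3).reshape(3, 3, 1, 1, 1)         # add identity transform
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))
    return np.log(np.clip(det, 1e-6, None)).std()

# Toy usage: identical labels give Dice 1.0; zero displacement gives std 0.0.
fixed = np.zeros((8, 8, 8), int); fixed[2:6, 2:6, 2:6] = 1
print(dice(fixed, fixed, label=1))                       # -> 1.0
print(log_jacobian_std(np.zeros((3, 8, 8, 8))))          # -> 0.0
```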


Subject(s)
Abdominal Cavity, Deep Learning, Humans, Algorithms, Brain/diagnostic imaging, Abdomen/diagnostic imaging, Computer-Assisted Image Processing/methods
4.
Biomolecules ; 10(4), 2020 03 30.
Article in English | MEDLINE | ID: mdl-32235589

ABSTRACT

Lung cancer is one of the leading causes of death worldwide. Therefore, understanding the factors linked to patient survival is essential. Recently, multi-omics analysis has emerged, allowing patient groups to be classified according to prognosis at a more individual level, supporting the use of precision medicine. Here, we combined RNA expression and miRNA expression with clinical information to conduct a multi-omics analysis, using publicly available data from The Cancer Genome Atlas (TCGA), focusing on lung adenocarcinoma (LUAD). We were able to successfully subclassify patients according to survival. Among the classifiers we developed using inferred labels obtained from the patient subtypes, a support vector machine (SVM) gave the best classification results, with an accuracy of 0.82 on the test dataset. Using these subtypes, we ranked genes based on RNA expression levels. The top 25 genes were investigated to elucidate the mechanisms that underlie patient prognosis. Bioinformatics analyses showed that the expression levels of six of the 25 genes (ERO1B, DPY19L1, NCAM1, RET, MARCH1, and SLC7A8) were associated with LUAD patient survival (p < 0.05), and pathway analyses indicated that major cancer signaling was altered in the subtypes.
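A minimal sketch of the classification step the abstract describes: concatenate RNA and miRNA expression features into one multi-omics matrix, then fit a support vector machine on the inferred subtype labels and check held-out accuracy. The random matrices below only stand in for the TCGA-LUAD data, and the scaling/RBF-kernel choices are assumptions rather than the paper's exact pipeline.

```python
# Sketch of SVM classification on concatenated multi-omics features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_patients = 200
rna = rng.normal(size=(n_patients, 500))        # RNA expression features
mirna = rng.normal(size=(n_patients, 100))      # miRNA expression features
X = np.hstack([rna, mirna])                     # multi-omics feature matrix
y = rng.integers(0, 2, size=n_patients)         # inferred subtype labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))  # ~0.5 on random data
```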


Subject(s)
Lung Adenocarcinoma/diagnosis, Lung Adenocarcinoma/genetics, Genomics, Aged, Female, Humans, Kaplan-Meier Estimate, Male, Middle Aged, Prognosis, Support Vector Machine, Unsupervised Machine Learning