Results 1 - 3 of 3
1.
Ann Biomed Eng ; 52(8): 2101-2117, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38691234

ABSTRACT

Parotid gland tumors account for approximately 2% to 10% of head and neck tumors. Segmentation of parotid glands and tumors on magnetic resonance images is essential for accurate diagnosis and for selecting appropriate surgical plans. However, segmentation of the parotid glands is particularly challenging due to their variable shape and low contrast with surrounding structures. Deep learning has developed rapidly in recent years, and Transformer-based networks have performed well on many computer vision tasks; however, they have not yet been widely applied to parotid gland segmentation. We collected a multi-center, multimodal parotid gland MRI dataset and implemented parotid gland segmentation using a purely Transformer-based U-shaped segmentation network. We used both absolute and relative positional encoding to improve parotid gland segmentation and achieved multimodal information fusion without increasing the network's computation. In addition, our novel training approach reduces the clinician's labeling workload by nearly half. Our method achieved good segmentation of both parotid glands and tumors. On the test set, our model achieved a Dice-Similarity Coefficient of 86.99%, Pixel Accuracy of 99.19%, Mean Intersection over Union of 81.79%, and Hausdorff Distance of 3.87. The purely Transformer-based U-shaped segmentation network we used outperforms other convolutional neural networks. In addition, our method can effectively fuse the information from a multi-center, multimodal MRI dataset, thus improving parotid gland segmentation.
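The Dice-Similarity Coefficient and Mean Intersection over Union reported above are standard overlap metrics between a predicted and a ground-truth mask. A minimal NumPy sketch for the binary case follows; the function names are illustrative and not taken from the paper's code:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice-Similarity Coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def mean_iou(pred, target, eps=1e-7):
    """Mean Intersection over Union, averaged over background (0) and foreground (1)."""
    ious = []
    for cls in (0, 1):
        p = pred == cls
        t = target == cls
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append((inter + eps) / (union + eps))
    return float(np.mean(ious))
```

Multi-class variants average the per-class scores over all labels; the epsilon guards against empty masks.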


Subject(s)
Magnetic Resonance Imaging, Parotid Gland, Parotid Neoplasms, Humans, Parotid Gland/diagnostic imaging, Magnetic Resonance Imaging/methods, Parotid Neoplasms/diagnostic imaging, Deep Learning, Multimodal Imaging/methods, Neural Networks, Computer, Male
2.
Med Phys ; 51(8): 5295-5307, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38749016

ABSTRACT

BACKGROUND: Segmentation of the parotid glands and tumors from MR images is essential for treating parotid gland tumors. However, segmentation of the parotid glands is particularly challenging due to their variable shape and low contrast with surrounding structures. PURPOSE: The lack of large, well-annotated datasets limits the development of deep learning in medical imaging. As an unsupervised learning method, contrastive learning has seen rapid development in recent years. It makes better use of unlabeled images and shows promise for improving parotid gland segmentation. METHODS: We propose Swin MoCo, a momentum contrastive learning network with Swin Transformer as its backbone. The ImageNet supervised model is used as the initial weights of Swin MoCo, thus improving the training effects on small medical image datasets. RESULTS: Swin MoCo trained with transfer learning improves parotid gland segmentation to 89.78% DSC, 85.18% mIoU, 3.60 HD, and 90.08% mAcc. On the Synapse multi-organ computed tomography (CT) dataset, using Swin MoCo as the pre-trained model of Swin-Unet yields 79.66% DSC and 12.73 HD, which outperforms the best result of Swin-Unet on the Synapse dataset. CONCLUSIONS: The above improvements require only 4 h of training on a single NVIDIA Tesla V100, which is computationally cheap. Swin MoCo provides new approaches to improve the performance of tasks on small datasets. The code is publicly available at https://github.com/Zian-Xu/Swin-MoCo.
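Momentum contrastive learning, which Swin MoCo builds on, rests on two mechanics shared by MoCo-style methods: an exponential-moving-average key encoder, and InfoNCE logits computed against a queue of negative keys. A minimal NumPy sketch of those two pieces follows, assuming the queue already stores L2-normalized keys; the names are hypothetical and not taken from the linked repository:

```python
import numpy as np

def momentum_update(query_weights, key_weights, m=0.999):
    """MoCo-style EMA update: the key encoder slowly tracks the query encoder."""
    return [m * k + (1.0 - m) * q for q, k in zip(query_weights, key_weights)]

def info_nce_logits(q, k_pos, queue, temperature=0.07):
    """InfoNCE logits: positive similarity in column 0, queue negatives after.

    q, k_pos: (N, D) query/key embeddings; queue: (K, D) of normalized keys.
    """
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    k_pos = k_pos / np.linalg.norm(k_pos, axis=1, keepdims=True)
    l_pos = np.sum(q * k_pos, axis=1, keepdims=True)  # (N, 1) positive pairs
    l_neg = q @ queue.T                               # (N, K) queue negatives
    return np.concatenate([l_pos, l_neg], axis=1) / temperature
```

Training then applies a cross-entropy loss on these logits with label 0 (the positive column), and the oldest queue entries are replaced by the newest keys each step.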


Subject(s)
Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Parotid Gland, Parotid Gland/diagnostic imaging, Humans, Image Processing, Computer-Assisted/methods, Machine Learning, Deep Learning
3.
Comput Biol Med ; 161: 107037, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37230020

ABSTRACT

The development of deep learning models in medical image analysis is largely limited by the lack of large, well-annotated datasets. Unsupervised learning does not require labels and is therefore well suited to medical image analysis problems. However, most unsupervised learning methods still require large datasets. To make unsupervised learning applicable to small datasets, we proposed Swin MAE, a masked autoencoder with Swin Transformer as its backbone. Even on a dataset of only a few thousand medical images, Swin MAE can still learn useful semantic features purely from images, without using any pre-trained models. In downstream transfer learning tasks, it equals or even slightly outperforms a supervised Swin Transformer model trained on ImageNet. Compared to MAE, Swin MAE improved downstream-task performance by factors of two and five on BTCV and our parotid dataset, respectively. The code is publicly available at https://github.com/Zian-Xu/Swin-MAE.
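The central operation of a masked autoencoder is random patch masking: most patches are hidden, the encoder sees only the visible ones, and the decoder reconstructs the rest. A minimal NumPy sketch of MAE-style masking follows; the helper name and return convention are illustrative, not taken from the Swin-MAE repository:

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, rng=None):
    """MAE-style random masking over a (num_patches, dim) array.

    Returns the kept (visible) patches, a binary mask (0 = kept, 1 = masked),
    and the indices needed to restore the original patch order after decoding.
    """
    rng = np.random.default_rng(rng)
    n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = rng.random(n)                 # one random score per patch
    ids_shuffle = np.argsort(noise)       # lowest-noise patches are kept
    ids_restore = np.argsort(ids_shuffle) # inverse permutation
    ids_keep = ids_shuffle[:n_keep]
    mask = np.ones(n)
    mask[ids_keep] = 0
    return patches[ids_keep], mask, ids_restore
```

At decode time, learnable mask tokens are appended to the encoded visible patches and the combined sequence is reordered with `ids_restore` before reconstruction; the loss is computed only on the masked positions.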


Subject(s)
Parotid Gland, Problem Solving, Semantics