Results 1 - 4 of 4
1.
Med Image Anal ; 97: 103256, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39047605

ABSTRACT

Recently, large pretrained vision foundation models based on masked image modeling (MIM) have attracted unprecedented attention and achieved remarkable performance across various tasks. However, the study of MIM for ultrasound imaging remains relatively unexplored, and most importantly, current MIM approaches fail to account for the gap between natural images and ultrasound, as well as the intrinsic imaging characteristics of the ultrasound modality, such as the high noise-to-signal ratio. In this paper, motivated by the unique high noise-to-signal ratio property of ultrasound, we propose a deblurring MIM approach specialized for ultrasound, which incorporates a deblurring task into the pretraining proxy task. Incorporating deblurring helps the pretraining better recover the subtle details within ultrasound images that are vital for subsequent downstream analysis. Furthermore, we employ a multi-scale hierarchical encoder to extract both local and global contextual cues for improved performance, especially on pixel-wise tasks such as segmentation. We conduct extensive experiments involving 280,000 ultrasound images for the pretraining and evaluate the downstream transfer performance of the pretrained model on various disease diagnoses (nodule, Hashimoto's thyroiditis) and task types (classification, segmentation). The experimental results demonstrate the efficacy of the proposed deblurring MIM, achieving state-of-the-art performance across a wide range of downstream tasks and datasets. Overall, our work highlights the potential of deblurring MIM for ultrasound image analysis, presenting an ultrasound-specific vision foundation model.
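The deblurring proxy task described in this abstract can be illustrated with a toy sample construction: the encoder input is a blurred image with patches masked out, while the reconstruction target is the sharp original, so pretraining must jointly in-paint and deblur. This is a minimal numpy sketch, not the paper's implementation; the box blur, patch size, and mask ratio are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def box_blur(img: np.ndarray) -> np.ndarray:
    """Cheap 3x3 box blur standing in for ultrasound-style degradation."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

def make_deblur_mim_sample(img: np.ndarray, patch: int = 4,
                           mask_ratio: float = 0.5):
    """Return (model input, reconstruction target, boolean pixel mask).

    The input is the blurred image with random patches zeroed out; the
    loss would be computed against the sharp original on masked patches.
    """
    h, w = img.shape
    mask = rng.random((h // patch, w // patch)) < mask_ratio  # True = masked
    pixel_mask = np.kron(mask.astype(int),
                         np.ones((patch, patch), dtype=int)).astype(bool)
    inp = box_blur(img)
    inp[pixel_mask] = 0.0  # masked patches carry no signal
    return inp, img, pixel_mask

img = rng.random((16, 16))
inp, target, pmask = make_deblur_mim_sample(img)
```

A real pipeline would feed `inp` through the hierarchical encoder and penalize reconstruction error only on `pmask`; the key point is that the target is sharper than the input.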

2.
Med Image Anal ; 94: 103153, 2024 May.
Article in English | MEDLINE | ID: mdl-38569380

ABSTRACT

Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists to quantitatively measure the size of wound regions to assist prediction of healing status. The main challenge in this field is the lack of publicly available manual delineation, which can be time-consuming and laborious. Recently, methods based on deep learning have shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers segmentation challenge was held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention, which sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding segmentation ground truth masks. Of the 72 (approved) requests from 47 countries, 26 teams used this data to develop fully automated systems to predict the true segmentation masks on a test set of 2000 images, with the corresponding ground truth segmentation masks kept private. Predictions from participating teams were scored and ranked according to their average Dice similarity coefficient between the ground truth masks and prediction masks. The winning team achieved a Dice of 0.7287 for diabetic foot ulcer segmentation. This challenge has now entered a live leaderboard stage, where it serves as a challenging benchmark for diabetic foot ulcer segmentation.
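The challenge's ranking metric, the Dice similarity coefficient, can be computed for a pair of binary masks as below. This is a minimal sketch; the smoothing term `eps` is an illustrative choice to avoid division by zero on empty masks, not part of the challenge's official scoring code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray,
                     eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy example: two overlapping 4x4 masks.
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # top two rows
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # middle two rows
# Overlap is one row of 4 pixels: Dice = 2*4 / (8 + 8) = 0.5
```

A team's leaderboard score would then be the mean of this value over all test images.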


Subject(s)
Diabetes Mellitus , Diabetic Foot , Humans , Diabetic Foot/diagnostic imaging , Neural Networks, Computer , Benchmarking , Image Processing, Computer-Assisted/methods
3.
IEEE Trans Pattern Anal Mach Intell ; 46(5): 3388-3405, 2024 May.
Article in English | MEDLINE | ID: mdl-38090829

ABSTRACT

The training and inference of Graph Neural Networks (GNNs) are costly when scaling up to large-scale graphs. Graph Lottery Ticket (GLT) presented the first attempt to accelerate GNN inference on large-scale graphs by jointly pruning the graph structure and the model weights. Though promising, GLT encounters robustness and generalization issues when deployed in real-world scenarios, which are also long-standing and critical problems in deep learning more broadly. In real-world scenarios, the distribution of unseen test data is typically diverse. We attribute the failures on out-of-distribution (OOD) data to the incapability of discerning causal patterns, which remain stable amidst distribution shifts. In traditional sparse graph learning, model performance deteriorates dramatically as the graph/network sparsity exceeds a certain high level. Worse still, the pruned GNNs are hard to generalize to unseen graph data due to the limited training set at hand. To tackle these issues, we propose the Resilient Graph Lottery Ticket (RGLT) to find more robust and generalizable GLTs in GNNs. Concretely, we reactivate a fraction of weights/edges using instantaneous gradient information at each pruning point. After sufficient pruning, we conduct environmental interventions to extrapolate the potential test distribution. Finally, we perform several final rounds of model averaging to further improve generalization. We provide multiple examples and theoretical analyses that underpin the universality and reliability of our proposal. Further, RGLT has been experimentally verified across various independent identically distributed (IID) and out-of-distribution (OOD) graph benchmarks.
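The reactivation step described above can be sketched for the weight side: magnitude-prune, then revive the pruned entries carrying the largest instantaneous gradient magnitude. This is a toy single-round sketch under assumed pruning and revival ratios, not the authors' implementation, and the tie-breaking at the magnitude threshold is deliberately simplistic.

```python
import numpy as np

def prune_with_reactivation(weights: np.ndarray, grads: np.ndarray,
                            prune_ratio: float = 0.5,
                            revive_ratio: float = 0.1) -> np.ndarray:
    """One pruning round with gradient-based reactivation (RGLT-style sketch).

    Returns a boolean mask: True = weight kept. First the smallest-magnitude
    weights are pruned; then a fraction of the pruned entries with the
    largest gradient magnitude are reactivated.
    """
    flat_w = np.abs(weights).ravel()
    k = int(prune_ratio * flat_w.size)
    threshold = np.partition(flat_w, k)[k] if k > 0 else -np.inf
    mask = np.abs(weights) >= threshold        # keep large-magnitude weights
    pruned = ~mask
    n_revive = int(revive_ratio * pruned.sum())
    if n_revive > 0:
        # Pruned entries compete by instantaneous gradient magnitude;
        # kept entries are excluded with -inf.
        grad_mag = np.where(pruned, np.abs(grads), -np.inf).ravel()
        revive_idx = np.argsort(grad_mag)[-n_revive:]
        mask.ravel()[revive_idx] = True
    return mask
```

In a full pipeline the same idea would also be applied to the graph-edge mask, and the round would repeat until the target sparsity is reached.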

4.
IEEE J Biomed Health Inform ; 27(10): 4914-4925, 2023 10.
Article in English | MEDLINE | ID: mdl-37486830

ABSTRACT

Ultrasound-based estimation of fetal biometry is extensively used to diagnose prenatal abnormalities and to monitor fetal growth, for which accurate segmentation of the fetal anatomy is a crucial prerequisite. Although deep neural network-based models have achieved encouraging results on this task, inevitable distribution shifts in ultrasound images can still cause severe performance drops in real-world deployment scenarios. In this article, we propose a complete ultrasound fetal examination system to deal with this troublesome problem by repairing and screening anatomically implausible results. Our system consists of three main components: a routine segmentation network, a fetal anatomical key points guided repair network, and a shape-coding based selective screener. Guided by the anatomical key points, our repair network has stronger cross-domain repair capabilities, which can substantially improve the outputs of the segmentation network. By quantifying the distance between an arbitrary segmentation mask and its corresponding anatomical shape class, the proposed shape-coding based selective screener can then effectively reject, in their entirety, implausible results that cannot be fully repaired. Extensive experiments demonstrate that our proposed framework provides strong anatomical guarantees and outperforms other methods in three different cross-domain scenarios.
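The screening idea (measure a mask's distance to its anatomical shape class, reject if too far) can be illustrated with a toy descriptor. Here a radial-angle histogram around the mask centroid stands in for the paper's learned shape coding, and the rejection threshold is an arbitrary assumption; this is purely a sketch of the mechanism.

```python
import numpy as np

def shape_code(mask: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Toy shape descriptor: normalized histogram of pixel angles around
    the mask centroid (a stand-in for a learned shape coding)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return np.zeros(n_bins)
    cy, cx = ys.mean(), xs.mean()
    ang = np.arctan2(ys - cy, xs - cx)
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi))
    return hist / ys.size

def is_plausible(mask: np.ndarray, prototype_code: np.ndarray,
                 threshold: float = 0.15) -> bool:
    """Screen a predicted mask: accept only if its shape code lies close
    to the anatomical class prototype."""
    return bool(np.linalg.norm(shape_code(mask) - prototype_code) <= threshold)

# A disk-shaped mask plays the role of a plausible anatomical shape;
# its own code serves as the class prototype.
yy, xx = np.mgrid[:32, :32]
disk = (yy - 16) ** 2 + (xx - 16) ** 2 <= 100
prototype = shape_code(disk)
```

A degenerate prediction, such as a thin line, yields a very different angular histogram and would be rejected, which is the behavior the screener relies on.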


Subject(s)
Fetus , Image Processing, Computer-Assisted , Ultrasonography, Prenatal , Female , Humans , Pregnancy , Biometry , Fetus/diagnostic imaging , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Ultrasonography