Results 1 - 3 of 3
1.
IEEE Trans Med Imaging; PP, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38427546

ABSTRACT

Generalizable medical image segmentation enables models to generalize to unseen target domains under domain shift. Recent progress demonstrates that the shape of the segmentation target, being highly consistent and robust across domains, can serve as a reliable regularizer that aids cross-domain performance; existing methods typically seek a shared framework that renders the segmentation map and the shape prior concurrently. However, due to the inherent texture and style preference of modern deep neural networks, the edge or silhouette of the extracted shape is inevitably undermined by the domain-specific texture and style interference of medical images under domain shift. To address this limitation, we devise a novel framework that separates the shape regularization from the segmentation map. Specifically, we first customize a novel whitening transform-based probabilistic shape regularization extractor, namely WT-PSE, to suppress undesirable domain-specific texture and style interference, leading to more robust and higher-quality shape representations. Second, we introduce a Wasserstein distance-guided knowledge distillation scheme that helps WT-PSE achieve more flexible shape extraction during the inference phase. Finally, by incorporating domain knowledge of medical images, we propose a novel instance-domain whitening transform method to facilitate a more stable training process with improved performance. Experiments demonstrate the effectiveness of our proposed method on both multi-domain and single-domain generalization.
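
The abstract does not spell out how WT-PSE or the instance-domain whitening transform is implemented. As an illustration only, the sketch below performs generic per-instance ZCA whitening of CNN feature maps in PyTorch, i.e., it removes instance-specific channel correlations (a proxy for style and texture statistics); the function name, shapes, and epsilon are assumptions, not the authors' code.

import torch

def instance_whitening(feat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # ZCA-whiten each instance's channel statistics over spatial positions.
    # feat: (B, C, H, W). Output has approximately identity channel covariance
    # per instance, which suppresses instance-specific style/texture statistics.
    b, c, h, w = feat.shape
    x = feat.reshape(b, c, h * w)                        # (B, C, N)
    xc = x - x.mean(dim=2, keepdim=True)                 # centre each channel
    cov = xc @ xc.transpose(1, 2) / (h * w - 1)          # (B, C, C) covariance
    cov = cov + eps * torch.eye(c, device=feat.device, dtype=feat.dtype)
    eigval, eigvec = torch.linalg.eigh(cov)              # symmetric eigendecomposition
    inv_sqrt = eigvec @ torch.diag_embed(eigval.clamp_min(eps).rsqrt()) @ eigvec.transpose(1, 2)
    return (inv_sqrt @ xc).reshape(b, c, h, w)           # decorrelated features

A shape-extraction head in the spirit of the paper would apply such a transform to intermediate features before predicting the shape prior, so that the prediction depends less on domain-specific appearance.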

2.
IEEE Trans Pattern Anal Mach Intell; 45(12): 14938-14955, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37669193

ABSTRACT

Few-shot learning, especially few-shot image classification, has received increasing attention and witnessed significant advances in recent years. Some recent studies implicitly show that many generic techniques or "tricks", such as data augmentation, pre-training, knowledge distillation, and self-supervision, may greatly boost the performance of a few-shot learning method. Moreover, different works may employ different software platforms, backbone architectures, and input image sizes, which makes fair comparison difficult and leaves practitioners struggling with reproducibility. To address these issues, we propose a comprehensive library for few-shot learning (LibFewShot) by re-implementing eighteen state-of-the-art few-shot learning methods in a unified framework with a single PyTorch codebase. Furthermore, based on LibFewShot, we provide comprehensive evaluations on multiple benchmarks with various backbone architectures to examine common pitfalls and the effects of different training tricks. In addition, with respect to recent doubts about the necessity of the meta- or episodic-training mechanism, our evaluation results confirm that such a mechanism is still necessary, especially when combined with pre-training. We hope our work can not only lower the barrier for beginners entering the area of few-shot learning but also elucidate the effects of nontrivial tricks to facilitate intrinsic research on few-shot learning.
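
LibFewShot's own configuration and trainer interfaces are not described in the abstract, so the snippet below is not its API. It is a minimal, self-contained PyTorch sketch of the episodic N-way K-shot evaluation protocol that such libraries standardize, operating on pre-extracted features; all function and variable names are assumptions.

import torch

def sample_episode(features, labels, n_way=5, k_shot=1, q_query=15):
    # Sample an N-way K-shot episode from pre-extracted features.
    # features: (N, D) float tensor; labels: (N,) long tensor with dense class ids 0..C-1.
    classes = torch.randperm(int(labels.max()) + 1)[:n_way]
    support, query, query_y = [], [], []
    for new_id, c in enumerate(classes):
        idx = torch.nonzero(labels == c, as_tuple=True)[0]
        idx = idx[torch.randperm(len(idx))[: k_shot + q_query]]
        support.append(features[idx[:k_shot]])
        query.append(features[idx[k_shot:]])
        query_y.append(torch.full((len(idx) - k_shot,), new_id, dtype=torch.long))
    return torch.stack(support), torch.cat(query), torch.cat(query_y)

def prototype_accuracy(support, query, query_y):
    # Nearest-prototype (ProtoNet-style) classification on one episode.
    prototypes = support.mean(dim=1)                     # (n_way, D) class means
    pred = torch.cdist(query, prototypes).argmin(dim=1)  # nearest prototype per query
    return (pred == query_y).float().mean().item()

Averaging prototype_accuracy over a few hundred sampled episodes yields the kind of mean accuracy figure reported on few-shot benchmarks.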

3.
IEEE Trans Pattern Anal Mach Intell; 45(12): 14514-14527, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37773899

ABSTRACT

Domain generalization (DG) refers to the problem of generalizing machine learning systems to out-of-distribution (OOD) data using knowledge learned from several provided source domains. Most prior works confine themselves to stationary and discrete environments to tackle the generalization issues arising from OOD data. In practice, however, many tasks in non-stationary environments (e.g., autonomous driving systems, sensor measurements) involve more complex, continuously evolving domain drift, which poses new challenges for model deployment. In this paper, we first formulate this setting as the problem of evolving domain generalization. To deal with continuously changing domains, we propose MMD-LSAE, a novel framework that learns to capture the evolving patterns among domains for better generalization. Specifically, MMD-LSAE characterizes OOD data in non-stationary environments with two types of distribution shift, covariate shift and concept shift, and employs deep autoencoder modules to infer their dynamics in latent space separately. In these modules, the inferred posterior distributions of the latent codes are aligned with their corresponding prior distributions by minimizing the maximum mean discrepancy (MMD). We theoretically verify that MMD-LSAE inherently and implicitly facilitates mutual information maximization, which can promote superior representation learning and improved generalization. Furthermore, experimental results on both synthetic and real-world datasets show that our proposed approach consistently achieves favorable performance in the evolving domain generalization setting.
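
The abstract does not state which kernel or estimator MMD-LSAE uses, so the following is only a sketch under that caveat: a standard multi-bandwidth RBF estimate of squared MMD between posterior latent codes and prior samples, the kind of alignment term minimized in such models. Names and bandwidth values are assumptions.

import torch

def mmd_rbf(x: torch.Tensor, y: torch.Tensor, bandwidths=(1.0, 2.0, 4.0)) -> torch.Tensor:
    # Biased (V-statistic) estimate of squared MMD with a sum of RBF kernels.
    # x: (n, d) latent codes from the encoder's posterior; y: (m, d) prior samples.
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)                    # pairwise squared distances
        return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in bandwidths)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

# Example: align a batch of latent codes with a standard normal prior.
z_post = torch.randn(64, 16)     # stand-in for encoder outputs
z_prior = torch.randn(64, 16)    # samples from N(0, I)
loss = mmd_rbf(z_post, z_prior)  # would be added to the reconstruction loss during training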
