2.
Neural Netw ; 163: 354-366, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37099898

ABSTRACT

Federated Learning (FL) can learn a global model across decentralized data held by different clients. However, it is susceptible to the statistical heterogeneity of client-specific data: each client optimizes for its own target distribution, so inconsistent data distributions cause the global model to diverge. Moreover, most federated learning approaches learn representations and classifiers jointly, which further exacerbates this inconsistency and results in imbalanced features and biased classifiers. Hence, in this paper we propose an independent two-stage personalized FL framework, Fed-RepPer, which separates representation learning from classification in federated learning. First, client-side feature representation models are learned with a supervised contrastive loss, which keeps local objectives consistent, i.e., robust representations are learned on distinct data distributions. The local representation models are then aggregated into a common global representation model. In the second stage, personalization is achieved by learning a separate classifier for each client on top of the global representation model. The proposed two-stage scheme is examined in lightweight edge computing settings involving devices with constrained computational resources. Experiments on various datasets (CIFAR-10/100, CINIC-10) and heterogeneous data setups show that Fed-RepPer outperforms alternatives through its flexibility and personalization on non-IID data.
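To make the two-stage scheme concrete, here is a minimal PyTorch sketch of the pipeline the abstract describes: local supervised-contrastive training of representation models, FedAvg-style aggregation into a global encoder, and per-client classifier heads in the second stage. All names (`sup_con_loss`, `fedavg`, `stage1_round`, `stage2_personalize`) and the FedAvg aggregation rule are illustrative assumptions, not taken from the paper's code.

```python
# Hypothetical sketch of a two-stage personalized FL scheme in PyTorch.
# Encoder architecture, data loaders, and all function names are assumptions.
import copy
import torch
import torch.nn.functional as F

def sup_con_loss(z, y, tau=0.1):
    """Compact supervised contrastive loss over a batch of embeddings."""
    z = F.normalize(z, dim=1)
    n = z.size(0)
    eye = torch.eye(n, device=z.device)
    logits = z @ z.T / tau - eye * 1e9          # mask self-similarity
    pos = (y.unsqueeze(0) == y.unsqueeze(1)).float() - eye
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -(pos * log_prob).sum(1).div(pos.sum(1).clamp(min=1)).mean()

def fedavg(models):
    """Server-side aggregation: average the clients' parameters."""
    avg = copy.deepcopy(models[0].state_dict())
    for k in avg:
        avg[k] = torch.stack([m.state_dict()[k].float() for m in models]).mean(0)
    return avg

def stage1_round(global_encoder, client_loaders, lr=0.01):
    """Stage 1, one round: local SupCon training, then aggregation."""
    local = []
    for batches in client_loaders:              # each: iterable of (x, y)
        model = copy.deepcopy(global_encoder)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for x, y in batches:
            loss = sup_con_loss(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
        local.append(model)
    global_encoder.load_state_dict(fedavg(local))

def stage2_personalize(global_encoder, client_loaders, feat_dim, n_classes, lr=0.01):
    """Stage 2: a personal classifier head per client, encoder frozen."""
    heads = []
    for batches in client_loaders:
        head = torch.nn.Linear(feat_dim, n_classes)
        opt = torch.optim.SGD(head.parameters(), lr=lr)
        for x, y in batches:
            with torch.no_grad():
                z = global_encoder(x)           # frozen global features
            loss = F.cross_entropy(head(z), y)
            opt.zero_grad(); loss.backward(); opt.step()
        heads.append(head)
    return heads
```

Note that only the encoder is aggregated across clients; the classifier heads never leave their client, which is what gives the scheme its personalization on non-IID data.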

3.
IEEE Trans Neural Netw Learn Syst ; 34(11): 9029-9039, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35286266

ABSTRACT

Optimization algorithms are of great importance for training deep neural networks efficiently and effectively. However, existing optimization algorithms often show unsatisfactory convergence behavior, either converging slowly or failing to avoid bad local optima. Learning rate dropout (LRD) is a new gradient descent technique that promotes faster convergence and better generalization. LRD helps the optimizer actively explore the parameter space by randomly dropping some learning rates (to 0); at each iteration, only parameters whose learning rate is not 0 are updated. Since LRD reduces the number of parameters updated per iteration, convergence becomes easier. For parameters that are not updated, their gradients are still accumulated (e.g., in momentum) by the optimizer for the next update. Accumulating multiple gradients at fixed parameter positions gives the optimizer more energy to escape saddle points and bad local optima. Experiments show that LRD is surprisingly effective at accelerating training while preventing overfitting.
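The mechanics are easy to state in code. Below is a minimal sketch, assuming SGD with momentum as the base optimizer: a Bernoulli mask zeroes a random subset of per-coordinate learning rates at each step, while the momentum buffer keeps accumulating gradients for every coordinate, dropped or not. The class name `LRDSGD` and the `keep_prob` parameter are illustrative, not from the paper.

```python
# Minimal sketch of learning rate dropout (LRD) on SGD with momentum.
import torch

class LRDSGD(torch.optim.Optimizer):
    """SGD with momentum where each coordinate's learning rate is
    randomly dropped to 0 with probability 1 - keep_prob per step."""

    def __init__(self, params, lr=0.1, momentum=0.9, keep_prob=0.5):
        defaults = dict(lr=lr, momentum=momentum, keep_prob=keep_prob)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                buf = state.setdefault("momentum", torch.zeros_like(p))
                # Momentum accumulates the gradient for *every* parameter,
                # including those whose update is dropped this iteration.
                buf.mul_(group["momentum"]).add_(p.grad)
                # Bernoulli mask: dropped coordinates get learning rate 0.
                mask = (torch.rand_like(p) < group["keep_prob"]).float()
                p.add_(buf * mask, alpha=-group["lr"])
```

A dropped coordinate thus releases several accumulated gradients at once when its mask next comes up, which is the "extra energy" the abstract credits for escaping saddle points.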

4.
IEEE Trans Med Imaging ; 41(9): 2457-2468, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35363612

ABSTRACT

Synthesizing a subject-specific, pathology-free image from a pathological image is valuable for both algorithm development and clinical practice. In recent years, several approaches based on the Generative Adversarial Network (GAN) have achieved promising results in pseudo-healthy synthesis. However, the discriminator (i.e., a classifier) in a GAN cannot accurately identify lesions, which hampers the generation of high-quality pseudo-healthy images. To address this problem, we present a new type of discriminator, the segmentor, which accurately locates lesions and improves the visual quality of pseudo-healthy images. We then apply the generated images to medical image enhancement and use the enhanced results to alleviate the low-contrast problem in medical image segmentation. Furthermore, a reliable metric that exploits two attributes of label noise is proposed to measure the healthiness of synthetic images. Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms state-of-the-art methods, achieving better performance than existing methods with only 30% of the training data. The effectiveness of the proposed method is also demonstrated on LiTS and the T1 modality of BraTS. The code and the pre-trained model of this study are publicly available at https://github.com/Au3C2/Generator-Versus-Segmentor.


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Algorithms; Image Processing, Computer-Assisted/methods
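For entry 4, the following sketch shows one plausible reading of the generator-versus-segmentor objective, assuming lesion masks are available as in BraTS: the segmentor is trained to locate lesions in generated images, while the generator is trained to make the segmentor predict "healthy" everywhere and to reconstruct the input outside the lesion. The authors' actual losses live in the linked repository; the function `train_step` and the equal loss weighting here are assumptions.

```python
# Hypothetical sketch of one adversarial generator-versus-segmentor step.
# G maps a pathological image to a pseudo-healthy image; S outputs per-pixel
# lesion logits. Both are assumed to be torch.nn.Module instances.
import torch
import torch.nn.functional as F

def train_step(G, S, opt_g, opt_s, pathological, lesion_mask):
    # Segmentor step: learn to find the known lesion region in the
    # generated pseudo-healthy image (generator frozen via detach).
    fake = G(pathological).detach()
    seg_loss = F.binary_cross_entropy_with_logits(S(fake), lesion_mask)
    opt_s.zero_grad(); seg_loss.backward(); opt_s.step()

    # Generator step: fool the segmentor into predicting "no lesion"
    # anywhere, while reconstructing the input outside the lesion.
    fake = G(pathological)
    adv_loss = F.binary_cross_entropy_with_logits(
        S(fake), torch.zeros_like(lesion_mask))
    recon = (F.l1_loss(fake, pathological, reduction="none")
             * (1.0 - lesion_mask)).mean()
    gen_loss = adv_loss + recon  # equal weighting is an assumption
    opt_g.zero_grad(); gen_loss.backward(); opt_g.step()
    return seg_loss.item(), gen_loss.item()
```

The design point is that a per-pixel segmentor gives the generator spatially localized adversarial feedback at the lesion, which a whole-image classifier-discriminator cannot provide.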