Article in English | MEDLINE | ID: mdl-38329857

ABSTRACT

The high cost of acquiring and annotating samples has made the "few-shot" learning problem of prime importance. Existing works mainly focus on improving performance on clean data and overlook robustness concerns on data perturbed with adversarial noise. Recently, a few efforts have been made to combine the few-shot problem with the robustness objective using sophisticated meta-learning techniques. These methods rely on the generation of adversarial samples in every episode of training, which further adds to the computational burden. To avoid such time-consuming and complicated procedures, we propose a simple but effective alternative that does not require any adversarial samples. Inspired by the cognitive decision-making process in humans, we enforce high-level feature matching between the base class data and their corresponding low-frequency samples in the pretraining stage via self-distillation. The model is then fine-tuned on the samples of novel classes, where we additionally improve the discriminability of low-frequency query set features via cosine similarity. On a one-shot setting of the CIFAR-FS dataset, our method yields a massive improvement of 60.55% and 62.05% in adversarial accuracy on the projected gradient descent (PGD) and state-of-the-art AutoAttack, respectively, with a minor drop in clean accuracy compared to the baseline. Moreover, our method only takes 1.69× of the standard training time while being ≈ 5× faster than the state-of-the-art adversarial meta-learning methods. The code is available at https://github.com/vcl-iisc/robust-few-shot-learning.
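The two ingredients named in the abstract, deriving a low-frequency counterpart of each image and matching features via cosine similarity, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the FFT low-pass cutoff `radius`, the toy flatten-based `encoder`, and the loss formulation are all assumptions; the paper's actual backbone and distillation objective may differ.

```python
import numpy as np

def low_frequency(image, radius=8):
    """Keep only the low-frequency content of a 2-D image using an FFT
    low-pass filter. The cutoff `radius` is a hypothetical choice."""
    f = np.fft.fftshift(np.fft.fft2(image))          # center the spectrum
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Circular mask around the DC component keeps low frequencies only.
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "encoder": flattening stands in for a trained feature backbone.
encoder = lambda img: img.ravel()

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))

# Feature-matching loss: push the embedding of the clean image toward
# the embedding of its low-frequency counterpart.
loss = 1.0 - cosine_similarity(encoder(img), encoder(low_frequency(img)))
print(f"feature-matching loss: {loss:.4f}")
```

In the pretraining stage described above, a loss of this shape would be minimized alongside the usual classification objective, encouraging representations that rely on low-frequency (perturbation-resistant) structure.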
