2.
IEEE Trans Cybern ; 52(5): 3519-3530, 2022 May.
Article in English | MEDLINE | ID: mdl-32755874

ABSTRACT

Visual tracking is typically cast as a discriminative learning problem that requires high-quality samples for online model adaptation. Evaluating the training samples collected from previous predictions, and selecting samples by quality to train the model, is a critical and challenging problem. To tackle it, we propose a joint discriminative learning scheme with a progressive multistage sample-selection optimization policy for robust visual tracking. The scheme introduces a time-weighted, detection-guided self-paced learning strategy for easy-to-hard sample selection, which tolerates relatively large intraclass variation while maintaining interclass separability. This self-paced learning strategy is optimized jointly with the discriminative tracking process, yielding robust tracking results. Experiments on benchmark datasets demonstrate the effectiveness of the proposed learning framework.
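The abstract does not give the concrete selection rule, but the easy-to-hard, time-weighted idea can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the decay factor, the pace parameter `lam`, and the hard 0/1 selection variant are all assumptions.

```python
import numpy as np

def self_paced_select(losses, ages, lam, decay=0.9):
    """Time-weighted self-paced sample selection (illustrative sketch).

    losses: per-sample training losses from previous predictions
    ages:   frames elapsed since each sample was collected
    lam:    pace parameter; it is grown over training so that harder
            (higher-loss) samples are gradually admitted ("easy-to-hard")
    decay:  temporal decay favoring recently collected samples
    Returns a 0/1 inclusion weight per sample (hard selection variant).
    """
    time_w = decay ** np.asarray(ages, dtype=float)   # recent samples count more
    weighted = np.asarray(losses, dtype=float) / np.maximum(time_w, 1e-8)
    return (weighted <= lam).astype(float)

# With a small pace parameter only easy, recent samples are selected;
# raising it admits progressively harder samples.
w_early = self_paced_select([0.1, 0.5, 2.0], ages=[0, 1, 2], lam=0.6)
w_late = self_paced_select([0.1, 0.5, 2.0], ages=[0, 1, 2], lam=5.0)
```

In a joint optimization, selection weights like these and the tracker's model parameters would be updated alternately, each step holding the other fixed.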


Subject(s)
Learning
3.
IEEE Trans Image Process ; 30: 3154-3166, 2021.
Article in English | MEDLINE | ID: mdl-33617453

ABSTRACT

Photorealistic style transfer is a challenging task: it demands that the stylized image remain realistic. Existing methods still suffer from unrealistic artifacts and heavy computational cost. In this paper, we propose a novel Style-Corpus Constrained Learning (SCCL) scheme to address these issues. The style corpus, which captures style-specific and style-agnostic characteristics simultaneously, constrains the stylized image to be style-consistent across different samples, improving the photorealism of the stylization output. Using an adversarial distillation learning strategy, a simple, fast-to-execute network is trained to replace the previous complex feature-transform models, significantly reducing computational cost. Experiments demonstrate that our method produces richly detailed photorealistic images and runs 13-50 times faster than the state-of-the-art method (WCT2).
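The distillation idea above can be sketched as a student objective that imitates the heavy teacher while an adversarial term pushes student outputs toward the teacher's output distribution. This is an assumed form for illustration only: the pixel-wise imitation term, the weighting `alpha`, and the discriminator probability input are not specified in the abstract.

```python
import numpy as np

def distillation_loss(student_out, teacher_out, disc_real_prob, alpha=0.5):
    """Sketch of an adversarial-distillation objective (assumed form).

    student_out / teacher_out: stylized images produced by the fast
    student network and the heavy teacher (e.g. a WCT2-like model).
    disc_real_prob: discriminator's probability that the student output
    is a genuine teacher stylization.
    The student minimizes a pixel-wise imitation term plus an
    adversarial term (non-saturating generator loss).
    """
    imitation = np.mean((np.asarray(student_out) - np.asarray(teacher_out)) ** 2)
    adversarial = -np.log(np.clip(disc_real_prob, 1e-8, 1.0))
    return imitation + alpha * adversarial

# When the student perfectly matches the teacher and fools the
# discriminator, the loss is zero.
loss = distillation_loss(np.zeros((4, 4)), np.zeros((4, 4)), disc_real_prob=1.0)
```

Once trained, only the small student network is needed at inference time, which is where the reported 13-50x speedup over the teacher-style feature transforms would come from.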
