Uncertainty-Aware Dual-Evidential Learning for Weakly-Supervised Temporal Action Localization.
IEEE Trans Pattern Anal Mach Intell; 45(12): 15896-15911, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37624714
Weakly-supervised temporal action localization (WTAL) aims to localize action instances and recognize their categories from only video-level labels. Despite great progress, existing methods suffer from severe action-background ambiguity, which mainly arises from background noise and the neglect of non-salient action snippets. To address this issue, we propose a generalized evidential deep learning (EDL) framework for WTAL, called Uncertainty-aware Dual-Evidential Learning (UDEL), which extends the traditional EDL paradigm to the weakly-supervised multi-label classification goal under the guidance of epistemic and aleatoric uncertainty, where the former stems from the model's lack of knowledge and the latter from the inherent properties of the samples themselves. Specifically, to exclude undesirable background snippets, we fuse the video-level epistemic and aleatoric uncertainties to measure how strongly background noise interferes with the video-level prediction. The snippet-level aleatoric uncertainty is then derived for progressive mutual learning, which gradually attends to entire action instances in an "easy-to-hard" manner and encourages the snippet-level epistemic uncertainty to complement the foreground attention scores. Extensive experiments show that UDEL achieves state-of-the-art performance on four public benchmarks. Our code is available at github/mengyuanchen2021/UDEL.
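To make the two uncertainty notions in the abstract concrete, here is a minimal sketch of how epistemic and aleatoric uncertainty are typically computed in evidential deep learning from a Dirichlet head. This is a generic illustration, not the authors' UDEL implementation: the function name `edl_uncertainties` and the choice of expected-probability entropy as the aleatoric measure are assumptions.

```python
import numpy as np

def edl_uncertainties(evidence):
    """Generic EDL uncertainty sketch (illustrative; not the UDEL code).

    Given non-negative per-class evidence e_k from a network head, the
    Dirichlet parameters are alpha_k = e_k + 1. Epistemic (vacuity)
    uncertainty is u = K / S with S = sum(alpha); aleatoric uncertainty
    is taken here as the entropy of the expected categorical p_k = alpha_k / S.
    """
    evidence = np.asarray(evidence, dtype=float)
    alpha = evidence + 1.0
    S = alpha.sum()
    K = alpha.size
    epistemic = K / S                            # high when total evidence is low
    p = alpha / S                                # expected class probabilities
    aleatoric = -(p * np.log(p + 1e-12)).sum()   # entropy of expected prediction
    return p, epistemic, aleatoric

# Little evidence for any class -> high epistemic (vacuity) uncertainty.
p_low, ep_low, al_low = edl_uncertainties([0.1, 0.1, 0.1])
# Strong evidence for one class -> low epistemic and low aleatoric uncertainty.
p_high, ep_high, al_high = edl_uncertainties([50.0, 1.0, 1.0])
```

Under this standard formulation, a snippet with near-zero evidence yields vacuity close to 1 (the model "does not know"), while a confidently predicted snippet yields low vacuity and a low-entropy expected distribution, which is the distinction UDEL exploits to separate background noise from non-salient action snippets.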
Full text: 1
Collections: 01-internacional
Database: MEDLINE
Study type: Clinical_trials / Prognostic_studies
Language: En
Journal: IEEE Trans Pattern Anal Mach Intell
Journal subject: Medical Informatics
Year of publication: 2023
Document type: Article
Country of publication: United States