Results 1 - 2 of 2
1.
Neural Netw ; 180: 106682, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39241436

ABSTRACT

In unsupervised domain adaptive object detection, learning target-specific features is pivotal to enhancing detector performance. However, previous methods mostly concentrated on aligning domain-invariant features across domains and neglected to integrate the target-specific features. To tackle this issue, we introduce a novel feature learning method called Joint Feature Differentiation and Interaction (JFDI), which significantly boosts the adaptability of the object detector. We construct a dual-path architecture based on our proposed feature differentiation modules: one path, guided by source-domain data, uses multiple discriminators to confuse and align domain-invariant features; the other path, tailored to the target domain, learns its distinctive characteristics from pseudo-labeled target data. We then implement an interaction mechanism between these paths that stabilizes feature learning and mitigates interference from pseudo-label noise during iterative optimization. Additionally, we devise a hierarchical pseudo-label fusion module that consolidates more comprehensive and reliable results. We also analyze the generalization error bound of JFDI, which provides a theoretical basis for its effectiveness. Extensive empirical evaluations across diverse benchmark scenarios demonstrate that our method is both effective and efficient.
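The hierarchical pseudo-label fusion step described above can be illustrated with a minimal sketch: detections proposed by the two paths are pooled, low-confidence boxes are discarded, and near-duplicate boxes are suppressed by IoU. This is not the authors' implementation; the function names, the `(box, score)` representation, and the threshold values are assumptions for illustration only.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_pseudo_labels(path_a, path_b, conf_thr=0.7, iou_thr=0.5):
    """Merge (box, score) detections from two detector paths:
    keep confident boxes, then drop near-duplicates greedily by score."""
    candidates = sorted(
        [d for d in path_a + path_b if d[1] >= conf_thr],
        key=lambda d: -d[1])
    kept = []
    for box, score in candidates:
        # Suppress a box if it heavily overlaps an already-kept one.
        if all(iou(box, kb) < iou_thr for kb, _ in kept):
            kept.append((box, score))
    return kept
```

In this sketch the higher-scoring copy of an object detected by both paths survives, which mirrors the abstract's goal of consolidating more reliable pseudo-labels from the two branches.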

2.
Neural Netw ; 164: 617-630, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37245476

ABSTRACT

Deep neural networks (DNNs) are prone to the notorious catastrophic forgetting problem when learning new tasks incrementally. Class-incremental learning (CIL) is a promising solution to this challenge: it learns new classes without forgetting old ones. Existing CIL approaches adopt stored representative exemplars or complex generative models to achieve good performance. However, storing data from previous tasks causes memory or privacy issues, and the training of generative models is unstable and inefficient. This paper proposes a method based on multi-granularity knowledge distillation and prototype consistency regularization (MDPCR) that performs well even when the previous training data is unavailable. First, we design knowledge distillation losses in the deep feature space to constrain the incremental model trained on the new data. Multi-granularity is captured from three aspects: distilling multi-scale self-attentive features, the feature similarity probability, and global features to maximize the retention of previous knowledge, effectively alleviating catastrophic forgetting. In addition, we preserve the prototype of each old class and employ prototype consistency regularization (PCR) to ensure that the old prototypes and semantically enhanced prototypes produce consistent predictions, which enhances the robustness of the old prototypes and reduces classification bias. Extensive experiments on three CIL benchmark datasets confirm that MDPCR significantly outperforms exemplar-free methods and also surpasses typical exemplar-based approaches.
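The two loss terms above can be sketched in miniature: a temperature-softened KL distillation loss (a standard stand-in for the paper's feature-similarity-probability distillation) and a consistency penalty between the predictions produced by an old prototype and its semantically enhanced counterpart. This is an illustrative sketch, not the paper's code; the temperature, the raw-logit representation, and the function names are assumptions.

```python
import math

def softmax(logits, t=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / t for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, t=2.0):
    """KL(teacher || student) on softened distributions: the student is
    penalized for drifting from the frozen previous-task model."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return sum(pi * (math.log(pi + 1e-12) - math.log(qi + 1e-12))
               for pi, qi in zip(p, q))

def prototype_consistency(old_logits, enhanced_logits):
    """Mean squared difference between the class distributions induced by
    an old prototype and its semantically enhanced version (PCR-style)."""
    p_old = softmax(old_logits)
    p_enh = softmax(enhanced_logits)
    return sum((a - b) ** 2 for a, b in zip(p_old, p_enh)) / len(p_old)
```

Both terms are zero when the compared distributions agree and grow as they diverge, so minimizing their weighted sum alongside the new-task loss pulls the incremental model toward retaining old knowledge, as the abstract describes.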


Subject(s)
Benchmarking , Knowledge , Neural Networks, Computer , Privacy , Probability