Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-39042533

ABSTRACT

Active learning (AL) aims to design label-efficient algorithms by labeling only the most representative samples. It reduces annotation cost and has attracted increasing attention from the community. However, previous AL methods suffer from inadequate annotations and unreliable uncertainty estimation. Moreover, we find that they ignore the intra-diversity of selected samples, which leads to sampling redundancy. To address these challenges, we propose an inductive state-relabeling adversarial AL model (ISRA) that consists of a unified representation generator, an inductive state-relabeling discriminator, and a heuristic clique rescaling module. The generator introduces contrastive learning to leverage unlabeled samples for self-supervised training, where mutual information is used to improve the representation quality for AL selection. We then design an inductive uncertainty indicator that learns a state score from labeled data and relabels unlabeled data with different importance for better discrimination of instructive samples. To reduce sampling redundancy, the heuristic clique rescaling module measures the intra-diversity of candidate samples and recurrently rescales them to select the most informative ones. Experiments on eight datasets and two imbalanced scenarios show that our model outperforms previous state-of-the-art AL methods. As an extension to the cross-modal AL task, we apply ISRA to image captioning, where it also achieves superior performance.
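
The abstract above combines an uncertainty signal with an intra-diversity measure when choosing which samples to label. The sketch below only illustrates that general uncertainty-plus-diversity selection idea, not the ISRA algorithm itself; the function name `select_batch`, the greedy cosine-similarity penalty, and the `diversity_weight` parameter are assumptions made for this example.

```python
# Hypothetical sketch: trade off an uncertainty score against redundancy with
# already-selected samples when picking an active-learning batch. This is NOT
# the ISRA method, only an illustration of uncertainty + intra-diversity selection.
import numpy as np

def select_batch(features, uncertainty, budget, diversity_weight=0.5):
    """Greedily pick `budget` samples, penalizing similarity to prior picks."""
    # L2-normalize so dot products act as cosine similarities.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    selected = []
    # Max similarity of each candidate to the currently selected set.
    max_sim = np.zeros(len(feats))
    for _ in range(budget):
        score = uncertainty - diversity_weight * max_sim
        score[selected] = -np.inf          # never pick the same sample twice
        idx = int(np.argmax(score))
        selected.append(idx)
        # Update redundancy estimates with the newly selected sample.
        max_sim = np.maximum(max_sim, feats @ feats[idx])
    return selected

# Toy usage with random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))             # representations of the unlabeled pool
u = rng.uniform(size=100)                  # per-sample uncertainty scores
print(select_batch(X, u, budget=10))
```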

2.
Article in English | MEDLINE | ID: mdl-38032778

ABSTRACT

Multilabel image recognition (MLR) aims to annotate an image with a comprehensive set of labels and suffers from object occlusion and small object sizes within images. Although existing works attempt to capture and exploit label correlations to tackle these issues, they predominantly rely on global statistical label correlations as prior knowledge to guide label prediction, neglecting the unique label correlations present within each image. To overcome this limitation, we propose a semantic and correlation disentangled graph convolution (SCD-GC) method, which builds an image-specific graph and employs graph propagation to reason about labels effectively. Specifically, we introduce a semantic disentangling module to extract category-wise semantic features as graph nodes and develop a correlation disentangling module to extract image-specific label correlations as graph edges. Performing graph convolutions on this image-specific graph allows better mining of difficult labels with weak visual representations. Visualization experiments reveal that our approach successfully disentangles the dominant label correlations existing within the input image. Through extensive experimentation, we demonstrate that our method achieves superior results on the challenging Microsoft COCO (MS-COCO), PASCAL visual object classes (PASCAL-VOC), NUS web image dataset (NUS-WIDE), and Visual Genome 500 (VG-500) datasets. Code is available at GitHub: https://github.com/caigitrepo/SCDGC.
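
The abstract describes building an image-specific label graph, with category-wise semantic features as nodes and image-specific correlations as edges, and propagating information over it with graph convolution. The snippet below is a minimal, hypothetical sketch of that general pattern (a per-image adjacency computed from node affinities, followed by one propagation step), not the authors' SCD-GC implementation; the module name `LabelGraphHead` and its layer choices are illustrative assumptions.

```python
# Hypothetical sketch of graph propagation over an image-specific label graph
# (not the authors' SCD-GC code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelGraphHead(nn.Module):
    def __init__(self, num_classes, dim):
        super().__init__()
        self.edge_proj = nn.Linear(dim, dim)   # scores pairwise label correlations
        self.gcn = nn.Linear(dim, dim)         # one graph-propagation step
        self.classifier = nn.Linear(dim, 1)    # per-node binary logit

    def forward(self, nodes):                  # nodes: (batch, num_classes, dim)
        q = self.edge_proj(nodes)
        # Image-specific adjacency: softmax-normalized node-to-node affinities.
        adj = F.softmax(q @ nodes.transpose(1, 2) / nodes.size(-1) ** 0.5, dim=-1)
        propagated = F.relu(self.gcn(adj @ nodes)) + nodes    # residual propagation
        return self.classifier(propagated).squeeze(-1)        # (batch, num_classes)

# Toy usage: 80 labels (e.g., MS-COCO) with 256-d category-wise node features.
head = LabelGraphHead(num_classes=80, dim=256)
logits = head(torch.randn(2, 80, 256))
print(logits.shape)   # torch.Size([2, 80])
```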
