Results 1 - 3 of 3
1.
IEEE Trans Image Process; 32: 5779-5793, 2023.
Article in English | MEDLINE | ID: mdl-37847621

ABSTRACT

By exploiting the localizable representations in deep CNNs, weakly supervised object localization (WSOL) methods can determine the position of an object in an image while being trained only on a classification task. However, the partial activation problem, in which the discriminative objective activates only the most salient object parts, prevents the network from localizing objects accurately. To alleviate this problem, we propose the Structure-Preserved Attention Activated Network (SPA2Net), a simple and effective one-stage WSOL framework that exploits the structure-preserving ability of deep features. Unlike traditional WSOL approaches, we decouple the localization task from the classification branch to reduce their mutual influence, introducing a localization branch that is refined online by a self-supervised, structure-preserving localization mask. Specifically, we employ high-order self-correlation as a structural prior to enhance the perception of spatial interactions within convolutional features. By succinctly combining this structural prior with spatial attention, SPA2Net's activations spread from discriminative parts to the whole object during training. To avoid the structure-missing issue caused by the classification network, we further apply a restricted activation loss (RAL) to distinguish foreground from background along the channel dimension. In conjunction with the self-supervised localization branch, SPA2Net directly predicts a class-irrelevant localization map while encouraging the network to attend to the target region for accurate localization. Extensive experiments on two public benchmarks, CUB-200-2011 and ILSVRC, show that SPA2Net achieves substantial and consistent performance gains over baseline approaches. The code and models are available at https://github.com/MsterDC/SPA2Net.
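
As a rough illustration of the coupling described in the abstract, the PyTorch sketch below computes a high-order self-correlation over spatial positions and uses it to spread a spatial attention map across structurally related locations. It is a minimal sketch under assumed tensor shapes, a naive channel-mean attention map, and an assumed correlation order; it is not the released SPA2Net implementation (see the repository above for that).

```python
import torch
import torch.nn.functional as F

def high_order_self_correlation(feat: torch.Tensor, order: int = 2) -> torch.Tensor:
    """Pairwise cosine self-correlation between spatial positions, iterated
    so that positions sharing similar correlation patterns are also linked.
    feat: (B, C, H, W) -> returns (B, HW, HW)."""
    x = F.normalize(feat.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, C)
    corr = x @ x.transpose(1, 2)                              # first-order correlation
    for _ in range(order - 1):
        r = F.normalize(corr, dim=-1)   # each row becomes a new descriptor
        corr = r @ r.transpose(1, 2)    # correlate the correlation patterns
    return corr

def structure_aware_map(feat: torch.Tensor) -> torch.Tensor:
    """Couple the structural prior with a naive spatial attention map so
    activation can propagate from salient parts toward the whole object."""
    b, c, h, w = feat.shape
    attn = feat.mean(dim=1).flatten(1)            # (B, HW), assumed attention
    corr = high_order_self_correlation(feat)      # (B, HW, HW) structural prior
    spread = torch.bmm(corr.softmax(dim=-1), attn.unsqueeze(-1))
    return spread.view(b, h, w)

# Random features standing in for a CNN backbone's last conv layer.
feat = torch.randn(2, 256, 14, 14)
loc_map = structure_aware_map(feat)               # (2, 14, 14)
```

Multiplying the softmax-normalized correlation with the attention vector lets activation at a discriminative position reinforce structurally similar positions, which is the intuition behind spreading from part to whole.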

2.
Article in English | MEDLINE | ID: mdl-36417732

ABSTRACT

Weakly supervised object localization (WSOL), which trains object localization models using only image category annotations, remains a challenging problem. Existing approaches based on convolutional neural networks (CNNs) tend to miss the full object extent while activating discriminative object parts. Our analysis shows that this is caused by CNNs' intrinsic characteristics, which make it difficult to capture object semantics over long distances. In this article, we introduce the vision transformer to WSOL, aiming to capture long-range semantic dependencies among features by leveraging the transformer's cascaded self-attention mechanism. We propose the token semantic coupled attention map (TS-CAM) method, which first decomposes class-aware semantics and then couples the semantics with attention maps for semantic-aware activation. To capture object semantics at long distances and avoid partial activation, TS-CAM performs spatial embedding by partitioning an image into a set of patch tokens. To incorporate object category information into the patch tokens, TS-CAM reallocates category-related semantics to each patch token. The patch tokens are finally coupled with the semantic-agnostic attention maps to perform semantic-aware object localization. By introducing semantic tokens to produce semantic-aware attention maps, we further explore TS-CAM's capability for multicategory object localization. Experiments show that TS-CAM outperforms its CNN-CAM counterpart by 11.6% and 28.9% on the ILSVRC and CUB-200-2011 datasets, respectively, improving on the state of the art by large margins. TS-CAM also demonstrates superiority for multicategory object localization on the Pascal VOC dataset. The code is available at github.com/yuanyao366/ts-cam-extension.
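
The core coupling of class-aware patch semantics with semantic-agnostic attention can be sketched as follows. The attention-rollout accumulation, head averaging, and tensor shapes here are illustrative assumptions rather than the exact TS-CAM pipeline; consult the linked repository for the authors' code.

```python
import torch

def ts_cam_style_map(attn_layers, patch_semantics, h, w):
    """attn_layers: list of (B, heads, N+1, N+1) self-attention maps, CLS at index 0.
    patch_semantics: (B, N, K) class scores reallocated to patch tokens.
    Returns a semantic-aware localization map of shape (B, K, h, w)."""
    b, _, n1, _ = attn_layers[0].shape
    # Accumulate CLS-to-patch attention across layers (attention rollout).
    rollout = torch.eye(n1).expand(b, n1, n1).clone()
    for a in attn_layers:
        a = a.mean(dim=1)                      # average heads: (B, N+1, N+1)
        a = a + torch.eye(n1)                  # account for residual connections
        a = a / a.sum(dim=-1, keepdim=True)    # renormalize rows
        rollout = a @ rollout
    cls_attn = rollout[:, 0, 1:]               # CLS attention to patches: (B, N)
    # Couple semantic-agnostic attention with class-aware patch semantics.
    coupled = cls_attn.unsqueeze(-1) * patch_semantics   # (B, N, K)
    return coupled.transpose(1, 2).reshape(b, -1, h, w)

# Random stand-ins for a 12-layer ViT with 14x14 patches and 200 classes.
attn = [torch.rand(2, 12, 197, 197) for _ in range(12)]
sem = torch.rand(2, 196, 200)
maps = ts_cam_style_map(attn, sem, 14, 14)     # (2, 200, 14, 14)
```

Because the attention maps carry no class information and the patch semantics carry no spatial aggregation, their elementwise product yields per-class maps that localize the full object rather than only its most discriminative part.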

3.
IEEE Trans Vis Comput Graph; 27(4): 2298-2312, 2021 Apr.
Article in English | MEDLINE | ID: mdl-31647438

ABSTRACT

With the surge of images in the information era, people demand effective and accurate ways to access meaningful visual information. In this article, we propose a content-based approach that automatically generates a clear and informative visual summarization of an image collection, grounded in design principles and cognitive psychology. We first introduce a novel method for building representative and nonredundant summaries of image collections, ensuring data cleanliness and emphasizing important information. We then propose a tree-based algorithm with a two-step optimization strategy to generate the final layout: (1) an initial layout is created by randomly constructing a tree based on the grouping results of the input image set; (2) the layout is refined by a coarse greedy adjustment, followed by gradient backpropagation drawing on the training procedure of neural networks. We demonstrate the usefulness and effectiveness of our method through extensive experiments and user studies. Our visual summarization algorithm captures the main content of image collections more precisely and efficiently than alternative methods and commercial tools.
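
A toy version of the gradient-based refinement step might look like the PyTorch sketch below, which treats thumbnail rectangles as learnable parameters and back-propagates through an overlap-plus-bounds loss. The loss terms, parameterization, and optimizer settings are illustrative assumptions, not the paper's actual objective or tree-based layout procedure.

```python
import torch

def refine_layout(rects: torch.Tensor, steps: int = 200, lr: float = 0.01) -> torch.Tensor:
    """Gradient-based layout refinement sketch.
    rects: (N, 4) tensor of [x, y, w, h] in unit-canvas coordinates.
    Minimizes pairwise overlap plus out-of-canvas penalties."""
    params = rects.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        x, y, w, h = params.unbind(dim=1)
        # Pairwise overlap area between axis-aligned rectangles.
        ox = (torch.min(x[:, None] + w[:, None], x[None] + w[None])
              - torch.max(x[:, None], x[None])).clamp(min=0)
        oy = (torch.min(y[:, None] + h[:, None], y[None] + h[None])
              - torch.max(y[:, None], y[None])).clamp(min=0)
        overlap = (ox * oy).triu(diagonal=1).sum()   # count each pair once
        # Penalty for leaving the unit canvas on any side.
        bounds = ((x + w - 1).clamp(min=0) ** 2 + (y + h - 1).clamp(min=0) ** 2
                  + x.clamp(max=0) ** 2 + y.clamp(max=0) ** 2).sum()
        loss = overlap + bounds
        opt.zero_grad()
        loss.backward()
        opt.step()
    return params.detach()

# Usage: four randomly placed thumbnails nudged into a cleaner layout.
layout = refine_layout(torch.rand(4, 4) * 0.5)
```

Treating the layout as a differentiable loss over rectangle parameters is what lets the refinement reuse standard neural-network training machinery, as the abstract suggests.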
