Results 1 - 7 of 7
1.
Inorg Chem; 58(9): 5375-5379, 2019 May 06.
Article in English | MEDLINE | ID: mdl-30977372

ABSTRACT

This Communication demonstrates a novel and facile approach to achieving monodisperse sea-urchin-like Pt nanodendrites under a 1 bar hydrogen atmosphere at 165 °C. These Pt nanodendrites can further be used as seeds for the formation of Pt/Au nanodendrites. Both the Pt and Pt/Au nanodendrites exhibit the desired electrocatalytic activity for the methanol oxidation reaction.

2.
Article in English | MEDLINE | ID: mdl-39115993

ABSTRACT

The vision transformer has demonstrated great potential across a wide range of vision tasks. However, it inevitably suffers from poor generalization when a distribution shift occurs at test time (i.e., on out-of-distribution data). To mitigate this issue, we propose a novel method, Semantic-aware Message Broadcasting (SAMB), which enables more informative and flexible feature alignment for unsupervised domain adaptation (UDA). Specifically, we study the attention module in the vision transformer and observe that aligning features through a single global class token lacks flexibility: the class token exchanges information with all image tokens in the same manner and ignores the rich semantics of different regions. In this paper, we aim to improve the richness of the alignment features by enabling semantic-aware adaptive message broadcasting. We introduce a set of learned group tokens as nodes that aggregate global information from all image tokens, while encouraging different group tokens to adaptively broadcast messages to different semantic regions. In this way, our message broadcasting drives the group tokens to learn more informative and diverse representations for effective domain alignment. Moreover, we systematically study the effects of adversarial-based feature alignment (ADA) and pseudo-label-based self-training (PST) on UDA. We find that a simple two-stage training strategy combining ADA and PST can further improve the adaptation capability of the vision transformer. Extensive experiments on DomainNet, OfficeHome, and VisDA-2017 demonstrate the effectiveness of our method for UDA.
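
A minimal PyTorch sketch of the group-token message broadcasting described in this abstract: a few learned group tokens each attend to all image tokens and aggregate region-specific information. The class name GroupTokenBroadcast, the single-head attention, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: learned group tokens attend to image tokens (shapes and names assumed).
import torch
import torch.nn as nn

class GroupTokenBroadcast(nn.Module):
    def __init__(self, dim: int = 768, num_groups: int = 4):
        super().__init__()
        # Learned group tokens, each expected to specialize on different semantic regions.
        self.group_tokens = nn.Parameter(torch.randn(1, num_groups, dim) * 0.02)
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, image_tokens: torch.Tensor) -> torch.Tensor:
        # image_tokens: (batch, num_patches, dim) from a vision transformer block.
        b, _, d = image_tokens.shape
        g = self.group_tokens.expand(b, -1, -1)                         # (b, groups, d)
        attn = self.q(g) @ self.k(image_tokens).transpose(1, 2) / d ** 0.5
        attn = attn.softmax(dim=-1)                # each group token weights all patches
        return attn @ self.v(image_tokens)         # group-specific aggregated features

# Usage: GroupTokenBroadcast()(torch.randn(2, 196, 768)) -> tensor of shape (2, 4, 768)
```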

3.
IEEE Trans Pattern Anal Mach Intell; 41(8): 1963-1978, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30714909

ABSTRACT

Skeleton-based human action recognition has recently attracted increasing attention thanks to the accessibility and popularity of 3D skeleton data. One of the key challenges in action recognition lies in the large variation of action representations when they are captured from different viewpoints. To alleviate the effects of view variation, this paper introduces a novel view adaptation scheme that automatically determines the virtual observation viewpoints over the course of an action in a learning-based, data-driven manner. Instead of re-positioning the skeletons according to a fixed, human-defined prior criterion, we design two view-adaptive neural networks, VA-RNN and VA-CNN, built respectively on a recurrent neural network (RNN) with long short-term memory (LSTM) units and a convolutional neural network (CNN). In each network, a novel view adaptation module learns and determines the most suitable observation viewpoints and transforms the skeletons to those viewpoints for end-to-end recognition with a main classification network. Ablation studies show that the proposed view-adaptive models transform skeletons from various views to much more consistent virtual viewpoints. The models thus largely eliminate the influence of viewpoint, allowing the networks to focus on learning action-specific features and yielding superior performance. In addition, we design a two-stream scheme (referred to as VA-fusion) that fuses the scores of the two networks to produce the final prediction, further improving performance. Moreover, random rotation of the skeleton sequences is employed to improve the robustness of the view adaptation models and alleviate overfitting during training. Extensive experimental evaluations on five challenging benchmarks demonstrate the effectiveness of the proposed view-adaptive networks and their superior performance over state-of-the-art approaches.
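
A hedged sketch of the view-adaptation step described above: a small regression subnetwork predicts a per-frame rotation and translation, and the skeleton is re-observed from that learned virtual viewpoint before classification. The simple linear regressor and the names ViewAdapt and rotation_matrix are assumptions for illustration, not the paper's architecture.

```python
# Sketch only: learn a virtual viewpoint (rotation + translation) and re-observe the skeleton.
import torch
import torch.nn as nn

def rotation_matrix(angles: torch.Tensor) -> torch.Tensor:
    # angles: (batch, 3) Euler angles; returns (batch, 3, 3) rotation matrices.
    a, b, g = angles[:, 0], angles[:, 1], angles[:, 2]
    ca, sa, cb, sb, cg, sg = a.cos(), a.sin(), b.cos(), b.sin(), g.cos(), g.sin()
    zero, one = torch.zeros_like(a), torch.ones_like(a)
    rx = torch.stack([one, zero, zero, zero, ca, -sa, zero, sa, ca], -1).view(-1, 3, 3)
    ry = torch.stack([cb, zero, sb, zero, one, zero, -sb, zero, cb], -1).view(-1, 3, 3)
    rz = torch.stack([cg, -sg, zero, sg, cg, zero, zero, zero, one], -1).view(-1, 3, 3)
    return rz @ ry @ rx

class ViewAdapt(nn.Module):
    def __init__(self, num_joints: int = 25):
        super().__init__()
        self.regress = nn.Linear(num_joints * 3, 6)  # 3 rotation angles + 3 translation offsets

    def forward(self, joints: torch.Tensor) -> torch.Tensor:
        # joints: (batch, num_joints, 3) skeleton coordinates for one frame.
        params = self.regress(joints.flatten(1))
        r = rotation_matrix(params[:, :3])
        t = params[:, 3:].unsqueeze(1)               # (batch, 1, 3)
        return (joints - t) @ r.transpose(1, 2)      # skeleton seen from the virtual viewpoint
```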

4.
Article in English | MEDLINE | ID: mdl-31484119

ABSTRACT

Recurrent neural networks (RNNs) are capable of modeling the temporal dependencies of complex sequential data. In general, currently available RNN structures concentrate on controlling the contributions of current and previous information, while the differing importance of individual elements within an input vector is usually ignored. We propose a simple yet effective Element-wise-Attention Gate (EleAttG), which can easily be added to an RNN block (e.g., all RNN neurons in an RNN layer) to give the RNN neurons attentiveness capability. For an RNN block, an EleAttG adaptively modulates the input by assigning a different level of importance, i.e., attention, to each element/dimension of the input. We refer to an RNN block equipped with an EleAttG as an EleAtt-RNN block. Instead of modulating the input as a whole, the EleAttG modulates it at fine granularity, i.e., element-wise, and the modulation is content adaptive. The proposed EleAttG, as an additional fundamental unit, is general and can be applied to any RNN structure, e.g., a standard RNN, Long Short-Term Memory (LSTM), or Gated Recurrent Unit (GRU). We demonstrate the effectiveness of the proposed EleAtt-RNN by applying it to different tasks, including action recognition from both skeleton data and RGB videos, gesture recognition, and sequential MNIST classification. Experiments show that adding attentiveness through EleAttGs to RNN blocks significantly improves the power of RNNs.
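
The element-wise attention gate lends itself to a short sketch: conditioned on the current input and the previous hidden state, a gate assigns an importance to every input dimension and modulates the input before it enters a standard recurrent cell (here a GRU cell). The sigmoid gate and the class name EleAttGRUCell are assumptions; the abstract does not fix these details.

```python
# Sketch only: an element-wise attention gate wrapped around a GRU cell.
import torch
import torch.nn as nn

class EleAttGRUCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Gate conditioned on the current input and the previous hidden state.
        self.gate = nn.Linear(input_size + hidden_size, input_size)
        self.cell = nn.GRUCell(input_size, hidden_size)

    def forward(self, x_t: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        # x_t: (batch, input_size), h_prev: (batch, hidden_size)
        a_t = torch.sigmoid(self.gate(torch.cat([x_t, h_prev], dim=-1)))
        return self.cell(a_t * x_t, h_prev)   # element-wise modulated input enters the cell
```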

5.
IEEE Trans Image Process; 27(7): 3459-3471, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29671746

ABSTRACT

Human action analytics has attracted considerable attention in computer vision for decades. It is important to extract discriminative spatio-temporal features to model the spatial and temporal evolution of different actions. In this paper, we propose a spatial and temporal attention model to explore discriminative spatial and temporal features for human action recognition and detection from skeleton data. We build our networks on recurrent neural networks with long short-term memory units. The learned model is capable of selectively focusing on discriminative joints of the skeleton within each input frame and paying different levels of attention to the outputs of different frames. To ensure effective training of the network for action recognition, we propose a regularized cross-entropy loss to drive the learning process and develop a joint training strategy accordingly. Moreover, based on the temporal attention, we develop a method to generate temporal action proposals for action detection. We evaluate the proposed method on the SBU Kinect Interaction, NTU RGB+D, and PKU-MMD data sets. Experimental results demonstrate the effectiveness of the proposed model for both action recognition and action detection.
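
A sketch, under stated assumptions, of how the spatial attention and a regularized cross-entropy loss in the spirit of this abstract might look: attention weights over joints pool per-frame features, and the loss adds terms that keep the spatial attention from collapsing onto a few joints and keep the temporal attention magnitudes bounded. The exact regularizers and the weights lambda_s and lambda_t are illustrative, not the paper's.

```python
# Sketch only: joint-wise spatial attention plus a regularized cross-entropy loss.
import torch
import torch.nn.functional as F

def spatial_attend(joint_feats, scores):
    # joint_feats: (batch, joints, feat); scores: (batch, joints), unnormalized.
    alpha = torch.softmax(scores, dim=1)                    # attention over joints in a frame
    return (alpha.unsqueeze(-1) * joint_feats).sum(dim=1), alpha

def regularized_ce(logits, labels, alpha_seq, beta_seq, lambda_s=0.01, lambda_t=0.001):
    # alpha_seq: (batch, frames, joints) spatial attention; beta_seq: (batch, frames) temporal.
    ce = F.cross_entropy(logits, labels)
    # Encourage every joint to receive some attention over the whole sequence.
    spatial_reg = ((1.0 - alpha_seq.sum(dim=1)) ** 2).sum(dim=-1).mean()
    # Keep temporal attention magnitudes bounded.
    temporal_reg = (beta_seq ** 2).mean()
    return ce + lambda_s * spatial_reg + lambda_t * temporal_reg
```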

6.
R Soc Open Sci; 5(7): 180282, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30109084

ABSTRACT

In this paper, nanostructured carbon black (CB) was employed directly, for the first time, for the simultaneous electrochemical determination of trace Pb(II) and Cd(II) using differential pulse anodic stripping voltammetry. The morphology and surface properties of the conductive CB were characterized by transmission electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy, ultraviolet-visible spectroscopy and Raman spectroscopy. Its special pore structure, together with surface chemical functional groups, endows CB with excellent catalytic and adsorption properties. Parameters affecting the electroanalytical performance were investigated systematically, including deposition time and potential, solution pH, suspension volume, and the amounts of Bi(III) and Nafion solution. The CB-Nafion-modified glassy carbon electrode sensor showed a linear response from 6 to 1000 nM for the selective and simultaneous determination of both ions. Under optimized conditions, the detection limits were calculated to be 8 nM (0.9 µg l-1) for Cd(II) and 5 nM (1.0 µg l-1) for Pb(II) (S/N = 3). The method was successfully applied to the determination of real samples, and good recoveries were achieved from different spiked samples. The low detection limits and good stability of the modified electrode demonstrate its promise for the detection of trace metal ions in practical applications.
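
As a quick consistency check (not part of the paper), the reported mass-concentration detection limits follow from the molar limits via c[µg/L] = c[nM] × M[g/mol] / 1000:

```python
# Convert the reported detection limits from nM to ug/L using standard molar masses.
MOLAR_MASS = {"Cd": 112.41, "Pb": 207.2}   # g/mol
for ion, limit_nM in [("Cd", 8), ("Pb", 5)]:
    print(ion, round(limit_nM * MOLAR_MASS[ion] / 1000, 2), "ug/L")  # Cd ~0.9, Pb ~1.0
```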

7.
IEEE Trans Image Process; 19(4): 946-957, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20007048

ABSTRACT

Compound images are a combination of text, graphics and natural images. They exhibit strongly anisotropic features, especially in the text and graphics parts, which often render conventional compression inefficient. This paper therefore proposes a novel coding scheme based on H.264 intraframe coding. In the scheme, two new intra modes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, in which intra-predicted residues are directly quantized and coded without a transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map for compression. Each block selects its coding mode from the two new modes and the existing H.264 intra modes by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images while maintaining performance comparable to H.264 for natural images.
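
A minimal sketch of the base-colors-and-index-map (BCIM) idea: a block is reduced to a small palette of base colors plus a per-pixel index map, which is then what the mode entropy-codes. The k-means-style palette selection below is an illustrative stand-in; the paper's actual quantizer may differ.

```python
# Sketch only: represent a block by a few base colors and an index map (adaptive color quantization).
import numpy as np

def bcim_encode(block: np.ndarray, num_colors: int = 4, iters: int = 10):
    # block: (H, W, 3) uint8 pixels of one coding block.
    pixels = block.reshape(-1, 3).astype(np.float32)
    rng = np.random.default_rng(0)
    base = pixels[rng.choice(len(pixels), num_colors, replace=False)]
    for _ in range(iters):                                   # plain k-means refinement
        idx = np.argmin(((pixels[:, None] - base[None]) ** 2).sum(-1), axis=1)
        for k in range(num_colors):
            if np.any(idx == k):
                base[k] = pixels[idx == k].mean(axis=0)
    index_map = idx.reshape(block.shape[:2])
    return base.astype(np.uint8), index_map                  # palette + index map to entropy-code

def bcim_decode(base: np.ndarray, index_map: np.ndarray) -> np.ndarray:
    return base[index_map]                                   # reconstructed block
```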
