1.
IEEE Trans Pattern Anal Mach Intell ; 45(9): 11184-11202, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37074900

ABSTRACT

Representing multimodal behaviors is a critical challenge for pedestrian trajectory prediction. Previous methods commonly represent this multimodality with multiple latent variables sampled repeatedly from a latent space, which makes interpretable trajectory prediction difficult. Moreover, the latent space is usually built by encoding global interaction into the future trajectory, which inevitably introduces superfluous interactions and thus degrades performance. To tackle these issues, we propose a novel Interpretable Multimodality Predictor (IMP) for pedestrian trajectory prediction, whose core idea is to represent a specific mode by its mean location. We model the distribution of mean locations as a Gaussian Mixture Model (GMM) conditioned on sparse spatio-temporal features, and sample multiple mean locations from the decoupled components of the GMM to encourage multimodality. Our IMP brings four-fold benefits: 1) interpretable predictions that provide semantics about the motion behavior of a specific mode; 2) friendly visualization of multimodal behaviors; 3) sound theoretical grounding for estimating the distribution of mean locations, supported by the central limit theorem; 4) effective sparse spatio-temporal features that reduce superfluous interactions and model the temporal continuity of interaction. Extensive experiments validate that our IMP not only outperforms state-of-the-art methods but also achieves controllable prediction by customizing the corresponding mean location.
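The mode-as-mean-location idea can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration rather than the paper's implementation: given the parameters of a conditioned GMM over 2-D mean locations, it draws one candidate endpoint per decoupled component, so each sample corresponds to one interpretable motion mode.

import numpy as np

def sample_mode_means(means, covs, rng=None):
    """Draw one mean location per GMM component.

    Sampling each component separately (rather than the mixture as a
    whole) keeps the modes decoupled, so every returned point can be
    read as the endpoint of one distinct motion mode.
    """
    rng = np.random.default_rng() if rng is None else rng
    return np.stack([rng.multivariate_normal(m, c)
                     for m, c in zip(means, covs)])

# Hypothetical 3-mode GMM over 2-D mean locations (e.g., veer left,
# go straight, veer right); covariances control per-mode spread.
means = np.array([[1.0, 2.0], [0.0, 3.0], [-1.0, 2.0]])
covs = np.stack([0.1 * np.eye(2)] * 3)
print(sample_mode_means(means, covs))  # one endpoint per mode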

2.
IEEE Trans Vis Comput Graph ; 29(12): 5556-5568, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36367917

ABSTRACT

3D scene graph generation (SGG) has attracted substantial interest in computer vision. Although the accuracy of 3D SGG with coarse classification and single relation labels has gradually improved, the performance of existing works is still far from satisfactory in fine-grained, multi-label settings. In this article, we propose a framework that fully exploits contextual information for the 3D SGG task, aiming to satisfy the requirements of fine-grained entity classes, multiple relation labels, and high accuracy simultaneously. Our approach is composed of a Graph Feature Extraction module and a Graph Contextual Reasoning module, which together achieve feature extraction with appropriate information redundancy, structured organization, and hierarchical inference. Our approach achieves superior or competitive performance over previous methods on the 3DSSG dataset, especially on the relationship prediction sub-task.
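As a rough sketch of what contextual reasoning over a scene graph can look like computationally, the snippet below implements one generic message-passing step over entity nodes, gated by relation features. This is a hypothetical simplification for illustration only, not the paper's Graph Feature Extraction or Graph Contextual Reasoning modules.

import numpy as np

def message_passing_step(node_feats, edges, edge_feats, W_self, W_msg):
    """One generic message-passing step over a scene graph.

    node_feats: (N, D) entity features; edges: list of (src, dst) pairs;
    edge_feats: (E, D) relation features, used here as elementwise gates.
    """
    agg = np.zeros_like(node_feats)
    deg = np.zeros(len(node_feats))
    for (s, t), e in zip(edges, edge_feats):
        agg[t] += node_feats[s] * e          # relation-gated neighbor message
        deg[t] += 1
    agg /= np.maximum(deg, 1)[:, None]       # mean over incoming messages
    return np.maximum(node_feats @ W_self + agg @ W_msg, 0)  # ReLU update

# Hypothetical toy graph: 3 entities, 2 relations, 4-d features.
rng = np.random.default_rng(0)
nodes = rng.normal(size=(3, 4))
out = message_passing_step(nodes, [(0, 1), (2, 1)],
                           rng.normal(size=(2, 4)),
                           rng.normal(size=(4, 4)),
                           rng.normal(size=(4, 4)))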

3.
IEEE Trans Image Process ; 30: 3499-3512, 2021.
Article in English | MEDLINE | ID: mdl-33667160

ABSTRACT

Automatic visual recognition can exploit the abundance of images with text descriptions on social media platforms to build large-scale labeled image datasets. In this paper, we propose a novel visual representation of text, named DG-VRT (Diverse GAN-Visual Representation on Text), which extracts visual features for recognition from synthetic images generated from the text by a diverse conditional Generative Adversarial Network (DCGAN). The DCGAN builds on current state-of-the-art text-to-image GANs and generates multiple synthetic images conditioned on a text using various prior noise vectors. We then extract deep visual features from the generated synthetic images to explore the underlying visual concepts, providing a visual transformation of the text in feature space. Finally, we combine image-level visual features, text-level features, and the visual features from synthetic images to recognize the images, and we also extend the proposed work to semantic segmentation. Extensive experiments on two benchmark datasets demonstrate the efficacy of the proposed text representation for visual recognition.
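The final fusion step, combining image-level, text-level, and synthetic-image features, can be sketched as below. The feature dimensions and the simple mean-then-concatenate fusion are assumptions for illustration; the paper's actual architecture may combine the sources differently.

import numpy as np

def fuse_representations(img_feat, txt_feat, synth_feats):
    """Fuse the three feature sources used for recognition.

    synth_feats: (K, D) visual features extracted from K synthetic
    images generated from the same text with K different prior noise
    vectors; averaging over K pools the diverse GAN samples.
    """
    gan_visual = synth_feats.mean(axis=0)
    return np.concatenate([img_feat, txt_feat, gan_visual])

# Hypothetical dimensions: 5 noise draws, 128-d visual, 64-d text features.
rng = np.random.default_rng(0)
fused = fuse_representations(rng.normal(size=128), rng.normal(size=64),
                             rng.normal(size=(5, 128)))
print(fused.shape)  # (320,)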

4.
IEEE Trans Pattern Anal Mach Intell ; 40(3): 582-594, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28320651

ABSTRACT

Active learning is an effective way to engage users in interactively training models for visual recognition more efficiently. The vast majority of previous work has focused on active learning with a single human oracle; active learning with multiple oracles in a collaborative setting has not been well explored. We present a collaborative computational model for active learning with multiple human oracles whose input may carry different levels of noise. It yields not only an ensemble kernel machine that is robust to label noise, but also a principled label-quality measure for detecting irresponsible labelers online. Instead of running an independent active learning process for each human oracle, our model captures the inherent correlations among labelers through the data they share. Our experiments with both simulated and real crowd-sourced noisy labels demonstrate the efficacy of our model.
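A much-simplified proxy for an online label-quality measure is iteratively re-weighted majority voting: score each labeler by agreement with a reliability-weighted consensus on the shared data, and flag persistently low scorers. The snippet below is a hypothetical sketch of that idea, not the paper's ensemble kernel machine.

import numpy as np

def labeler_quality(labels, n_iters=10):
    """labels: (L, N) matrix of +/-1 labels from L labelers on N shared items.

    Alternates between a reliability-weighted consensus over the items
    and per-labeler agreement with that consensus; a persistently low
    agreement score suggests an irresponsible labeler.
    """
    L, _ = labels.shape
    w = np.ones(L) / L
    for _ in range(n_iters):
        consensus = np.sign(w @ labels)       # weighted vote per item
        consensus[consensus == 0] = 1         # break ties arbitrarily
        agree = (labels == consensus).mean(axis=1)
        w = agree / agree.sum()
    return agree                              # quality score in [0, 1]

# Labelers 0 and 1 flip ~20% of labels; labeler 2 answers near-randomly,
# so its estimated quality drops well below the others.
rng = np.random.default_rng(0)
truth = rng.choice([-1, 1], size=100)
noisy = np.stack([truth * rng.choice([1, 1, 1, 1, -1], size=100),
                  truth * rng.choice([1, 1, 1, 1, -1], size=100),
                  truth * rng.choice([1, -1], size=100)])
print(labeler_quality(noisy))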


Subject(s)
Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated; Supervised Machine Learning; Algorithms; Crowding; Female; Humans; Male
5.
Int J Comput Vis ; 116(2): 136-160, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26924892

ABSTRACT

We present a noise-resilient probabilistic model for active learning of a Gaussian process classifier from crowds, i.e., a set of noisy labelers. It explicitly models both the overall label noise and the expertise level of each individual labeler with a two-level flip model. Expectation propagation is adopted for efficient approximate Bayesian inference of the probabilistic classification model, based on which a generalized EM algorithm is derived to estimate both the global label noise and the expertise of each labeler. The probabilistic nature of the model immediately allows the predictive entropy to be used for active selection of data samples to be labeled, and the estimated expertise to be used for active selection of high-quality labelers to label them. We apply the proposed model to four visual recognition tasks, i.e., object category recognition, multi-modal activity recognition, gender recognition, and fine-grained classification, on four datasets with real crowd-sourced labels from Amazon Mechanical Turk. The experiments clearly demonstrate the efficacy of the proposed model. In addition, we extend the model with the Predictive Active Set Selection Method to speed up the active learning system, and verify its efficacy with experiments on the first three datasets. The results show that the extended model not only preserves high accuracy but also achieves higher efficiency.
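Two ingredients of this model can be sketched compactly: the predictive entropy used for active sample selection, and a flip-style noise model. The parameterization below (one global flip rate plus a per-labeler expertise, composed as independent flips) is an assumption for illustration, not the paper's exact two-level model.

import numpy as np

def predictive_entropy(p):
    """Binary predictive entropy; largest where the classifier is most unsure."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def select_query(probs):
    """Entropy-based active selection: query the most uncertain sample."""
    return int(np.argmax(predictive_entropy(np.asarray(probs))))

def flip_likelihood(y_obs, y_true, global_noise, expertise):
    """Simplified two-level flip model (hypothetical parameterization).

    The true label passes through a global flip (prob. global_noise) and
    a labeler-specific flip (prob. 1 - expertise); a net flip occurs
    when exactly one of the two fires.
    """
    f1, f2 = global_noise, 1.0 - expertise
    p_flip = f1 * (1 - f2) + (1 - f1) * f2
    return p_flip if y_obs != y_true else 1 - p_flip

probs = [0.92, 0.55, 0.10, 0.48]    # hypothetical GP predictive probabilities
print(select_query(probs))           # -> 3 (0.48 is closest to 0.5)
print(flip_likelihood(+1, -1, global_noise=0.1, expertise=0.9))  # 0.18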
