1.
IEEE Trans Pattern Anal Mach Intell ; 45(5): 6196-6213, 2023 May.
Article in English | MEDLINE | ID: mdl-36260584

ABSTRACT

High-quality 4D reconstruction of human performance with complex interactions with various objects is essential in real-world scenarios and enables numerous immersive VR/AR applications. However, recent advances still fail to provide reliable performance reconstruction, suffering from challenging interaction patterns and severe occlusions, especially in the monocular setting. To fill this gap, we propose RobustFusion, a robust volumetric performance reconstruction system for human-object interaction scenarios that uses only a single RGBD sensor and combines various data-driven visual and interaction cues to handle complex interaction patterns and severe occlusions. We propose a semantic-aware scene decoupling scheme that models occlusions explicitly, with segmentation refinement and robust object tracking to reduce disentanglement uncertainty and maintain temporal consistency. We further introduce a robust performance capture scheme aided by various data-driven cues, which not only enables re-initialization but also models complex human-object interaction patterns in a data-driven manner. To this end, we introduce a spatial relation prior that prevents implausible intersections, as well as data-driven interaction cues that maintain natural motions, especially in regions under severe human-object occlusion. We also adopt an adaptive fusion scheme for temporally coherent human-object reconstruction, with occlusion analysis and human parsing cues. Extensive experiments demonstrate that our approach achieves high-quality 4D human performance reconstruction under complex human-object interactions while maintaining a lightweight monocular setting.
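The abstract does not give the exact form of the spatial relation prior; the following is a minimal sketch of one plausible formulation, assuming a hypothetical `object_sdf` callable that returns signed distances to the tracked object (negative inside), not the authors' actual implementation.

```python
import numpy as np

def intersection_penalty(human_verts, object_sdf, margin=0.0):
    """Hypothetical sketch of a spatial relation prior: penalize human
    vertices that penetrate the object's signed distance field (SDF).

    human_verts : (N, 3) array of tracked human surface vertices.
    object_sdf  : callable mapping (N, 3) points -> (N,) signed
                  distances, negative inside the object. Assumed
                  interface, not the paper's API.
    """
    d = object_sdf(human_verts)              # signed distance per vertex
    violation = np.maximum(margin - d, 0.0)  # > 0 only when penetrating
    return np.sum(violation ** 2)            # quadratic penetration cost
```

A term like this, added to the tracking energy, pushes interpenetrating regions back toward plausible contact even when they are occluded in the input view.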

2.
IEEE Trans Image Process ; 31: 2257-2267, 2022.
Article in English | MEDLINE | ID: mdl-35235510

ABSTRACT

Scene Representation Networks (SRNs) have proven to be a powerful tool for novel view synthesis in recent work. They learn a mapping from the world coordinates of spatial points to radiance color and scene density using a fully connected network. In practice, however, scene texture contains complex high-frequency details that are hard for a network with limited parameters to memorize, leading to disturbing blur when rendering novel views. In this paper, we propose to learn 'residual color' instead of 'radiance color' for novel view synthesis, i.e., the residual between the surface color and a reference color. The reference color is calculated from spatial color priors extracted from the input view observations. The beauty of this strategy is that the residuals between radiance color and reference color are close to zero for most spatial points and are therefore easier to learn. We present a novel view synthesis system that learns residual color using an SRN. Experiments on public datasets demonstrate that the proposed method achieves competitive performance in preserving high-resolution details, producing visually more pleasing results than the state of the art.
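To make the residual-color idea concrete, here is a minimal PyTorch sketch under stated assumptions: the network predicts density plus a small signed color correction that is added to a reference color supplied externally (in the paper this would come from reprojecting the input views; here it is a stand-in argument). This is an illustration of the technique, not the authors' code.

```python
import torch
import torch.nn as nn

class ResidualColorField(nn.Module):
    """Sketch: MLP maps a 3D point to density and an RGB residual,
    which is added to a reference color from spatial color priors."""

    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density + 3 RGB residual channels
        )

    def forward(self, xyz, reference_rgb):
        out = self.mlp(xyz)
        density = torch.relu(out[..., :1])
        residual = torch.tanh(out[..., 1:])        # small signed correction
        color = (reference_rgb + residual).clamp(0.0, 1.0)
        return density, color

# Shape check with random stand-ins; real reference colors would be
# computed from the input view observations.
pts = torch.rand(1024, 3)
ref = torch.rand(1024, 3)
sigma, rgb = ResidualColorField()(pts, ref)
```

Because the residual is near zero wherever the reference color is already accurate, most of the network's capacity is spent only on the hard, view-dependent details.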

3.
IEEE Trans Pattern Anal Mach Intell ; 44(3): 1519-1533, 2022 Mar.
Article in English | MEDLINE | ID: mdl-32877330

ABSTRACT

High-quality reconstruction of 3D geometry and texture plays a vital role in providing an immersive perception of the real world, and online computation makes 3D reconstruction practical for interaction. We present an RGBD-based, globally consistent dense 3D reconstruction approach in which high-quality texture patches (i.e., at the spatial resolution of the RGB image) are mapped onto high-resolution ([Formula: see text]) geometric models online. The whole pipeline uses only the CPU of a portable device. For real-time geometric reconstruction with online texturing, we solve the texture optimization problem with a simplified incremental MRF solver, embedded in the geometric reconstruction pipeline and using a sparse voxel sampling strategy. We also propose an efficient reference-based color adjustment scheme that achieves consistent texture patch colors under inconsistent luminance. Quantitative and qualitative experiments demonstrate that our online scheme achieves a realistic visualization of the environment with more abundant detail, while consuming fairly compact memory and far less computation than existing solutions.
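The abstract does not spell out the color adjustment math; one simple, plausible reading is a per-channel gain fitted by least squares that maps a patch's colors onto the reference colors observed at the same surface points. The sketch below is an assumption-labeled illustration, not the paper's exact scheme.

```python
import numpy as np

def patch_color_gain(patch_rgb, reference_rgb):
    """Hypothetical reference-based color adjustment: fit a per-channel
    gain g minimizing ||g * patch - reference||^2 over corresponding
    color samples, then rescale the whole texture patch by g.

    patch_rgb, reference_rgb : (N, 3) corresponding RGB samples in [0, 1].
    """
    # Closed-form least-squares gain per channel: g = <p, r> / <p, p>
    num = np.sum(patch_rgb * reference_rgb, axis=0)
    den = np.sum(patch_rgb * patch_rgb, axis=0) + 1e-8
    return num / den

# Usage: bring a patch captured under different luminance into agreement
# with the reference colors in its overlap region.
gain = patch_color_gain(np.random.rand(500, 3), np.random.rand(500, 3))
```

A multiplicative gain of this kind compensates for exposure and luminance differences between frames without altering the patch's fine texture detail.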
