Video Salient Object Detection Using Spatiotemporal Deep Features.
IEEE Trans Image Process; 27(10): 5002-5015, 2018 Oct.
Article in En | MEDLINE | ID: mdl-29985139
This paper presents a method for detecting salient objects in videos in which temporal information, in addition to spatial information, is fully taken into account. Following recent reports on the advantage of deep features over conventional handcrafted features, we propose a new set of spatiotemporal deep (STD) features that exploit local and global contexts over frames. We also propose a new spatiotemporal conditional random field (STCRF) to compute saliency from STD features. STCRF is our extension of the CRF to the temporal domain and describes the relationships among neighboring regions both within a frame and across frames. STCRF yields temporally consistent saliency maps over frames, contributing to accurate localization of salient object boundaries and to noise reduction during detection. Our proposed method first segments an input video at multiple scales and then computes a saliency map at each scale level using STD features with STCRF. The final saliency map is computed by fusing the saliency maps at the different scale levels. Our experiments, using publicly available benchmark datasets, confirm that the proposed method significantly outperforms state-of-the-art methods. We also applied our saliency computation to the video object segmentation task, showing that our method outperforms existing video object segmentation methods.
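To make the multi-scale pipeline outlined in the abstract concrete, a minimal Python sketch follows. It only mirrors the three-stage flow (per-scale segmentation, per-scale saliency, fusion of the scale-level maps); the function names, the grid-based segmentation, the frame-difference saliency stand-in, and the averaging-based fusion are illustrative assumptions and do not reproduce the authors' STD features or STCRF inference.

```python
import numpy as np

def segment_video(frames, scale):
    """Toy stand-in for multi-scale segmentation: partition every frame
    into a scale x scale grid of regions and return a label map per frame."""
    t, h, w = frames.shape[:3]
    ys = np.arange(h) * scale // h                   # row -> grid row index
    xs = np.arange(w) * scale // w                   # col -> grid col index
    labels = ys[:, None] * scale + xs[None, :]
    return np.broadcast_to(labels, (t, h, w))

def saliency_at_scale(frames, labels):
    """Toy per-scale saliency: average frame-difference energy per region.
    In the paper this step uses STD features and STCRF inference instead."""
    gray = frames.mean(axis=-1)                      # (t, h, w) intensity
    motion = np.abs(np.diff(gray, axis=0, prepend=gray[:1]))
    sal = np.zeros_like(gray)
    for f in range(gray.shape[0]):
        for r in np.unique(labels[f]):
            mask = labels[f] == r
            sal[f][mask] = motion[f][mask].mean()    # region-level score
    return sal / (sal.max() + 1e-8)                  # normalize to [0, 1]

def fuse_scales(maps):
    """Fuse per-scale saliency maps; plain averaging is assumed here."""
    return np.mean(np.stack(maps, axis=0), axis=0)

if __name__ == "__main__":
    video = np.random.rand(4, 48, 64, 3)             # (frames, H, W, RGB)
    per_scale = [saliency_at_scale(video, segment_video(video, s))
                 for s in (4, 8, 16)]                # assumed scale levels
    final_map = fuse_scales(per_scale)
    print(final_map.shape)                           # -> (4, 48, 64)
```

The per-scale computation and the final fusion are independent steps, so any region-level saliency model (such as the STD+STCRF combination described above) can be substituted for the frame-difference placeholder without changing the overall structure.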
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Type of study: Diagnostic studies
Language: En
Journal: IEEE Trans Image Process
Journal subject: Medical Informatics
Year: 2018
Document type: Article
Country of publication: United States