E2LNet: An Efficient and Effective Lightweight Network for Panoramic Depth Estimation.
Xu, Jiayue; Zhao, Jianping; Li, Hua; Han, Cheng; Xu, Chao.
Affiliation
  • Xu J; School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China.
  • Zhao J; School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China.
  • Li H; School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China.
  • Han C; School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China.
  • Xu C; School of Computer Science and Technology, Changchun University of Science and Technology, Changchun 130022, China.
Sensors (Basel) ; 23(22)2023 Nov 16.
Article in En | MEDLINE | ID: mdl-38005604
ABSTRACT
Monocular panoramic depth estimation has various applications in robotics and autonomous driving due to its ability to perceive the entire field of view. However, panoramic depth estimation faces two significant challenges: global context capture and distortion awareness. In this paper, we propose a new framework for panoramic depth estimation that simultaneously addresses panoramic distortion and extracts global context information, thereby improving estimation performance. Specifically, we introduce an attention mechanism into the multi-scale dilated convolution and adaptively adjust the receptive field size across spatial positions, yielding the adaptive attention dilated convolution module, which effectively perceives distortion. In addition, we design a global scene understanding module that integrates global context information into the feature maps produced by the feature extractor. Finally, we trained and evaluated our model on three benchmark datasets, comprising both virtual and real-world RGB-D panorama datasets. The experimental results show that the proposed method achieves competitive performance, comparable to existing techniques in both quantitative and qualitative evaluations. Furthermore, our method has fewer parameters and greater flexibility, making it a scalable solution for mobile AR.
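The core idea of the adaptive attention dilated convolution module, as described in the abstract, is to run several dilated-convolution branches with different receptive fields and let a learned, position-dependent attention weighting select among them. The following is a minimal NumPy sketch of that general idea only; the function names, the single-channel setting, and the simple intensity-based gating are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    # Naive single-channel 2D dilated convolution with "same" zero padding.
    kh, kw = kernel.shape
    ph, pw = dilation * (kh // 2), dilation * (kw // 2)
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    H, W = x.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(kh):
        for j in range(kw):
            # Taps are spaced `dilation` pixels apart, enlarging the receptive field.
            out += kernel[i, j] * xp[i * dilation: i * dilation + H,
                                     j * dilation: j * dilation + W]
    return out

def adaptive_dilated_block(x, kernels, dilations, gate_w):
    # Multi-scale branches: one dilated convolution per (kernel, dilation) pair.
    branches = np.stack([dilated_conv2d(x, k, d)
                         for k, d in zip(kernels, dilations)])   # (S, H, W)
    # Per-pixel softmax attention over the S scales; here gated by local
    # intensity via `gate_w` as a stand-in for a learned attention branch.
    logits = gate_w[:, None, None] * x[None, :, :]
    att = np.exp(logits - logits.max(axis=0, keepdims=True))
    att /= att.sum(axis=0, keepdims=True)
    # Attention-weighted fusion: each position mixes its own blend of scales,
    # i.e., an adaptively chosen receptive field size.
    return (att * branches).sum(axis=0)
```

In the real module the attention weights would come from a learned sub-network and operate on multi-channel feature maps, but the fusion pattern, per-position softmax over parallel dilated branches, is the same.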
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Language: En Journal: Sensors (Basel) Publication year: 2023 Document type: Article Affiliation country: China