Unsupervised 3D Reconstruction with Multi-Measure and High-Resolution Loss.
Zheng, Yijie; Luo, Jianxin; Chen, Weiwei; Zhang, Yanyan; Sun, Haixun; Pan, Zhisong.
Affiliation
  • Zheng Y; College of Command and Control Engineering, Army Engineering University of PLA, Nanjing 210007, China.
  • Luo J; College of Command and Control Engineering, Army Engineering University of PLA, Nanjing 210007, China.
  • Chen W; College of Command and Control Engineering, Army Engineering University of PLA, Nanjing 210007, China.
  • Zhang Y; College of Command and Control Engineering, Army Engineering University of PLA, Nanjing 210007, China.
  • Sun H; College of Command and Control Engineering, Army Engineering University of PLA, Nanjing 210007, China.
  • Pan Z; College of Command and Control Engineering, Army Engineering University of PLA, Nanjing 210007, China.
Sensors (Basel) ; 23(1)2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36616737
Multi-view 3D reconstruction based on deep learning is developing rapidly, and unsupervised learning has become a research hotspot because it requires no ground-truth labels. Current unsupervised methods mainly use a 3D CNN to regularize the cost volume and regress image depth, which leads to high memory consumption and long computing times. In this paper, we propose Unsup_patchmatchnet, an end-to-end unsupervised multi-view 3D reconstruction framework based on PatchMatch that dramatically reduces memory consumption and computing time. We propose a feature point consistency loss and combine it with other self-supervised signals, such as photometric consistency loss and semantic consistency loss, into a multi-measure loss function. We also propose a high-resolution loss that improves the reconstruction of high-resolution images. Experiments show that, compared with networks using the 3D CNN approach, memory usage is reduced by 80% and running time by more than 50%, while the overall error of the reconstructed 3D point cloud is only 0.501 mm, outperforming most current unsupervised multi-view 3D reconstruction networks. Tests on additional datasets further verify that the network generalizes well.
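The following is a minimal sketch, not the authors' code, of a photometric consistency loss of the kind the abstract names as one of the self-supervised signals: the source image is warped into the reference view through the predicted depth map and known camera parameters, and the warped result is compared with the reference image. The function name photometric_consistency_loss, the tensor shapes, the use of a plain masked L1 error, and all parameter names are illustrative assumptions.

import torch
import torch.nn.functional as F

def photometric_consistency_loss(ref_img, src_img, depth, K_ref, K_src, T_ref2src):
    # ref_img, src_img : (B, 3, H, W) images
    # depth            : (B, 1, H, W) predicted reference-view depth
    # K_ref, K_src     : (B, 3, 3) camera intrinsics
    # T_ref2src        : (B, 4, 4) relative pose (reference -> source)
    B, _, H, W = ref_img.shape
    device = ref_img.device

    # Pixel grid of the reference view in homogeneous coordinates: (B, 3, H*W).
    y, x = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([x, y, torch.ones_like(x)], dim=0).view(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3D points in the reference camera, then move them into the source frame.
    cam_pts = torch.linalg.inv(K_ref) @ pix * depth.view(B, 1, -1)
    cam_pts_h = torch.cat([cam_pts, torch.ones(B, 1, H * W, device=device)], dim=1)
    src_pts = (T_ref2src @ cam_pts_h)[:, :3]

    # Project into the source image and normalize coordinates to [-1, 1] for grid_sample.
    proj = K_src @ src_pts
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    u = 2.0 * uv[:, 0] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)

    # Sample the source image at the projected locations (differentiable warping).
    warped = F.grid_sample(src_img, grid, align_corners=True)

    # Penalize only pixels whose projection lands inside the source image.
    valid = (grid.abs() <= 1).all(dim=-1).unsqueeze(1).float()
    return (valid * (warped - ref_img).abs()).sum() / valid.sum().clamp(min=1.0)

In practice such a term would be one component of the multi-measure loss described above, weighted and summed with the semantic, feature point, and high-resolution terms; the weighting scheme is not specified in this record.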
Keywords

Full text: 1 Collections: 01-international Database: MEDLINE Language: English Journal: Sensors (Basel) Year of publication: 2022 Document type: Article Country of affiliation: China