Self-supervised monocular depth and ego-motion estimation in endoscopy: Appearance flow to the rescue.
Shao, Shuwei; Pei, Zhongcai; Chen, Weihai; Zhu, Wentao; Wu, Xingming; Sun, Dianmin; Zhang, Baochang.
Affiliation
  • Shao S; School of Automation Science and Electrical Engineering, Beihang University, Beijing, China.
  • Pei Z; School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China.
  • Chen W; School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China. Electronic address: whchen@buaa.edu.cn.
  • Zhu W; Kuaishou Technology, USA.
  • Wu X; School of Automation Science and Electrical Engineering, Beihang University, Beijing, China.
  • Sun D; Shandong Cancer Hospital Affiliated to Shandong University, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China.
  • Zhang B; Institute of Artificial Intelligence, Beihang University, Beijing, China. Electronic address: bczhang@buaa.edu.cn.
Med Image Anal; 77: 102338, 2022 Apr.
Article in En | MEDLINE | ID: mdl-35016079
Recently, self-supervised learning technology has been applied to calculate depth and ego-motion from monocular videos, achieving remarkable performance in autonomous driving scenarios. One widely adopted assumption of depth and ego-motion self-supervised learning is that the image brightness remains constant within nearby frames. Unfortunately, the endoscopic scene does not meet this assumption because there are severe brightness fluctuations induced by illumination variations, non-Lambertian reflections and interreflections during data collection, and these brightness fluctuations inevitably deteriorate the depth and ego-motion estimation accuracy. In this work, we introduce a novel concept referred to as appearance flow to address the brightness inconsistency problem. The appearance flow takes into consideration any variations in the brightness pattern and enables us to develop a generalized dynamic image constraint. Furthermore, we build a unified self-supervised framework to estimate monocular depth and ego-motion simultaneously in endoscopic scenes, which comprises a structure module, a motion module, an appearance module and a correspondence module, to accurately reconstruct the appearance and calibrate the image brightness. Extensive experiments are conducted on the SCARED dataset and EndoSLAM dataset, and the proposed unified framework exceeds other self-supervised approaches by a large margin. To validate our framework's generalization ability on different patients and cameras, we train our model on SCARED but test it on the SERV-CT and Hamlyn datasets without any fine-tuning, and the superior results reveal its strong generalization ability. Code is available at: https://github.com/ShuweiShao/AF-SfMLearner.
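To make the brightness-calibration idea concrete, below is a minimal, hedged sketch of how a photometric loss can be relaxed once a per-pixel "appearance flow" is predicted. It is an illustrative assumption of the general mechanism described in the abstract (additive brightness calibration of the target before comparing it with the depth-and-pose-warped source frame), not the authors' implementation; their code is at https://github.com/ShuweiShao/AF-SfMLearner, and all names and tensor shapes here are made up for the example.

```python
# Hedged sketch (PyTorch): relaxing the brightness-constancy assumption
# with a predicted per-pixel appearance (brightness) adjustment.
# Illustrative only; not the authors' released code.

import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM over 3x3 neighborhoods, in the form commonly used
    by self-supervised depth losses (an assumption, not taken from the paper)."""
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def calibrated_photometric_loss(target, warped_source, appearance_flow, alpha=0.85):
    """Photometric loss after brightness calibration.

    target:          (B, 3, H, W) target frame I_t
    warped_source:   (B, 3, H, W) source frame warped into the target view
                     using the predicted depth and ego-motion (standard view synthesis)
    appearance_flow: (B, 3, H, W) predicted per-pixel brightness offset
                     (illustrative choice: additive calibration of the target)
    """
    # Adjust the target's brightness so the comparison no longer assumes
    # constant illumination between nearby frames.
    calibrated_target = target + appearance_flow
    l1 = (calibrated_target - warped_source).abs()
    loss = alpha * ssim(calibrated_target, warped_source) + (1 - alpha) * l1
    return loss.mean()
```

Under this sketch, the appearance module would supply `appearance_flow`, while the structure and motion modules supply the depth and pose used to produce `warped_source`; with `appearance_flow` set to zero the loss reduces to the usual brightness-constancy photometric loss.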
Subjects
Keywords

Full text: 1 Database: MEDLINE Main subject: Gastrointestinal Endoscopy / Ego Language: En Publication year: 2022 Document type: Article
