Semantic and Geometric-Aware Day-to-Night Image Translation Network.
Bang, Geonkyu; Lee, Jinho; Endo, Yuki; Nishimori, Toshiaki; Nakao, Kenta; Kamijo, Shunsuke.
Affiliation
  • Bang G; Emerging Design and Informatics Course, Graduate School of Interdisciplinary Information Studies, The University of Tokyo, 4 Chome-6-1 Komaba, Meguro-ku, Tokyo 153-0041, Japan.
  • Lee J; Emerging Design and Informatics Course, Graduate School of Interdisciplinary Information Studies, The University of Tokyo, 4 Chome-6-1 Komaba, Meguro-ku, Tokyo 153-0041, Japan.
  • Endo Y; Department of Information and Communication Engineering, Graduate School of Information Science and Technology, The University of Tokyo, 7 Chome-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan.
  • Nishimori T; Mitsubishi Heavy Industries Machinery Systems, Ltd., 1 Chome-1-1 Wadasaki-cho, Hyogo-ku, Kobe 652-8585, Japan.
  • Nakao K; Mitsubishi Heavy Industries, Ltd., 1 Chome-1-1 Wadasaki-cho, Hyogo-ku, Kobe 652-8585, Japan.
  • Kamijo S; Institute of Industrial Science, The University of Tokyo, 4 Chome-6-1 Komaba, Meguro-ku, Tokyo 153-0041, Japan.
Sensors (Basel); 24(4), 2024 Feb 19.
Article in En | MEDLINE | ID: mdl-38400497
ABSTRACT
Autonomous driving systems depend heavily on perception tasks for optimal performance. However, the prevailing datasets focus primarily on scenarios with clear visibility (i.e., sunny daytime). This concentration poses challenges in training deep-learning-based perception models for adverse conditions (e.g., rain and nighttime). In this paper, we propose an unsupervised network for day-to-night image translation that addresses the ill-posed problem of learning the mapping between domains from unpaired data. The proposed method extracts both semantic and geometric information from input images in the form of attention maps. We assume that a multi-task network can extract semantic and geometric information while estimating semantic segmentation and depth maps, respectively. The image-to-image translation network integrates these two distinct types of extracted information as spatial attention maps. We compare our method with related works both qualitatively and quantitatively. The proposed method shows both qualitative and quantitative improvements over related work.
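The paper does not include code; the following is a minimal, hypothetical PyTorch sketch of the fusion step the abstract describes, in which segmentation logits and a depth map from the multi-task branch are projected to spatial attention maps that modulate the translation network's features. All names and hyperparameters here (AttentionGuidedTranslation, feat_channels, the residual fusion) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttentionGuidedTranslation(nn.Module):
    """Hypothetical sketch: fuse semantic and geometric cues as
    spatial attention over image-translation features."""

    def __init__(self, feat_channels=64, num_classes=19):
        super().__init__()
        # Project the multi-task outputs (segmentation logits, depth)
        # to single-channel spatial attention maps in [0, 1].
        self.sem_attn = nn.Sequential(
            nn.Conv2d(num_classes, 1, kernel_size=1), nn.Sigmoid())
        self.geo_attn = nn.Sequential(
            nn.Conv2d(1, 1, kernel_size=1), nn.Sigmoid())
        # Merge the two attended feature maps back to feat_channels.
        self.fuse = nn.Conv2d(2 * feat_channels, feat_channels, kernel_size=1)

    def forward(self, feats, seg_logits, depth):
        a_sem = self.sem_attn(seg_logits)   # (B, 1, H, W)
        a_geo = self.geo_attn(depth)        # (B, 1, H, W)
        # Modulate translation features with each attention map,
        # then fuse; the residual keeps unattended content intact.
        attended = torch.cat([feats * a_sem, feats * a_geo], dim=1)
        return feats + self.fuse(attended)

# Usage with dummy tensors (batch of 2, 64-channel features at 128x128).
feats = torch.randn(2, 64, 128, 128)
seg_logits = torch.randn(2, 19, 128, 128)
depth = torch.rand(2, 1, 128, 128)
out = AttentionGuidedTranslation()(feats, seg_logits, depth)
print(out.shape)  # torch.Size([2, 64, 128, 128])
```

In this sketch the sigmoid-gated maps act as soft spatial masks, so regions the segmentation and depth branches mark as salient (e.g., light sources, road geometry) can be re-weighted during translation; the residual connection is a common design choice to avoid suppressing content where attention is low.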
Full text: 1 Collections: 01-international Database: MEDLINE Language: En Year of publication: 2024 Document type: Article
