Content preserving image translation with texture co-occurrence and spatial self-similarity for texture debiasing and domain adaptation.
Kang, Myeongkyun; Won, Dongkyu; Luna, Miguel; Chikontwe, Philip; Hong, Kyung Soo; Ahn, June Hong; Park, Sang Hyun.
Affiliation
  • Kang M; Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA.
  • Won D; Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea.
  • Luna M; Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea.
  • Chikontwe P; Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea.
  • Hong KS; Division of Pulmonology and Allergy, Department of Internal Medicine, Regional Center for Respiratory Diseases, Yeungnam University Medical Center, College of Medicine, Yeungnam University, Daegu, South Korea.
  • Ahn JH; Division of Pulmonology and Allergy, Department of Internal Medicine, Regional Center for Respiratory Diseases, Yeungnam University Medical Center, College of Medicine, Yeungnam University, Daegu, South Korea.
  • Park SH; Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea. Electronic address: shpark13135@dgist.ac.kr.
Neural Netw; 166: 722-737, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37607423
ABSTRACT
Models trained on datasets with texture bias usually perform poorly on out-of-distribution samples, since biased representations are embedded into the model. Recently, various image translation and debiasing methods have attempted to disentangle texture-biased representations for downstream tasks, but accurately discarding biased features without altering other relevant information remains challenging. In this paper, we propose a novel framework that leverages image translation to generate additional training images from the content of a source image and the texture of a target image with a different bias property, explicitly mitigating texture bias when training a model on a target task. Our model enforces texture similarity between the target and generated images via a texture co-occurrence loss, while preserving content details from source images with a spatial self-similarity loss. The generated and original training images are combined to train classification or segmentation models that are robust to inconsistent texture bias. Evaluation on five classification and two segmentation datasets with known texture biases demonstrates the utility of our method, with significant improvements over recent state-of-the-art methods in all cases.
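To make the two losses named in the abstract concrete, the sketch below is a rough PyTorch illustration only, not the authors' implementation: it assumes feature maps from an arbitrary pretrained encoder, uses pairwise cosine-similarity maps as the spatial self-similarity term, and substitutes a Gram-matrix texture statistic as a simple stand-in for the paper's co-occurrence loss. All function and variable names are hypothetical.

    import torch
    import torch.nn.functional as F

    def spatial_self_similarity_loss(feat_src, feat_gen):
        # Content preservation: match the pairwise cosine-similarity map of
        # spatial feature vectors within the source image to that of the
        # generated image. feat_*: (B, C, H, W) maps from any encoder.
        fs = F.normalize(feat_src.flatten(2), dim=1)   # (B, C, HW), unit channel vectors
        fg = F.normalize(feat_gen.flatten(2), dim=1)
        sim_src = torch.bmm(fs.transpose(1, 2), fs)    # (B, HW, HW) self-similarity
        sim_gen = torch.bmm(fg.transpose(1, 2), fg)
        return F.l1_loss(sim_gen, sim_src)

    def texture_statistics_loss(feat_tgt, feat_gen):
        # Texture matching: compare second-order channel statistics (Gram
        # matrices), a common proxy for texture; the paper's actual
        # co-occurrence loss may differ.
        def gram(f):
            b, c, h, w = f.shape
            f = f.flatten(2)                           # (B, C, HW)
            return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)
        return F.l1_loss(gram(feat_gen), gram(feat_tgt))

    # Example usage on random feature maps:
    src, tgt, gen = (torch.randn(2, 64, 16, 16) for _ in range(3))
    loss = spatial_self_similarity_loss(src, gen) + texture_statistics_loss(tgt, gen)

In training, the two terms pull the generated image in different directions: the self-similarity term ties its spatial layout to the source, while the texture term ties its channel statistics to the target, which is the trade-off the abstract describes.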
Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Neural Netw Journal subject: NEUROLOGY Year: 2023 Document type: Article Country of affiliation: United States
