Reducing bias to source samples for unsupervised domain adaptation.
Ye, Yalan; Huang, Ziwei; Pan, Tongjie; Li, Jingjing; Shen, Heng Tao.
Affiliation
  • Ye Y; School of Computer Science and Engineering, University of Electronic Science and Technology of China, China.
  • Huang Z; School of Computer Science and Engineering, University of Electronic Science and Technology of China, China.
  • Pan T; School of Computer Science and Engineering, University of Electronic Science and Technology of China, China.
  • Li J; School of Computer Science and Engineering, University of Electronic Science and Technology of China, China. Electronic address: lijin117@yeah.net.
  • Shen HT; School of Computer Science and Engineering, University of Electronic Science and Technology of China, China.
Neural Netw ; 141: 61-71, 2021 Sep.
Article in En | MEDLINE | ID: mdl-33866303
Unsupervised Domain Adaptation (UDA) makes predictions on target-domain data while labels are available only in the source domain. Many UDA works focus on finding a common representation of the two domains via domain alignment, assuming that a classifier trained in the source domain will generalize well to the target domain. Thus, most existing UDA methods only minimize the domain discrepancy without enforcing any constraint on the classifier. However, due to the uniqueness of each domain, it is difficult to achieve a perfect common representation, especially when the similarity between the source and target domains is low. As a consequence, the classifier is biased toward source-domain features and makes incorrect predictions on the target domain. To address this issue, we propose a novel approach named reducing bias to source samples for unsupervised domain adaptation (RBDA), which jointly matches the distributions of the two domains and reduces the classifier's bias to source samples. Specifically, RBDA first conditions the adversarial networks on the cross-covariance of the learned features and classifier predictions to match the distributions of the two domains. Then, to reduce the classifier's bias to source samples, RBDA employs three effective mechanisms: a mean teacher model to guide the training of the original model, a regularization term to regularize the model, and an improved cross-entropy loss for better supervised information learning. Comprehensive experiments on several open benchmarks demonstrate that RBDA achieves state-of-the-art results, showing its effectiveness in unsupervised domain adaptation scenarios.
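The abstract describes two of RBDA's core components: conditioning the adversarial discriminator on the cross-covariance of features and classifier predictions, and a mean teacher whose weights track the student model. The paper's implementation is not reproduced here; the sketch below is only a rough illustration of those two ideas using NumPy, with all function names and shapes assumed for illustration.

```python
import numpy as np

def multilinear_conditioning(features, predictions):
    """Form the discriminator input from the per-sample outer product
    of features f (batch, d) and softmax predictions g (batch, c),
    capturing the cross-covariance between them.

    Returns an array of shape (batch, d * c)."""
    b, d = features.shape
    _, c = predictions.shape
    outer = np.einsum('bi,bj->bij', features, predictions)  # (b, d, c)
    return outer.reshape(b, d * c)

def mean_teacher_update(teacher_params, student_params, alpha=0.99):
    """Mean-teacher update: each teacher weight is an exponential
    moving average of the corresponding student weight."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_params, student_params)]
```

In practice the conditioned outer product would be fed to a domain discriminator, and the teacher's (more stable) predictions on target samples would supervise the student; both details are omitted here.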
Subjects
Keywords

Full text: 1 Database: MEDLINE Main subject: Deep Learning Study type: Prognostic_studies Language: En Publication year: 2021 Document type: Article
