Partial Adversarial Domain Adaptation

Zhangjie Cao, Lijia Ma, Mingsheng Long, Jianmin Wang; The European Conference on Computer Vision (ECCV), 2018, pp. 135-150

Abstract


Domain adversarial learning aligns the feature distributions across the source and target domains in a two-player minimax game. Existing domain adversarial networks generally assume an identical label space across domains. In the presence of big data, there is a strong motivation to transfer deep models from existing big domains to unknown small domains. This paper introduces partial domain adaptation as a new domain adaptation scenario, which relaxes the fully shared label space assumption and requires only that the source label space subsumes the target label space. Previous methods typically match the whole source domain to the target domain and are therefore vulnerable to negative transfer in the partial domain adaptation problem, owing to the large mismatch between label spaces. We present Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighting the data of outlier source classes when training both the source classifier and the domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space. Experiments show that PADA exceeds state-of-the-art results on several partial domain adaptation benchmarks.
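The abstract describes down-weighting outlier source classes in both the source classifier and the domain adversary. Below is a minimal PyTorch sketch of that idea, not the authors' released implementation: it assumes the per-class weights are estimated by averaging the classifier's softmax predictions on target data, and that the same weights rescale the source classification loss and the source side of the adversarial loss. All function and variable names are illustrative.

```python
# Hedged sketch of PADA-style class weighting (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used in domain-adversarial training."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back to the feature extractor.
        return -ctx.lambd * grad_output, None

def estimate_class_weights(classifier, target_features):
    """Assumed weighting rule: average target softmax predictions, normalize by max."""
    with torch.no_grad():
        probs = F.softmax(classifier(target_features), dim=1)
        gamma = probs.mean(dim=0)
        return gamma / gamma.max()

def pada_losses(classifier, discriminator, src_feat, src_labels, tgt_feat,
                class_weights, lambd=1.0):
    """Class-weighted source loss plus class-weighted domain-adversarial loss."""
    # Source classifier: per-class weights down-weight outlier source classes.
    cls_loss = F.cross_entropy(classifier(src_feat), src_labels,
                               weight=class_weights)

    # Domain discriminator sees gradient-reversed features from both domains.
    src_dom = discriminator(GradReverse.apply(src_feat, lambd)).squeeze(1)
    tgt_dom = discriminator(GradReverse.apply(tgt_feat, lambd)).squeeze(1)

    # Each source example is weighted by the weight of its own class;
    # target examples are not down-weighted.
    w_src = class_weights[src_labels]
    src_dom_loss = F.binary_cross_entropy_with_logits(
        src_dom, torch.ones_like(src_dom), weight=w_src)
    tgt_dom_loss = F.binary_cross_entropy_with_logits(
        tgt_dom, torch.zeros_like(tgt_dom))

    return cls_loss + src_dom_loss + tgt_dom_loss
```

In a training loop one would periodically refresh the class weights with estimate_class_weights and minimize the returned sum; outlier source classes receive small weights and thus contribute little to either the classifier or the adversary, which is the down-weighting behavior the abstract refers to.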

Related Material


[bibtex]
@InProceedings{Cao_2018_ECCV,
author = {Cao, Zhangjie and Ma, Lijia and Long, Mingsheng and Wang, Jianmin},
title = {Partial Adversarial Domain Adaptation},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}