Minimum Class Confusion for Versatile Domain Adaptation
Ying Jin, Ximei Wang, Mingsheng Long (✉), Jianmin Wang
Domain Adaptation (DA) transfers a learning model from a labeled source domain to an unlabeled target domain that follows a different distribution. There are a variety of DA scenarios depending on label sets and domain configurations, including closed-set and partial-set DA, as well as multi-source and multi-target DA. Notably, existing DA methods are generally designed for one specific scenario and may underperform in scenarios they are not tailored to. In this paper, we propose Versatile Domain Adaptation (VDA), in which one method can handle several different scenarios at the same time. To achieve this, an inductive bias more general than domain alignment should be explored. We delve into a missing piece of existing methods: class confusion, the tendency of a classifier to confuse its predictions between the correct class and ambiguous classes for target examples, which arises across different scenarios. We show that reducing such pair-wise class confusion brings about significant transfer gains. Based on this, we propose a general loss function: Minimum Class Confusion (MCC). It can be characterized as (1) a non-adversarial DA method that does not explicitly perform domain alignment, enjoying fast convergence; and (2) a versatile approach that handles the four existing scenarios, Closed-Set, Partial-Set, Multi-Source, and Multi-Target DA, outperforming the state-of-the-art methods in all of them, especially on the largest and hardest dataset to date (7.3% on DomainNet). Strong performance in the two scenarios proposed in this paper, Multi-Source Partial DA and Multi-Target Partial DA, further proves its versatility. In addition, MCC can serve as a general regularizer that is orthogonal and complementary to a variety of existing DA methods, accelerating convergence and pushing those already competitive methods to a stronger level. Code is available at https://github.com/thuml/MCC.
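The abstract describes MCC as a loss that penalizes pair-wise confusion between classes in the classifier's target-domain predictions. A minimal NumPy sketch of this idea is given below; the temperature scaling and the entropy-based example weighting are details of the full method treated here as assumptions, and the function name and hyperparameter value are illustrative, not taken from the released code.

```python
import numpy as np

def class_confusion_loss(logits, temperature=2.5):
    """Sketch of a pair-wise class-confusion loss on target-domain logits.

    logits: (B, C) array of classifier outputs for B unlabeled target examples.
    Returns a scalar penalizing probability mass shared between class pairs.
    """
    B, C = logits.shape
    # Temperature-scaled softmax probabilities (temperature is a hyperparameter).
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(z)
    probs /= probs.sum(axis=1, keepdims=True)

    # Weight examples by certainty: low-entropy (confident) predictions
    # contribute more to the batch statistics (assumed weighting scheme).
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    weights = 1.0 + np.exp(-entropy)
    weights = B * weights / weights.sum()

    # Class-confusion matrix over the batch: (C, C); entry (j, j') measures
    # how often classes j and j' receive probability mass together.
    confusion = (probs * weights[:, None]).T @ probs

    # Row-normalize, then penalize only the off-diagonal (cross-class) mass.
    confusion /= confusion.sum(axis=1, keepdims=True)
    return (confusion.sum() - np.trace(confusion)) / C
```

On a batch of sharp, unambiguous predictions the off-diagonal mass is near zero, so the loss vanishes; near-uniform predictions drive it toward its maximum, which is the behavior the abstract's "minimum class confusion" objective rewards minimizing.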