Learning to Detect Open Classes for Universal Domain Adaptation

Bo Fu, Zhangjie Cao, Mingsheng Long, Jianmin Wang

Abstract


Universal domain adaptation (UDA) transfers knowledge between domains without any constraint on the label sets, extending the applicability of domain adaptation in the wild. In UDA, both the source and target label sets may hold individual labels not shared by the other domain. A \emph{de facto} challenge of UDA is to classify the target examples in the shared classes against the domain shift. Another, more prominent challenge of UDA is to mark the target examples in the target-individual label set (open classes) as ``unknown''. These two entangled challenges make UDA a highly under-explored problem. Previous work on UDA focuses on classifying data in the shared classes and uses per-class accuracy as the evaluation metric, which is heavily biased toward the accuracy of the shared classes. However, accurately detecting open classes is the mission-critical task for enabling real universal domain adaptation: it further turns the UDA problem into a well-established closed-set domain adaptation problem. Towards accurate open class detection, we propose Calibrated Multiple Uncertainties (CMU) with a novel transferability measure estimated by a mixture of complementary uncertainty quantities: entropy, confidence, and consistency, defined on conditional probabilities calibrated by a multi-classifier ensemble model. The new transferability measure accurately quantifies the inclination of a target example toward the open classes. We also propose a novel evaluation metric called the H-score, which emphasizes the importance of both the accuracy on the shared classes and that on the ``unknown'' class. Empirical results under the UDA setting show that CMU outperforms the state-of-the-art domain adaptation methods on all evaluation metrics, especially by a large margin on the H-score.
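As a rough illustration of how such a transferability measure could be assembled from the three uncertainty quantities the abstract names, consider the sketch below. It is a hedged approximation, not the paper's exact formulation: the normalizations, the plain averaging of the three terms, and the consistency definition via ensemble standard deviation are all assumptions made for illustration.

    import numpy as np

    def transferability(probs):
        """Illustrative transferability score for one target example.

        probs: (n_classifiers, n_shared_classes) array of calibrated
        class-conditional probabilities from a multi-classifier ensemble.
        The combination rule (averaging three normalized quantities)
        is an assumption, not the published CMU formula.
        """
        n_classifiers, n_classes = probs.shape
        mean_p = probs.mean(axis=0)

        # Entropy of the averaged prediction, normalized to [0, 1];
        # high entropy hints at an open ("unknown") class.
        entropy = -(mean_p * np.log(mean_p + 1e-12)).sum() / np.log(n_classes)

        # Confidence: probability assigned to the most likely shared class.
        confidence = mean_p.max()

        # Consistency: agreement across ensemble members, here taken as
        # one minus the mean per-class standard deviation (an assumption).
        consistency = 1.0 - probs.std(axis=0).mean()

        # Higher score -> more likely a shared class; low scores mark
        # candidates for the "unknown" class.
        return ((1.0 - entropy) + confidence + consistency) / 3.0

In use, a target example would be marked ``unknown'' when its score falls below a threshold; where that threshold comes from (e.g., validation data) is left open in this sketch.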

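Consistent with the abstract's description that both accuracies matter, the H-score can be written as the harmonic mean of the accuracy on the shared classes and the accuracy on the ``unknown'' class (the subscript notation here is ours):

    H = \frac{2 \cdot a_{\mathcal{C}} \cdot a_{\bar{\mathcal{C}}}}{a_{\mathcal{C}} + a_{\bar{\mathcal{C}}}}

Because a harmonic mean is dominated by its smaller argument, H is high only when a method does well on both the shared classes and the open-class detection, which is exactly the bias that plain per-class accuracy fails to penalize.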