Relative Contrastive Loss for Unsupervised Representation Learning

Shixiang Tang, Feng Zhu, Lei Bai, Rui Zhao, Wanli Ouyang

Abstract


Defining positive and negative samples is critical for learning the visual variations of semantic classes in an unsupervised manner. Previous methods either construct positive sample pairs as different data augmentations of the same image (i.e., single-instance-positive) or estimate a class prototype by clustering (i.e., prototype-positive); both ignore the relative nature of positive/negative concepts in the real world. Motivated by the ability of humans to recognize relatively positive/negative samples, we propose the Relative Contrastive Loss (RCL) to learn feature representations from relatively positive/negative pairs. RCL not only captures more real-world semantic variations than single-instance-positive methods but also respects positive-negative relativeness, unlike absolute prototype-positive methods. The proposed RCL improves the linear evaluation accuracy of MoCo v3 by +2.0% on ImageNet. Code will be released publicly upon acceptance.
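The abstract contrasts RCL with the standard single-instance-positive objective used by methods such as MoCo, where the only positive for an anchor is an augmented view of the same image and all other images are negatives. A minimal sketch of that baseline (InfoNCE) loss is shown below; the feature vectors and the `temperature` value are illustrative assumptions, and this is not the RCL formulation itself, whose details are given in the paper body.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Single-instance-positive InfoNCE loss for one anchor.

    anchor, positive: feature vectors of two augmentations of the
    same image (the only positive pair under this baseline).
    negatives: list of feature vectors from other images.
    Returns cross-entropy of the positive among all candidates.
    """
    def cosine(a, b):
        # Cosine similarity between two feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Similarity logits: positive first, then all negatives.
    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # -log p(positive | anchor)
```

RCL replaces this absolute positive/negative split with relatively positive/negative pairs, so a sample is not forced to be strictly positive or strictly negative with respect to the anchor.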
