Rethinking Robust Representation Learning under Fine-Grained Noisy Faces

Bingqi Ma, Guanglu Song, Boxiao Liu, Yu Liu

Abstract


Learning robust feature representations from large-scale noisy faces is one of the key challenges in high-performance face recognition. Recent attempts cope with this challenge by alleviating intra-class and inter-class conflicts. However, the unconstrained noise types within each conflict still make it difficult for these algorithms to perform well. To understand this better, we reformulate the noise types of faces in each class in a more fine-grained manner as N-identities|K-clusters|C-conflicts. Different types of noisy faces can be generated by adjusting the values of N, K, and C. Based on this unified formulation, we find that the main barrier to noise-robust representation learning is the flexibility of the algorithm under varying N, K, and C. To address this problem, we propose a new method, named Evolving Sub-centers Learning (ESL), to find optimal hyperplanes that accurately describe the latent space of massive noisy faces. Specifically, we initialize M sub-centers for each class, and ESL encourages them to automatically align to the N-identities|K-clusters|C-conflicts faces via producing, merging, and dropping operations. Images belonging to the same identity can thus effectively converge to the same sub-center, while samples with different identities are pushed apart. We inspect its effectiveness with an elaborate ablation study on synthetic noisy datasets with different N, K, and C. Without any bells and whistles, ESL achieves significant performance gains over state-of-the-art methods on large-scale noisy faces.
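To illustrate the sub-center mechanism described above, here is a minimal Python sketch of one evolution step: samples are assigned to their nearest sub-center by cosine similarity, near-duplicate sub-centers are merged, and sparsely populated ones are dropped. This is not the authors' implementation; the function names, thresholds, and the exact merge/drop heuristics are illustrative assumptions.

```python
import numpy as np

def assign_to_subcenters(features, subcenters):
    """Assign each L2-normalized feature to its nearest sub-center
    (cosine similarity reduces to a dot product on the unit sphere)."""
    sims = features @ subcenters.T          # (num_samples, M)
    return sims.argmax(axis=1)

def evolve_subcenters(features, subcenters, merge_thresh=0.8, drop_count=2):
    """One evolution step (illustrative): merge near-duplicate sub-centers
    and drop sub-centers that too few samples converge to."""
    labels = assign_to_subcenters(features, subcenters)
    keep = []
    for j in range(len(subcenters)):
        # drop: fewer than drop_count samples converge to this sub-center
        if (labels == j).sum() < drop_count:
            continue
        # merge: skip if very similar to an already-kept sub-center
        if any(subcenters[j] @ subcenters[k] > merge_thresh for k in keep):
            continue
        keep.append(j)
    return subcenters[keep]
```

In this sketch, a class whose M initial sub-centers collapse onto the same cluster loses the duplicates to the merge rule, while sub-centers capturing no consistent identity are dropped, so the surviving sub-centers approximate the K clusters actually present in the class.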

Related Material


[pdf] [supplementary material] [DOI]