Posterior Refinement on Metric Matrix Improves Generalization Bound in Metric Learning

Mingda Wang, Canqian Yang, Yi Xu

Abstract


"Deep metric learning (DML) attempts to learn a representation model as well as a metric function with a limited generalization gap, so that the model trained on finite known data can achieve similitude performance on infinite unseen data. While considerable efforts have been made to bound the generalization gap by enhancing the model architecture and training protocol a priori in the training phase, none of them notice that a lightweight posterior refinement operation on the trained metric matrix can significantly improve the generalization ability. In this paper, we attempt to fill up this research gap and theoretically analyze the impact of the refined metric matrix property on the generalization gap. Based on our theory, two principles, which suggest a smaller trace or a smaller Frobenius norm of the refined metric matrix, are proposed as guidance for the posterior refinement operation. Experiments on three benchmark datasets verify the correctness of our principles and demonstrate that a pluggable posterior refinement operation is potential to significantly improve the performance of existing models with negligible extra computation burden."

Related Material


[pdf] [supplementary material] [DOI]