How Severe Is Benchmark-Sensitivity in Video Self-Supervised Learning?

Fida Mohammad Thoker, Hazel Doughty, Piyush Bagad, Cees G. M. Snoek

Abstract


"Despite the recent success of video self-supervised learning models, there is much still to be understood about their generalization capability. In this paper, we investigate how sensitive video self-supervised learning is to the current conventional benchmark and whether methods generalize beyond the canonical evaluation setting. We do this across four different factors of sensitivity: domain, samples, actions and task. Our study which encompasses over 500 experiments on 7 video datasets, 9 self-supervised methods and 6 video understanding tasks, reveals that current benchmarks in video self-supervised learning are not good indicators of generalization along these sensitivity factors. Further, we find that self-supervised methods considerably lag behind vanilla supervised pre-training, especially when domain shift is large and the amount of available downstream samples are low. From our analysis we distill the SEVERE-benchmark, a subset of our experiments, and discuss its implication for evaluating the generalizability of representations obtained by existing and future self-supervised video learning methods. Code is available at https://github.com/fmthoker/SEVERE-BENCHMARK."
