Learn2Augment: Learning to Composite Videos for Data Augmentation in Action Recognition

Shreyank N Gowda, Marcus Rohrbach, Frank Keller, Laura Sevilla-Lara

Abstract


We address the problem of data augmentation for video action recognition. Standard augmentation strategies in video are hand-designed and sample the space of possible augmented data points either at random, without knowing which augmented points will be better, or through heuristics. We propose to learn what makes a “good” video for action recognition and select only high-quality samples for augmentation. In particular, we choose video compositing of a foreground and a background video as the data augmentation process, which results in diverse and realistic new samples. We learn which pairs of videos to augment without having to actually composite them. This reduces the space of possible augmentations, which has two advantages: it saves computational cost and increases the accuracy of the final trained classifier, as the augmented pairs are of higher quality than average. We present experimental results on the entire spectrum of training settings: few-shot, semi-supervised, and fully supervised. We observe consistent improvements across all of them over prior work and baselines on Kinetics, UCF101, and HMDB51, and achieve a new state-of-the-art in settings with limited data. We see improvements of up to 8.6% in the semi-supervised setting.
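The compositing step described above can be sketched as an alpha-matte blend: a segmented foreground actor is pasted frame by frame onto a background clip. This is a minimal illustrative sketch assuming clips stored as NumPy arrays and a precomputed actor mask; it is not the authors' implementation, and all names are hypothetical.

```python
import numpy as np

def composite_videos(foreground, background, masks):
    """Blend a segmented foreground actor onto a background video.

    foreground, background: arrays of shape (T, H, W, 3), values in [0, 1]
    masks: array of shape (T, H, W, 1); 1 where the actor is, 0 elsewhere
    Returns the composited clip of shape (T, H, W, 3).
    """
    return masks * foreground + (1.0 - masks) * background

# Toy example: 4-frame clips of size 8x8
T, H, W = 4, 8, 8
fg = np.ones((T, H, W, 3))      # white "actor" video
bg = np.zeros((T, H, W, 3))     # black background video
mask = np.zeros((T, H, W, 1))
mask[:, 2:6, 2:6, :] = 1.0      # actor occupies a central square
aug = composite_videos(fg, bg, mask)
```

In the paper's setting the interesting part is choosing *which* foreground/background pairs to composite; the blend itself is cheap once a mask is available, which is why avoiding low-quality pairs before compositing saves computation.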
