Appearance Consensus Driven Self-Supervised Human Mesh Recovery

Jogendra Nath Kundu, Mugalodi Rakesh, Varun Jampani, Rahul Mysore Venkatesh, R. Venkatesh Babu

Abstract


We present a self-supervised human mesh recovery framework to infer human pose and shape from monocular images in the absence of any paired supervision. Recent advances have shifted the interest towards directly regressing parameters of a parametric human model by supervising them on large-scale datasets of images with 2D landmark annotations. This limits such approaches from generalizing to samples from unlabeled, in-the-wild environments. Acknowledging this, we propose a novel appearance-consensus-driven self-supervised objective. To effectively disentangle the foreground (FG) human, we rely on image pairs depicting the same person (consistent FG) in varied pose and background (BG), which are obtained from unlabeled wild videos. The proposed FG appearance consistency objective makes use of a novel, differentiable Color-recovery module that obtains vertex colors without involving any trainable appearance extraction network, via an efficient realization of color-picking and reflectional symmetry. We achieve state-of-the-art results on the standard model-based 3D pose estimation benchmarks at comparable supervision levels. Furthermore, the resulting colored mesh prediction opens up the use of our framework for a variety of appearance-related tasks beyond pose and shape estimation, thus establishing our superior generalizability.
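As a rough illustration of the core idea (not the authors' implementation), the PyTorch sketch below shows how differentiable color-picking and an appearance-consistency objective could be realized: project mesh vertices to the image plane, bilinearly sample per-vertex colors, and penalize disagreement between the colors recovered from two images of the same person. The function names, visibility weighting, and tensor shapes are assumptions made for the example; the reflectional-symmetry component is omitted.

```python
# Illustrative sketch only: differentiable "color-picking" of mesh vertex colors
# via bilinear sampling at projected 2D vertex locations, plus an appearance-
# consistency loss across an image pair with consistent foreground.

import torch
import torch.nn.functional as F


def pick_vertex_colors(image, verts_2d):
    """Sample per-vertex RGB colors at projected 2D vertex locations.

    image:    (B, 3, H, W) tensor, values in [0, 1]
    verts_2d: (B, V, 2) vertex projections in normalized coordinates [-1, 1],
              as expected by grid_sample
    returns:  (B, V, 3) per-vertex colors
    """
    # Treat the V vertices as a (V, 1) grid of sample points.
    grid = verts_2d.unsqueeze(2)                                 # (B, V, 1, 2)
    sampled = F.grid_sample(image, grid, align_corners=False)    # (B, 3, V, 1)
    return sampled.squeeze(-1).permute(0, 2, 1)                  # (B, V, 3)


def appearance_consensus_loss(img_a, img_b, v2d_a, v2d_b, vis_a, vis_b):
    """Penalize disagreement between vertex colors recovered from two images
    of the same person (consistent FG, different pose and BG).

    vis_a, vis_b: (B, V) soft visibility weights in [0, 1] per vertex (assumed).
    """
    colors_a = pick_vertex_colors(img_a, v2d_a)                  # (B, V, 3)
    colors_b = pick_vertex_colors(img_b, v2d_b)                  # (B, V, 3)
    w = (vis_a * vis_b).unsqueeze(-1)       # only mutually visible vertices count
    return (w * (colors_a - colors_b).abs()).sum() / (w.sum() * 3 + 1e-8)


if __name__ == "__main__":
    B, V, H, W = 2, 6890, 224, 224          # 6890 vertices as in the SMPL mesh
    img_a, img_b = torch.rand(B, 3, H, W), torch.rand(B, 3, H, W)
    v2d_a, v2d_b = torch.rand(B, V, 2) * 2 - 1, torch.rand(B, V, 2) * 2 - 1
    vis = torch.ones(B, V)
    print(appearance_consensus_loss(img_a, img_b, v2d_a, v2d_b, vis, vis))
```

Because the color-picking is a fixed, differentiable sampling operation rather than a learned appearance network, gradients from the consistency loss flow directly into the pose and shape regressor that produces the vertex projections.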
