Neural Re-Rendering of Humans from a Single Image

Kripasindhu Sarkar, Dushyant Mehta, Weipeng Xu, Vladislav Golyanik, Christian Theobalt

Abstract


Human re-rendering from a single image is a starkly under-constrained problem and state-of-the-art algorithms often exhibit undesired artefacts, such as oversmoothing, unrealistic distortions of the body parts and garments, or implausible changes of the texture. To address these challenges, we propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint given one input image. Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image and easily reposed. Instead of a color-based UV texture map, our approach further employs a learned high-dimensional UV feature map to encode appearance. This rich implicit representation captures detailed appearance variation across poses, viewpoints, person identities and clothing styles better than learned color texture maps. The body model with the rendered feature maps is fed through a neural image-translation network that creates the final rendered color image. The above components are combined in an end-to-end-trained neural network architecture that takes as input a source person image, and images of the parametric body model in the source pose and desired target pose. Experimental evaluation demonstrates that our approach produces higher-quality single-image re-rendering results than existing methods.
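To make the described pipeline concrete, below is a minimal PyTorch-style sketch of its flow: the source image unwrapped into a partial UV texture, a network that completes it into a high-dimensional UV feature map, a warp that renders those features in the target pose, and an image-translation network that produces the final RGB image. The module names (FeatureNet, RenderNet), layer choices, feature dimension, and the render_in_pose helper are illustrative assumptions, not the authors' actual architecture.

    # Hedged sketch of the abstract's pipeline; all architecture details
    # below are placeholder assumptions, not the paper's implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureNet(nn.Module):
        """Maps a partial RGB UV texture (unwrapped from the source image)
        to a complete high-dimensional UV feature map."""
        def __init__(self, feat_dim=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, feat_dim, 3, padding=1),
            )
        def forward(self, partial_uv_texture):
            return self.net(partial_uv_texture)

    class RenderNet(nn.Module):
        """Image-translation network: turns the feature map rendered in
        the target pose into the final color image."""
        def __init__(self, feat_dim=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(feat_dim, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            )
        def forward(self, rendered_features):
            return self.net(rendered_features)

    def render_in_pose(uv_features, uv_coords):
        """Warps UV-space features into image space for the target pose.
        uv_coords holds per-pixel UV lookups in [-1, 1], shape (B, H, W, 2),
        as would be rasterized from the reposed parametric body mesh."""
        return F.grid_sample(uv_features, uv_coords, align_corners=False)

    # Toy end-to-end pass with random tensors standing in for real data.
    B, H, W = 1, 256, 256
    partial_texture = torch.rand(B, 3, H, W)    # source image unwrapped to UV
    target_uv = torch.rand(B, H, W, 2) * 2 - 1  # UV coords from target-pose mesh

    feature_net, render_net = FeatureNet(), RenderNet()
    uv_features = feature_net(partial_texture)            # learned UV feature map
    image_features = render_in_pose(uv_features, target_uv)
    output_rgb = render_net(image_features)               # (B, 3, H, W) re-rendering

In the paper's formulation, all of these components are trained jointly end to end; the sketch only illustrates the forward data flow from source appearance to reposed output.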
