Novel View Synthesis on Unpaired Data by Conditional Deformable Variational Auto-Encoder

Mingyu Yin, Li Sun, Qingli Li

Abstract


Novel view synthesis typically requires paired data from both the source and target views. This paper proposes a view-translation model within the cVAE-GAN framework that enables unpaired training. We design a conditional deformable module (CDM) which uses the source (or target) view condition vector as filters to convolve the feature maps from the main branch, producing several pairs of displacement maps analogous to 2D optical flows. These flows deform the features, and the results are fed into the main branch of the encoder (or decoder) through the deformed feature based normalization module (DFNM), which scales and offsets the main-branch feature maps according to its deformed input from the side branch. With the CDM and DFNM, the encoder outputs a view-irrelevant posterior, while the decoder takes a sample drawn from it to synthesize the reconstructed and view-translated images. To further ensure disentanglement between the view and other factors, we apply adversarial training on the code drawn from the view-irrelevant posterior. Results and an ablation study on the MultiPIE and 3D Chair datasets validate the effectiveness of the whole framework and the designed modules.
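The two operations the abstract describes, deforming feature maps with predicted displacement fields and then modulating the main branch with the deformed result, can be sketched as below. This is an illustrative NumPy sketch, not the authors' implementation: the function names, tensor shapes, nearest-neighbour sampling, and the stand-in elementwise maps replacing learned convolutions are all assumptions.

```python
import numpy as np

def warp_features(feat, flow):
    """Deform a (C, H, W) feature map by a (2, H, W) displacement field,
    in the spirit of the 2D optical-flow-like maps the CDM produces.
    Nearest-neighbour sampling is used here for brevity."""
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    src_y = np.clip(np.round(ys + flow[0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[1]).astype(int), 0, W - 1)
    return feat[:, src_y, src_x]

def dfnm(main, deformed, eps=1e-5):
    """DFNM-style modulation: normalize the main-branch features, then
    scale and offset them with maps computed from the deformed side input.
    tanh and a fixed scaling stand in for the learned convolutions."""
    mu = main.mean(axis=(1, 2), keepdims=True)
    sigma = main.std(axis=(1, 2), keepdims=True)
    normalized = (main - mu) / (sigma + eps)
    gamma = np.tanh(deformed)   # hypothetical stand-in for a learned conv
    beta = 0.1 * deformed       # hypothetical stand-in for a learned conv
    return normalized * (1.0 + gamma) + beta
```

With a zero displacement field, `warp_features` is the identity, and `dfnm` returns a modulated map of the same shape as its main-branch input, which is what lets it slot between encoder (or decoder) layers.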

Related Material


[pdf]