Deep Novel View Synthesis from Colored 3D Point Clouds

Zhenbo Song, Wayne Chen, Dylan Campbell, Hongdong Li

Abstract


We propose a new deep neural network that takes a colored 3D point cloud of a scene and directly synthesizes a photo-realistic image from an arbitrary viewpoint. Key contributions of this work include a deep point feature extraction module, an image synthesis module, and an image refinement module. Our PointEncoder network extracts discriminative features from the point cloud that contain both local and global contextual information about the scene. Next, the multi-level point features are aggregated to form multi-layer feature maps, which are fed into our ImageDecoder network to generate a synthetic RGB image. Finally, the coarse output of the ImageDecoder network is refined by our RefineNet module, which supplies fine details and suppresses unwanted visual artifacts. To generate virtual camera viewpoints in the scene, we rotate and translate the 3D point cloud and synthesize new images from these novel perspectives. We conduct extensive experiments on public datasets to validate our method with respect to the quality of the synthesized views, and significantly outperform the state of the art.
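To make the described pipeline concrete, below is a minimal PyTorch sketch of the stages the abstract outlines: the point cloud is moved into a virtual camera frame, per-point features are extracted, splatted into an image-plane feature map, decoded into a coarse RGB image, and then refined. This is a hypothetical illustration only; the module internals (PointEncoder, ImageDecoder, RefineNet stand-ins) and the splat_features projection are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


def splat_features(xyz_cam, feats, K, hw):
    """Project camera-frame points with intrinsics K and scatter their
    features onto an H x W grid (nearest-pixel splat; ignores occlusion)."""
    H, W = hw
    uvw = xyz_cam @ K.T                                   # (N, 3) homogeneous pixels
    uv = (uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)).round().long()
    valid = (uvw[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < W) \
            & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    C = feats.shape[-1]
    fmap = feats.new_zeros(C, H, W)
    idx = uv[valid, 1] * W + uv[valid, 0]                 # flat pixel indices
    fmap.view(C, -1)[:, idx] = feats[valid].T             # last point wins per pixel
    return fmap.unsqueeze(0)                              # (1, C, H, W)


class NovelViewSynthesizer(nn.Module):
    """Sketch of the encode -> project -> decode -> refine pipeline.
    All submodules are simple placeholders for the paper's networks."""

    def __init__(self, feat_dim=64):
        super().__init__()
        # Stand-in for PointEncoder: shared MLP over xyz + rgb (6 channels).
        self.point_encoder = nn.Sequential(
            nn.Linear(6, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        # Stand-in for ImageDecoder: feature map -> coarse RGB image.
        self.image_decoder = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, 3, 3, padding=1),
        )
        # Stand-in for RefineNet: residual refinement of the coarse image.
        self.refine_net = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, 3, 3, padding=1),
        )

    def forward(self, xyz, rgb, R, t, K, hw=(256, 256)):
        # Rotate and translate the point cloud into the virtual camera frame.
        xyz_cam = xyz @ R.T + t
        feats = self.point_encoder(torch.cat([xyz_cam, rgb], dim=-1))
        fmap = splat_features(xyz_cam, feats, K, hw)
        coarse = self.image_decoder(fmap)
        return coarse + self.refine_net(coarse)           # refined RGB output
```

A hypothetical usage, rendering 10k random colored points placed in front of an identity-pose camera with simple intrinsics:

```python
model = NovelViewSynthesizer()
xyz = torch.randn(10000, 3) + torch.tensor([0.0, 0.0, 4.0])
rgb = torch.rand(10000, 3)
K = torch.tensor([[128.0, 0.0, 128.0], [0.0, 128.0, 128.0], [0.0, 0.0, 1.0]])
image = model(xyz, rgb, torch.eye(3), torch.zeros(3), K)  # (1, 3, 256, 256)
```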
