Manifold Adversarial Learning for Cross-Domain 3D Shape Representation
Deep neural networks (DNNs) for point clouds have achieved superior performance on a range of 3D vision tasks. However, generalization to out-of-distribution 3D point clouds remains challenging for DNNs. Because annotating large-scale point clouds is expensive or even infeasible, it is critical, yet largely unexplored, to design methods that generalize DNN models to unseen point cloud domains without any access to them during training. In this paper, we propose to learn 3D point cloud representations on a seen source domain and generalize to an unseen target domain via adversarial learning. Specifically, we unify several geometric transformations in a manifold-based framework under which the distance between transformations is well-defined. Measured by this distance, adversarial samples are mined to form intermediate domains and retained in an adaptive replay-based memory. We further provide theoretical justification that these intermediate domains reduce the generalization error of DNN models. Experimental results on synthetic-to-real datasets demonstrate that our method outperforms existing 3D deep learning models for domain generalization.