Generative View-Correlation Adaptation for Semi-Supervised Multi-View Learning

Yunyu Liu, Lichen Wang, Yue Bai, Can Qin, Zhengming Ding, Yun Fu

Abstract


Multi-view learning (MVL) explores data collected from multiple sources. It assumes that complementary information across different views can be revealed and exploited to further improve learning performance. There are two main challenges. First, it is difficult to effectively combine the data from different views while still fully preserving the view-specific information. Second, multi-view datasets are usually small, which easily causes general models to overfit. To address these challenges, we propose a novel View-Correlation Adaptation (VCA) framework in a semi-supervised fashion. A semi-supervised data augmentation method is designed to generate extra features and labels based on both labeled and unlabeled samples. In addition, a cross-view adversarial training strategy is proposed to explore the structural information of one view and guide the representation learning of the other view. Moreover, a simple yet effective fusion network is proposed for the late-fusion stage. In our model, all networks are jointly trained in an end-to-end fashion. Extensive experiments demonstrate that our approach is effective and stable compared with other state-of-the-art methods.
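To make the overall pipeline described in the abstract more concrete, below is a minimal PyTorch sketch of a two-view setup with view-specific encoders, a discriminator used for cross-view adversarial alignment, and a small late-fusion classifier trained jointly. This is not the authors' implementation: the class names (ViewEncoder, Discriminator, FusionClassifier), layer sizes, loss weights, and the exact form of the adversarial objective are all illustrative assumptions, and the paper's semi-supervised augmentation step is omitted.

```python
# Minimal two-view sketch (illustrative, not the paper's actual architecture).
import torch
import torch.nn as nn

class ViewEncoder(nn.Module):
    """Maps one view's raw features to a shared-size embedding."""
    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, emb_dim))
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Predicts which view an embedding came from (for cross-view adversarial training)."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, z):
        return self.net(z)

class FusionClassifier(nn.Module):
    """Late fusion: concatenate the two view embeddings and classify."""
    def __init__(self, emb_dim=128, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * emb_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_classes))
    def forward(self, z1, z2):
        return self.net(torch.cat([z1, z2], dim=1))

# One joint training step on a toy labeled mini-batch (x1, x2, y).
enc1, enc2 = ViewEncoder(in_dim=300), ViewEncoder(in_dim=100)
disc, fusion = Discriminator(), FusionClassifier()
opt = torch.optim.Adam(list(enc1.parameters()) + list(enc2.parameters()) +
                       list(fusion.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

x1, x2 = torch.randn(32, 300), torch.randn(32, 100)   # two views of the same samples
y = torch.randint(0, 10, (32,))

# (1) Discriminator step: distinguish view-1 embeddings from view-2 embeddings.
z1, z2 = enc1(x1).detach(), enc2(x2).detach()
d_loss = bce(disc(z1), torch.ones(32, 1)) + bce(disc(z2), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# (2) Encoder/classifier step: fool the discriminator while fitting the labels,
#     so one view's structure guides the other view's representation.
z1, z2 = enc1(x1), enc2(x2)
adv_loss = bce(disc(z2), torch.ones(32, 1))            # push view-2 embeddings toward view 1
cls_loss = ce(fusion(z1, z2), y)
opt.zero_grad()
(cls_loss + 0.1 * adv_loss).backward()
opt.step()
```

The 0.1 weight on the adversarial term and the direction of alignment (view 2 toward view 1) are arbitrary choices for this sketch; in practice such details would follow the paper.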

Related Material


[pdf]