Is It Necessary to Transfer Temporal Knowledge for Domain Adaptive Video Semantic Segmentation?

Xinyi Wu, Zhenyao Wu, Jin Wan, Lili Ju, Song Wang

Abstract


"Video semantic segmentation is a fundamental and important task in computer vision, and it usually requires large-scale labeled data for training deep neural network models. To avoid laborious manual labeling, domain adaptive video segmentation approaches were recently introduced by transferring the knowledge from the source domain of self-labeled simulated videos to the target domain of unlabeled real-world videos. However, it leads to an interesting question -- while video-to-video adaptation is a natural idea, does the source data require to be videos? In this paper, we argue that it is not necessary to transfer temporal knowledge since the temporal continuity of video segmentation in the target domain can be estimated and enforced without reference to videos in the source domain. This motivates a new framework of Image-to-Video Domain Adaptive Semantic Segmentation (I2VDA), where the source domain is a set of images without temporal information. Under this setting, we bridge the domain gap via adversarial training based on only the spatial knowledge, and develop a novel temporal augmentation strategy, through which the temporal consistency in the target domain is well-exploited and learned. In addition, we introduce a new training scheme by leveraging a proxy network to produce pseudo-labels on-the-fly, which is very effective to improve the stability of adversarial training. Experimental results on two synthetic-to-real scenarios show that the proposed I2VDA method can achieve even better performance on video semantic segmentation than existing state-of-the-art video-to-video domain adaption approaches."
