Video Re-localization

Yang Feng, Lin Ma, Wei Liu, Tong Zhang, Jiebo Luo; The European Conference on Computer Vision (ECCV), 2018, pp. 51-66

Abstract


Many methods have been developed to help people efficiently find the video content they want. However, some problems in this area remain unsolved. For example, given a query video and a reference video, how can one accurately localize a segment in the reference video that semantically corresponds to the query video? We define a distinctively new task, namely video re-localization, to address this need. Video re-localization is an important enabling technology with many applications, such as fast seeking in videos, video copy detection, and video surveillance. It is also a challenging research task because the visual appearance of a semantic concept in videos can vary greatly. The first hurdle to clear for the video re-localization task is the lack of existing datasets: collecting pairs of videos with semantic coherence or correspondence and labeling the corresponding segments is labor-intensive. We first exploit and reorganize the videos in ActivityNet to form a new dataset for video re-localization research, which consists of about 10,000 videos of diverse visual appearances associated with localized boundary information. We then propose an innovative cross gated bilinear matching model in which every time-step of the reference video is matched against the attentively weighted query video. The prediction of the starting and ending time is then formulated as a classification problem based on the matching results. Extensive experimental results show that the proposed method outperforms the baseline methods.
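To make the matching-then-classification idea concrete, the following is a minimal sketch in PyTorch. It is not the paper's exact cross gated bilinear matching model: the MatchingSketch class, the dot-product attention, the single nn.Bilinear layer, the tanh nonlinearity, and the four-way per-step labeling (start/end/inside/outside) are illustrative assumptions. Only the overall flow described in the abstract is followed: each reference time-step attends over the query video, is matched via a bilinear form, and boundary prediction is cast as classification over the matching results.

# Minimal sketch (not the authors' exact model): attend over the query for each
# reference time-step, apply a bilinear matching, and classify each step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingSketch(nn.Module):
    def __init__(self, d=256):
        super().__init__()
        # Bilinear matching between a reference step and its query summary.
        self.bilinear = nn.Bilinear(d, d, d)
        # Per-step labels: start / end / inside / outside (illustrative choice).
        self.classifier = nn.Linear(d, 4)

    def forward(self, ref, query):
        # ref: (T_r, d) reference features; query: (T_q, d) query features.
        # Attention of each reference step over all query steps.
        attn = F.softmax(ref @ query.t(), dim=-1)      # (T_r, T_q)
        q_ctx = attn @ query                           # (T_r, d) weighted query
        matched = torch.tanh(self.bilinear(ref, q_ctx))
        return self.classifier(matched)                # (T_r, 4) logits

# Usage with random features standing in for per-clip video features:
model = MatchingSketch(d=256)
logits = model(torch.randn(120, 256), torch.randn(30, 256))
start = logits[:, 0].argmax().item()  # step most likely to be the segment start
end = logits[:, 1].argmax().item()    # step most likely to be the segment end

The argmax over the start and end columns is a simple stand-in for boundary decoding; how the paper aggregates per-step scores into a final segment is not specified in the abstract.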

Related Material


[pdf]
[bibtex]
@InProceedings{Feng_2018_ECCV,
author = {Feng, Yang and Ma, Lin and Liu, Wei and Zhang, Tong and Luo, Jiebo},
title = {Video Re-localization},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018},
pages = {51-66}
}