Audio-Visual Event Localization in Unconstrained Videos

Yapeng Tian, Jing Shi, Bochen Li, Zhiyao Duan, Chenliang Xu; The European Conference on Computer Vision (ECCV), 2018, pp. 247-263

Abstract


In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systematically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle cross-modality localization. Our experiments support the following findings: joint modeling of the auditory and visual modalities outperforms independent modeling; the learned attention captures the semantics of sounding objects; temporal alignment is important for audio-visual fusion; the proposed DMRN is effective in fusing audio-visual features; and strong correlations between the two modalities enable cross-modality localization.
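To make the two fusion ideas named above concrete, the following is a minimal PyTorch sketch of an audio-guided visual attention module and a residual fusion block in the spirit of the DMRN. All module names, dimensions, and exact formulas here are illustrative assumptions, following the common formulations of additive attention and residual fusion rather than the paper's precise architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioGuidedVisualAttention(nn.Module):
    """Pool a spatial visual feature map into a single vector, with
    attention weights conditioned on the audio feature of the same
    video segment (a standard additive-attention formulation)."""

    def __init__(self, visual_dim=512, audio_dim=128, hidden_dim=256):
        super().__init__()
        self.proj_v = nn.Linear(visual_dim, hidden_dim)
        self.proj_a = nn.Linear(audio_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, visual_map, audio_vec):
        # visual_map: (B, N, Dv) -- N spatial locations (e.g. 7x7 = 49)
        # audio_vec:  (B, Da)    -- one audio embedding per segment
        joint = torch.tanh(self.proj_v(visual_map)
                           + self.proj_a(audio_vec).unsqueeze(1))  # (B, N, H)
        attn = F.softmax(self.score(joint), dim=1)                 # (B, N, 1)
        pooled = (attn * visual_map).sum(dim=1)                    # (B, Dv)
        return pooled, attn


class DMRNBlock(nn.Module):
    """One plausible reading of a dual multimodal residual update:
    each modality's feature is refined by a residual term computed
    from the additively fused representation of both modalities."""

    def __init__(self, dim=256):
        super().__init__()
        self.fuse = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim))

    def forward(self, h_v, h_a):
        merged = 0.5 * (h_v + h_a)   # shared multimodal representation
        update = self.fuse(merged)   # residual update from the fusion
        return torch.tanh(h_v + update), torch.tanh(h_a + update)


if __name__ == "__main__":
    B = 2
    att = AudioGuidedVisualAttention()
    v_map = torch.randn(B, 49, 512)   # e.g. a flattened 7x7 CNN feature map
    a_vec = torch.randn(B, 128)       # e.g. a per-second audio embedding
    v_pooled, weights = att(v_map, a_vec)
    print(v_pooled.shape, weights.shape)  # (2, 512), (2, 49, 1)

    dmrn = DMRNBlock(dim=256)
    h_v2, h_a2 = dmrn(torch.randn(B, 256), torch.randn(B, 256))
    print(h_v2.shape, h_a2.shape)         # (2, 256), (2, 256)

The attention weights produced this way can also be visualized over the 7x7 grid to inspect which image regions the audio draws attention to, which is how the "attention captures semantics of sounding objects" finding is typically examined.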

Related Material


[bibtex]
@InProceedings{Tian_2018_ECCV,
  author    = {Tian, Yapeng and Shi, Jing and Li, Bochen and Duan, Zhiyao and Xu, Chenliang},
  title     = {Audio-Visual Event Localization in Unconstrained Videos},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018},
  pages     = {247--263}
}