Visual Memorability for Robotic Interestingness via Unsupervised Online Learning

Chen Wang, Wenshan Wang, Yuheng Qiu, Yafei Hu, Sebastian Scherer

Abstract


In this paper, we explore the problem of interesting scene prediction for mobile robots. This area is currently underexplored but is crucial for many practical applications such as autonomous exploration and decision making. Inspired by industrial demands, we first propose a novel translation-invariant visual memory for recalling and identifying interesting scenes, then design a three-stage architecture of long-term, short-term, and online learning. This enables our system to learn human-like experience, environmental knowledge, and online adaptation, respectively. Our approach achieves much higher accuracy than state-of-the-art algorithms on challenging robotic interestingness datasets.
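To make the idea of a translation-invariant visual memory concrete, here is a minimal, illustrative sketch (not the paper's actual model): a memory that stores 2-D feature maps and matches a new observation against every stored map over all circular spatial shifts via FFT cross-correlation, so a remembered pattern is still recognized when it appears translated. The class name, the novelty score, and the use of plain NumPy feature maps are all assumptions made for this toy example.

```python
import numpy as np

class TranslationInvariantMemory:
    """Toy visual memory: stores 2-D feature maps and scores new
    observations by their best match over all circular spatial shifts.
    This is an illustrative sketch, not the architecture from the paper."""

    def __init__(self):
        self.slots = []  # list of stored feature maps

    @staticmethod
    def _best_shift_similarity(a, b):
        # Circular cross-correlation via FFT: the max over all shifts of
        # the inner product between `a` and a shifted copy of `b`,
        # normalized so an exact (shifted) match scores 1.0.
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
        return corr.max() / denom

    def write(self, feat):
        """Store a feature map in memory."""
        self.slots.append(np.asarray(feat, dtype=float))

    def novelty(self, feat):
        """1 minus the best similarity to any stored map; a high value
        marks the scene as unfamiliar, i.e. potentially interesting."""
        feat = np.asarray(feat, dtype=float)
        if not self.slots:
            return 1.0  # empty memory: everything is novel
        sims = [self._best_shift_similarity(m, feat) for m in self.slots]
        return 1.0 - max(sims)
```

In this sketch, a scene identical to a remembered one but shifted across the image scores a novelty near zero, while an unrelated pattern scores high; thresholding such a score is one simple way an "interestingness" signal could drive online adaptation.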
