Deep Reinforcement Learning with Iterative Shift for Visual Tracking

Liangliang Ren, Xin Yuan, Jiwen Lu, Ming Yang, Jie Zhou ; The European Conference on Computer Vision (ECCV), 2018, pp. 684-700

Abstract


Visual tracking is confronted by the dilemma of locating a target both accurately and efficiently, while making online decisions about whether and how to adapt the appearance model or even restart tracking. In this paper, we propose a deep reinforcement learning with iterative shift (DRL-IS) method for single object tracking, where an actor-critic network is introduced to predict the iterative shifts of object bounding boxes and to evaluate those shifts in order to decide whether to update the object model or re-initialize tracking. Since locating an object is achieved by an iterative shift process, rather than by online classification over many sampled locations, the proposed method is robust to large deformations and abrupt motion, and is computationally efficient since finding a target takes at most 10 shifts. In offline training, the critic network guides the learning of joint decisions on motion estimation and tracking status in an end-to-end manner. On the OTB benchmark sequences with large deformation, the proposed method improves tracking precision by 1.7% and runs about 5 times faster than competing state-of-the-art methods.
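The abstract describes tracking as an iterative shift loop: an actor proposes bounding-box shifts and a critic evaluates them to decide the tracking status. The following is a minimal sketch of that loop's control flow, with purely illustrative actor/critic stand-ins (the function names, the dummy shift rule, and the stopping threshold are assumptions for demonstration, not the paper's networks):

```python
# Sketch of the iterative-shift tracking loop from the abstract.
# The actor/critic below are illustrative dummies, not the paper's networks.

MAX_SHIFTS = 10  # the abstract states finding a target takes at most 10 shifts

def actor_shift(frame, box):
    """Hypothetical actor: predicts a shift (dx, dy, dw, dh) for the box."""
    # Dummy rule: move halfway toward a known target center (demo only).
    tx, ty = frame["target_center"]
    cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
    return (0.5 * (tx - cx), 0.5 * (ty - cy), 0.0, 0.0)

def critic_action(frame, box):
    """Hypothetical critic: picks an action such as 'continue' or 'stop'.

    The real critic also decides between updating the appearance model
    and restarting tracking; this dummy only checks box-center distance.
    """
    tx, ty = frame["target_center"]
    cx, cy = box[0] + box[2] / 2, box[1] + box[3] / 2
    dist = ((tx - cx) ** 2 + (ty - cy) ** 2) ** 0.5
    return "stop" if dist < 1.0 else "continue"

def track_one_frame(frame, prev_box):
    """Iteratively shift prev_box until the critic stops or the budget runs out."""
    box = list(prev_box)
    for _ in range(MAX_SHIFTS):
        if critic_action(frame, box) == "stop":
            break
        dx, dy, dw, dh = actor_shift(frame, box)
        box[0] += dx
        box[1] += dy
        box[2] += dw
        box[3] += dh
    return tuple(box)

frame = {"target_center": (60.0, 40.0)}
print(track_one_frame(frame, (0.0, 0.0, 20.0, 20.0)))
```

With the dummy halving rule above, the box center converges to within the threshold in 6 shifts, well inside the 10-shift budget, which illustrates why this formulation avoids classifying many sampled locations per frame.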

Related Material


[pdf]
[bibtex]
@InProceedings{Ren_2018_ECCV,
author = {Ren, Liangliang and Yuan, Xin and Lu, Jiwen and Yang, Ming and Zhou, Jie},
title = {Deep Reinforcement Learning with Iterative Shift for Visual Tracking},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}