Online Robust Non-negative Dictionary Learning for Visual Tracking (ICCV 2013)

Naiyan Wang, Jingdong Wang and Dit-Yan Yeung.

Abstract

This paper studies the visual tracking problem in video sequences and presents a novel robust sparse tracker under the particle filter framework. In particular, we propose an online robust non-negative dictionary learning algorithm for updating the object templates so that each learned template can capture a distinctive aspect of the tracked object. Another appealing property of this approach is that it can automatically detect and reject occlusion and cluttered background in a principled way. In addition, we propose a new particle representation formulation using the Huber loss function. The advantage is that it can yield robust estimation without using the trivial templates adopted by previous sparse trackers, leading to faster computation. We also reveal the equivalence between this new formulation and the previous one which uses trivial templates. The proposed tracker is empirically compared with state-of-the-art trackers on several challenging video sequences. Both quantitative and qualitative comparisons show that our proposed tracker is superior and more stable.
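For illustration only, below is a minimal sketch (not the authors' released Matlab code) of the idea behind the Huber-loss particle representation: a candidate patch is coded against the learned non-negative templates with a robust loss, so that occluded or cluttered pixels are down-weighted instead of being absorbed by trivial templates. The function name `huber_code`, the threshold `delta`, the iteration count, and the use of iteratively reweighted non-negative least squares are assumptions made here for a self-contained example; the exact formulation, regularization, and dictionary update rule are those given in the paper.

```python
# Hypothetical sketch of Huber-loss particle coding (not the authors' implementation):
# minimize sum_i huber_delta(y_i - (D c)_i) over non-negative c, via iteratively
# reweighted non-negative least squares (IRLS).
import numpy as np
from scipy.optimize import nnls

def huber_code(D, y, delta=0.1, n_iter=20):
    """Estimate non-negative coefficients c for a candidate patch y.

    D : (d, k) matrix whose columns are the learned non-negative templates
    y : (d,)   vectorized candidate patch from a particle
    delta : Huber threshold separating the quadratic and linear regimes
    """
    d, k = D.shape
    w = np.ones(d)                       # per-pixel weights; first pass is plain NNLS
    c = np.zeros(k)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        c, _ = nnls(sw[:, None] * D, sw * y)   # weighted NNLS with reweighted rows
        r = y - D @ c                          # residual; outliers (occlusion, clutter)
        w = np.where(np.abs(r) <= delta,       # keep weight 1 in the quadratic regime,
                     1.0,
                     delta / np.maximum(np.abs(r), 1e-12))  # shrink it for large residuals
    return c

# Toy usage: particles with smaller robust reconstruction error are better candidates.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = np.abs(rng.standard_normal((256, 8)))   # toy non-negative templates
    y = D @ np.abs(rng.standard_normal(8))
    y[:40] += 5.0                               # simulate a partial occlusion
    c = huber_code(D, y)
    print("error on unoccluded pixels:", np.linalg.norm((y - D @ c)[40:]))
```

The down-weighting step mirrors why the Huber formulation is equivalent to the earlier trivial-template (L1) formulation while being cheaper: large residuals are handled by the loss itself rather than by extra identity columns in the dictionary.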

[pdf] [Supplemental Material] [Matlab Code] [Sample Data] [BibTex]


More data can be found at: http://visual-tracking.net/

Related Project

[PRMF Project Page]

Examples of learned templates