Paper Title
Hyper RPCA: Joint Maximum Correntropy Criterion and Laplacian Scale Mixture Modeling On-the-Fly for Moving Object Detection
Paper Authors
Paper Abstract
Moving object detection is critical for automated video analysis in many vision-related tasks, such as surveillance tracking, video compression coding, etc. Robust Principal Component Analysis (RPCA), as one of the most popular moving object modeling methods, aims to separate the temporally varying (i.e., moving) foreground objects from the static background in video, assuming the background frames are low-rank while the foreground is spatially sparse. Classic RPCA imposes sparsity on the foreground component using the l1-norm, and minimizes the modeling error via the l2-norm. We show that such assumptions can be too restrictive in practice, which limits the effectiveness of classic RPCA, especially when processing videos with dynamic background, camera jitter, camouflaged moving objects, etc. In this paper, we propose a novel RPCA-based model, called Hyper RPCA, to detect moving objects on the fly. Different from classic RPCA, the proposed Hyper RPCA jointly applies the maximum correntropy criterion (MCC) to the modeling error, and a Laplacian scale mixture (LSM) model to the foreground objects. Extensive experiments have been conducted, and the results demonstrate that the proposed Hyper RPCA achieves competitive foreground detection performance compared to state-of-the-art algorithms on several well-known benchmark datasets.
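For context, a minimal sketch of the modeling contrast the abstract describes; the paper's exact Hyper RPCA objective is not given in this section, so the symbols and weights below (D, L, S, lambda, mu, sigma, theta) and the precise form of the MCC and LSM terms are illustrative assumptions, not the authors' formulation.

% Classic RPCA: low-rank background L, l1-sparse foreground S,
% l2 (Frobenius) penalty on the modeling error D - L - S.
\min_{L,S}\; \|L\|_{*} + \lambda \|S\|_{1} + \tfrac{\mu}{2}\,\|D - L - S\|_{F}^{2}

% MCC-style fidelity: replace the quadratic error penalty with a
% Gaussian-kernel correntropy term, which down-weights large,
% non-Gaussian errors instead of squaring them.
\max_{L,S}\; \sum_{i,j} \exp\!\Big(-\tfrac{(D - L - S)_{ij}^{2}}{2\sigma^{2}}\Big) \;-\; \text{(regularizers on } L, S\text{)}

% LSM-style foreground prior: each entry S_{ij} = \theta_{ij}\,\alpha_{ij}
% with Laplacian-distributed \alpha_{ij} and a positive hidden scale \theta_{ij};
% MAP estimation then yields a reweighted l1 penalty
% \sum_{i,j} |S_{ij}| / \theta_{ij} in place of the uniform \|S\|_{1}.

Under this reading, MCC relaxes the Gaussian-error assumption behind the l2 fidelity term, while LSM relaxes the uniform-sparsity assumption behind the plain l1 foreground penalty, which is the pairing the abstract credits for robustness to dynamic backgrounds, camera jitter, and camouflaged objects.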