Paper Title
Student Mixture Model Based Visual Servoing
Paper Authors
Paper Abstract
Classical Image-Based Visual Servoing (IBVS) makes use of geometric image features such as points, straight lines and image moments to control a robotic system. Robust extraction and real-time tracking of these features are crucial to the performance of IBVS. Moreover, such features can be unsuitable for real-world applications where it might not be easy to distinguish the target from the rest of the environment. Alternatively, an approach based on complete photometric data can avoid the need for feature extraction, tracking and object detection. In this work, we propose one such probabilistic model-based approach that uses the entire photometric data for the purpose of visual servoing. A novel image modelling method is proposed using the Student Mixture Model (SMM), which is based on the multivariate Student's t-distribution. Consequently, a vision-based control law is formulated as a least-squares minimisation problem. The efficacy of the proposed framework is demonstrated for 2D and 3D positioning tasks, showing favourable error convergence and acceptable camera trajectories. Numerical experiments are also carried out to show robustness to distinct image scenes and partial occlusion.
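To make the two ingredients mentioned in the abstract concrete, the sketch below illustrates (a) evaluating the density of a mixture of multivariate Student's t-distributions, the building block of an SMM over photometric data, and (b) the classical least-squares IBVS velocity command v = -lambda * L^+ * e. This is an illustrative sketch only, not the paper's implementation; the component parameters, interaction matrix and gain used here are hypothetical placeholders.

```python
"""Illustrative sketch, not the paper's method: a Student Mixture Model
density and the classical least-squares IBVS control step."""
import numpy as np
from scipy.stats import multivariate_t


def smm_density(x, weights, means, shapes, dofs):
    """Evaluate the density of a Student Mixture Model at points x.

    x       : (N, d) array of data points (e.g. pixel position + intensity)
    weights : (K,) mixing proportions summing to 1
    means   : list of K mean vectors of length d
    shapes  : list of K (d, d) scale matrices
    dofs    : list of K degrees-of-freedom values (heavier tails for small values)
    """
    x = np.atleast_2d(x)
    density = np.zeros(x.shape[0])
    for w, mu, Sigma, nu in zip(weights, means, shapes, dofs):
        density += w * multivariate_t.pdf(x, loc=mu, shape=Sigma, df=nu)
    return density


def ibvs_control_step(error, interaction_matrix, gain=0.5):
    """Classical least-squares IBVS velocity command: v = -gain * pinv(L) @ e."""
    return -gain * np.linalg.pinv(interaction_matrix) @ error


if __name__ == "__main__":
    # Toy 2D mixture with two heavy-tailed components (placeholder values).
    weights = np.array([0.6, 0.4])
    means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
    shapes = [np.eye(2), 0.5 * np.eye(2)]
    dofs = [3.0, 5.0]
    pts = np.random.randn(5, 2)
    print("SMM density:", smm_density(pts, weights, means, shapes, dofs))

    # Toy control step: 4-dimensional feature error, 4x6 interaction matrix
    # mapping the 6-DoF camera velocity to the feature-error rate.
    e = np.random.randn(4)
    L = np.random.randn(4, 6)
    print("camera velocity:", ibvs_control_step(e, L))
```

The heavy tails of the Student's t components are what give such a model robustness to outlying pixels (e.g. from partial occlusion) compared with a Gaussian mixture; the control step shown is the standard pseudo-inverse solution of the least-squares problem, stated here only for illustration.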