Paper Title
All About Knowledge Graphs for Actions
Paper Authors
Paper Abstract
Current action recognition systems require large amounts of training data to recognize an action. Recent works have explored the zero-shot and few-shot learning paradigms to learn classifiers for unseen categories or for categories with only a few labels. Following similar paradigms in object recognition, these approaches exploit external sources of knowledge (e.g., knowledge graphs from the language domain). However, unlike for objects, it is unclear what the best knowledge representation for actions is. In this paper, we aim to gain a better understanding of knowledge graphs (KGs) that can be utilized for zero-shot and few-shot action recognition. In particular, we study three different construction mechanisms for KGs: action embeddings, action-object embeddings, and visual embeddings. We present an extensive analysis of the impact of different KGs across different experimental setups. Finally, to enable a systematic study of zero-shot and few-shot approaches, we propose an improved evaluation paradigm based on the UCF101, HMDB51, and Charades datasets for knowledge transfer from models trained on Kinetics.
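To make the KG-based transfer concrete, the following is a minimal, illustrative Python sketch of one possible pipeline: building a kNN graph over action classes from their embeddings (in the spirit of the action-embedding KG) and propagating seen-class classifier weights over the graph to estimate classifiers for unseen classes. The function names (build_action_kg, propagate_classifiers), the kNN construction, and the fixed propagation steps are assumptions for illustration only, not the paper's actual method, which would typically learn the propagation with a graph convolutional network.

import numpy as np

def build_action_kg(embeddings, k=3):
    # embeddings: (C, D) array, one row per action class, e.g. word vectors
    # of the action names. Returns a symmetrically normalized adjacency.
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = norm @ norm.T                    # cosine similarity between classes
    C = sim.shape[0]
    adj = np.zeros((C, C))
    for i in range(C):
        # connect each class to its k most similar classes (excluding itself)
        neighbors = np.argsort(-sim[i])[1:k + 1]
        adj[i, neighbors] = 1.0
    adj = np.maximum(adj, adj.T)           # make the graph undirected
    adj += np.eye(C)                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2

def propagate_classifiers(adj_norm, seen_weights, seen_idx, unseen_idx, steps=2):
    # seen_weights: (num_seen, D_vis) classifier weights learned on seen classes.
    # Unseen rows start at zero and are filled by repeated graph propagation;
    # a learned GCN would replace these fixed propagation steps in practice.
    C = adj_norm.shape[0]
    W = np.zeros((C, seen_weights.shape[1]))
    W[seen_idx] = seen_weights
    for _ in range(steps):
        W = adj_norm @ W
        W[seen_idx] = seen_weights         # keep seen-class weights fixed
    return W[unseen_idx]

Under this sketch, zero-shot classification of a test clip amounts to scoring its visual feature against the propagated unseen-class weights; in a few-shot setting the propagated weights could instead initialize classifiers that are then fine-tuned on the few labeled examples.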