Paper Title
Reference Resolution and Context Change in Multimodal Situated Dialogue for Exploring Data Visualizations
Paper Authors
Paper Abstract
Reference resolution, which aims to identify entities being referred to by a speaker, is more complex in real-world settings: new referents may be created by processes the agents engage in and/or be salient only because they belong to the shared physical setting. Our focus is on resolving references to visualizations on a large screen display in multimodal dialogue; crucially, reference resolution is directly involved in the process of creating new visualizations. We describe our annotations for user references to visualizations appearing on a large screen via language and hand gesture, as well as new entity establishment, which results from executing the user request to create a new visualization. We also describe our reference resolution pipeline, which relies on an information-state architecture to maintain dialogue context. We report results on detecting and resolving references, the effectiveness of contextual information in the model, and under-specified requests for creating visualizations. We also experiment with conventional CRF and deep learning/transformer models (BiLSTM-CRF and BERT-CRF) for tagging references in user utterance text. Our results show that transfer learning significantly boosts performance of the deep learning methods, although CRF still outperforms them, suggesting that conventional methods may generalize better for low-resource data.
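To make the tagging task concrete, below is a minimal sketch of BIO-style reference tagging in user utterances with a linear-chain CRF (using sklearn-crfsuite). The feature template, the B-REF/I-REF/O label set, and the toy utterance are illustrative assumptions, not the authors' exact configuration or data.

```python
# Hypothetical sketch: tag tokens of a user utterance with BIO labels marking
# references to visualizations (label set B-REF/I-REF/O is assumed here).
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple per-token features for the CRF (an assumed feature set)."""
    word = tokens[i]
    feats = {
        "bias": 1.0,
        "word.lower": word.lower(),
        "word.is_demonstrative": word.lower() in {"this", "that", "these", "those", "it"},
        "word.is_title": word.istitle(),
    }
    if i > 0:
        feats["prev.lower"] = tokens[i - 1].lower()
    else:
        feats["BOS"] = True
    if i < len(tokens) - 1:
        feats["next.lower"] = tokens[i + 1].lower()
    else:
        feats["EOS"] = True
    return feats

# Toy training example: one utterance containing a referring expression ("that chart").
utterances = [["Move", "that", "chart", "to", "the", "left"]]
labels = [["O", "B-REF", "I-REF", "O", "O", "O"]]

X = [[token_features(u, i) for i in range(len(u))] for u in utterances]
y = labels

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, y)
print(crf.predict(X))  # predicted BIO tags for each token
```

In practice the predicted reference spans would then be passed to the resolution pipeline, which consults the dialogue context (the information state and, per the paper, accompanying hand gestures) to ground each span to a visualization on the display.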