Paper Title

Leveraging Pre-Trained Language Models to Streamline Natural Language Interaction for Self-Tracking

Authors

Young-Ho Kim, Sungdong Kim, Minsuk Chang, Sang-Woo Lee

Abstract

Current natural language interaction for self-tracking tools largely depends on bespoke implementations optimized for a specific tracking theme and data format, which are neither generalizable nor scalable to the tremendous design space of self-tracking. However, training machine learning models in the context of self-tracking is challenging due to the wide variety of tracking topics and data formats. In this paper, we propose a novel NLP task for self-tracking that extracts closed- and open-ended information from a retrospective activity log described as plain text, and a domain-agnostic, GPT-3-based NLU framework that performs this task. The framework augments the prompt using synthetic samples to transform the task into 10-shot learning, addressing the cold-start problem in bootstrapping a new tracking topic. Our preliminary evaluation suggests that our approach significantly outperforms baseline QA models. Going further, we discuss future application domains in which NLP and HCI researchers can collaborate.
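The abstract describes augmenting a GPT-3 prompt with synthetic samples so that extraction from a new tracking topic becomes a few-shot task. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: the field names, synthetic sleep-tracking examples, and prompt layout are all assumptions made up for demonstration.

```python
# Hypothetical sketch of few-shot prompt augmentation for extracting
# structured fields from a plain-text activity log. The examples and
# field names are illustrative assumptions, not the paper's actual data.

def build_prompt(synthetic_examples, new_log, fields):
    """Assemble a few-shot prompt: each synthetic example pairs a
    plain-text activity log with its extracted fields, and the new
    (unlabeled) log is appended last for the model to complete."""
    lines = [f"Extract {', '.join(fields)} from each activity log.", ""]
    for log, extraction in synthetic_examples:
        lines.append(f"Log: {log}")
        for field in fields:
            lines.append(f"{field}: {extraction.get(field, 'N/A')}")
        lines.append("")
    lines.append(f"Log: {new_log}")
    lines.append(f"{fields[0]}:")  # cue the model to begin extraction
    return "\n".join(lines)

# Made-up synthetic samples for a sleep-tracking topic.
examples = [
    ("Went to bed at 11pm, woke up at 6:30am, slept okay.",
     {"bedtime": "11:00 PM", "wake_time": "6:30 AM", "quality": "okay"}),
    ("Crashed around midnight after coffee; up at 8.",
     {"bedtime": "12:00 AM", "wake_time": "8:00 AM", "quality": "N/A"}),
]

prompt = build_prompt(
    examples,
    "In bed by ten, got up at seven feeling rested.",
    ["bedtime", "wake_time", "quality"],
)
print(prompt)
```

In a real pipeline the assembled prompt would be sent to a completion model and the generated continuation parsed back into field/value pairs; scaling the example list to ten pairs yields the 10-shot setting the abstract mentions.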
