Paper Title

Calibrating Structured Output Predictors for Natural Language Processing

Paper Authors

Abhyuday Jagannatha, Hong Yu

Abstract

We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications. It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the system is to be deployed in a safety-critical domain such as healthcare. However, the output space of such structured prediction models is often too large to adapt binary or multi-class calibration methods directly. In this study, we propose a general calibration scheme for output entities of interest in neural network-based structured prediction models. Our proposed method can be used with any binary-class calibration scheme and neural network model. Additionally, we show that our calibration method can also be used as an uncertainty-aware, entity-specific decoding step to improve the performance of the underlying model, with no additional training cost or data requirements. We show that our method outperforms current calibration techniques for named entity recognition, part-of-speech tagging, and question answering. Our decoding step also improves model performance across several tasks and benchmark datasets. Our method improves calibration and model performance on out-of-domain test scenarios as well.
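The abstract describes the approach only at a high level, but the core recipe it names — treating each predicted entity as a binary event ("entity fully correct" vs. "not") and fitting an off-the-shelf binary calibrator on held-out predictions — can be illustrated concretely. Below is a minimal, hypothetical Python sketch, not the paper's actual implementation: it assumes each predicted entity comes with a raw model confidence, and it uses Platt scaling via scikit-learn's LogisticRegression as one instance of the "any binary class calibration scheme" the abstract mentions. The function names (fit_entity_calibrator, calibrated_confidence) are invented for illustration.

```python
# Minimal sketch (assumptions labeled above): entity-level calibration for a
# structured predictor. Each held-out predicted entity is reduced to a binary
# event — correct (1) or incorrect (0) — and a Platt-style binary calibrator
# is fit on the model's raw confidence scores. This is a generic stand-in for
# the paper's forecaster, not the authors' exact method.

import numpy as np
from sklearn.linear_model import LogisticRegression


def fit_entity_calibrator(confidences, correctness):
    """Fit a binary calibrator on held-out entity predictions.

    confidences: raw model scores in (0, 1), one per predicted entity.
    correctness: 1 if the predicted entity exactly matched gold, else 0.
    """
    scores = np.clip(np.asarray(confidences, dtype=float), 1e-6, 1 - 1e-6)
    # Platt scaling: logistic regression on the logit of the raw score.
    logits = np.log(scores / (1.0 - scores)).reshape(-1, 1)
    calibrator = LogisticRegression()
    calibrator.fit(logits, np.asarray(correctness))
    return calibrator


def calibrated_confidence(calibrator, confidences):
    """Map raw entity confidences to calibrated probabilities of correctness."""
    scores = np.clip(np.asarray(confidences, dtype=float), 1e-6, 1 - 1e-6)
    logits = np.log(scores / (1.0 - scores)).reshape(-1, 1)
    return calibrator.predict_proba(logits)[:, 1]


if __name__ == "__main__":
    # Held-out entities: raw confidence and whether the prediction was correct.
    dev_conf = [0.95, 0.90, 0.80, 0.75, 0.60, 0.55, 0.40, 0.30]
    dev_correct = [1, 1, 1, 0, 1, 0, 0, 0]
    cal = fit_entity_calibrator(dev_conf, dev_correct)

    # Calibrated scores for new test-time entities; thresholding these can act
    # as a simple uncertainty-aware filter at decoding time.
    test_conf = [0.92, 0.50, 0.35]
    print(calibrated_confidence(cal, test_conf))
```

Thresholding the calibrated scores, as in the example above, is one simple way to realize the uncertainty-aware, entity-specific decoding step the abstract describes: low-confidence entities can be dropped or re-ranked at test time without retraining the underlying model.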
