Paper Title

Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data

Authors

Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, Chao Zhang

Abstract

Fine-tuned pre-trained language models can suffer from severe miscalibration for both in-distribution and out-of-distribution (OOD) data due to over-parameterization. To mitigate this issue, we propose a regularized fine-tuning method. Our method introduces two types of regularization for better calibration: (1) On-manifold regularization, which generates pseudo on-manifold samples through interpolation within the data manifold. Augmented training with these pseudo samples imposes a smoothness regularization to improve in-distribution calibration. (2) Off-manifold regularization, which encourages the model to output uniform distributions for pseudo off-manifold samples to address the over-confidence issue for OOD data. Our experiments demonstrate that the proposed method outperforms existing calibration methods for text classification in terms of expected calibration error, misclassification detection, and OOD detection on six datasets. Our code can be found at https://github.com/Lingkai-Kong/Calibrated-BERT-Fine-Tuning.
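The two regularizers described in the abstract can be illustrated numerically. Below is a minimal NumPy sketch, assuming a mixup-style convex combination of embedded examples for the on-manifold samples and a cross-entropy-to-uniform penalty for the off-manifold samples; the function names `on_manifold_pair` and `off_manifold_loss` are hypothetical, and the paper's actual method performs the interpolation in BERT's embedding space rather than on raw features.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def on_manifold_pair(x1, y1, x2, y2, lam):
    """On-manifold regularization (sketch): interpolate a pair of
    embedded examples and their one-hot labels with mixing weight lam.
    Training on (x_mix, y_mix) imposes the smoothness regularization
    that improves in-distribution calibration."""
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix

def off_manifold_loss(logits):
    """Off-manifold regularization (sketch): cross-entropy between the
    model's predictive distribution and the uniform distribution over
    classes. Minimized when the model outputs uniform probabilities on
    pseudo off-manifold samples, countering OOD over-confidence."""
    p = softmax(logits)
    return float(np.mean(-np.log(p + 1e-12)))  # (1/K) * sum_k -log p_k, batch-averaged
```

For K classes, `off_manifold_loss` attains its minimum value log K exactly at the uniform distribution, so confident (peaked) predictions on pseudo off-manifold samples are penalized.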
