Paper Title

Probing Pre-Trained Language Models for Cross-Cultural Differences in Values

Paper Authors

Arnav Arora, Lucie-Aimée Kaffee, Isabelle Augenstein

Paper Abstract

Language embeds information about social, cultural, and political values people hold. Prior work has explored social and potentially harmful biases encoded in Pre-Trained Language models (PTLMs). However, there has been no systematic study investigating how values embedded in these models vary across cultures. In this paper, we introduce probes to study which values across cultures are embedded in these models, and whether they align with existing theories and cross-cultural value surveys. We find that PTLMs capture differences in values across cultures, but those only weakly align with established value surveys. We discuss implications of using mis-aligned models in cross-cultural settings, as well as ways of aligning PTLMs with value surveys.
