Paper Title


How people talk about each other: Modeling Generalized Intergroup Bias and Emotion

Authors

Govindarajan, Venkata S, Atwell, Katherine, Sinno, Barea, Alikhani, Malihe, Beaver, David I., Li, Junyi Jessy

Abstract


Current studies of bias in NLP rely mainly on identifying (unwanted or negative) bias towards a specific demographic group. While this has led to progress in recognizing and mitigating negative bias, having a clear notion of the targeted group is necessary but not always practical. In this work we extrapolate to a broader notion of bias, rooted in social science and psychology literature. We move towards predicting interpersonal group relationship (IGR) - modeling the relationship between the speaker and the target in an utterance - using fine-grained interpersonal emotions as an anchor. We build and release a dataset of English tweets by US Congress members annotated for interpersonal emotion -- the first of its kind, and 'found supervision' for IGR labels; our analyses show that subtle emotional signals are indicative of different biases. While humans can perform better than chance at identifying IGR given an utterance, we show that neural models perform much better; furthermore, a shared encoding between IGR and interpersonal perceived emotion enables performance gains in both tasks. Data and code for this paper are available at https://github.com/venkatasg/interpersonal-bias
