Paper Title
INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations
Paper Authors
Paper Abstract
XAI with natural language processing aims to produce human-readable explanations as evidence for AI decision-making, which addresses explainability and transparency. However, from an HCI perspective, current approaches focus only on delivering a single explanation, which fails to account for the diversity of human thought and experience in language. This paper addresses this gap by proposing a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder). Our novel framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation. We conduct intensive experiments with the Transformer architecture on a benchmark dataset, e-SNLI. Our method achieves competitive or better performance against state-of-the-art baseline models on explanation generation (up to 4.7% gain in BLEU) and prediction (up to 4.4% gain in accuracy) in step one; it can also generate multiple diverse explanations in step two.
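
The abstract attributes step two's diversity to a contextual conditional variational autoencoder. As a rough illustration only (not the authors' implementation), the following minimal PyTorch sketch shows how a conditional VAE head can produce diverse decoder conditions by sampling different latent vectors for the same encoded premise-hypothesis pair; the module names, dimensions, and overall wiring are assumptions made for this example.

# Illustrative sketch (not the paper's code): a minimal conditional VAE head.
# Sampling different latent vectors z, conditioned on an encoded
# premise-hypothesis pair, yields distinct decoder inputs, i.e. one possible
# source of diversity in generated explanations (step two of the framework).
# All dimensions and names below are assumptions for illustration.
import torch
import torch.nn as nn

class ConditionalVAEHead(nn.Module):
    def __init__(self, cond_dim: int = 768, latent_dim: int = 64):
        super().__init__()
        # Map the condition (e.g. a Transformer encoding of the
        # premise-hypothesis pair) to a Gaussian over the latent z.
        self.to_mu = nn.Linear(cond_dim, latent_dim)
        self.to_logvar = nn.Linear(cond_dim, latent_dim)
        # Project [condition; z] back to the decoder's hidden size.
        self.to_decoder = nn.Linear(cond_dim + latent_dim, cond_dim)

    def forward(self, cond: torch.Tensor) -> torch.Tensor:
        mu, logvar = self.to_mu(cond), self.to_logvar(cond)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.to_decoder(torch.cat([cond, z], dim=-1))

# Usage: repeated sampling for the same input gives distinct decoder
# conditions, from which a decoder could generate different explanations.
head = ConditionalVAEHead()
cond = torch.randn(1, 768)  # stand-in for an encoded premise-hypothesis pair
diverse_states = [head(cond) for _ in range(3)]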