Paper Title

Subjective Question Answering: Deciphering the inner workings of Transformers in the realm of subjectivity

Author

Muttenthaler, Lukas

Abstract

Understanding subjectivity demands reasoning skills beyond the realm of common knowledge. It requires a machine learning model to process sentiment and to perform opinion mining. In this work, I exploit a recently released dataset for span-selection Question Answering, namely SubjQA. SubjQA is the first QA dataset containing questions that ask for subjective opinions corresponding to review paragraphs from six different domains. Hence, to answer these subjective questions, a learner must extract opinions and process sentiment across various domains, and additionally align the knowledge extracted from a paragraph with the natural language utterances in the corresponding question, which together increase the difficulty of the QA task. The primary goal of this thesis was to investigate the inner workings (i.e., latent representations) of a Transformer-based architecture to contribute to a better understanding of these not yet well understood "black-box" models. The Transformer's hidden representations corresponding to the true answer span are clustered more closely in vector space than the representations corresponding to erroneous predictions. This observation holds across the top three Transformer layers for both objective and subjective questions, and generally increases as a function of layer dimensions. Moreover, the probability of achieving a high cosine similarity among the hidden representations of the true answer span tokens is significantly higher for correct than for incorrect answer span predictions. These results have decisive implications for downstream applications where it is crucial to know why a neural network made a mistake and where, in space and time, the mistake happened (e.g., to automatically predict the correctness of an answer span prediction without the need for labeled data).
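As a rough illustration of the layer-wise analysis described in the abstract, the sketch below measures how tightly the hidden representations of a predicted answer span cluster (via mean pairwise cosine similarity) in the top three Transformer layers. This is a minimal sketch, not the thesis code: the checkpoint, the toy question/context, and the `mean_pairwise_cosine` helper are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch (not the thesis code): inspect how closely the hidden
# representations of an answer span cluster in the top Transformer layers.
# The checkpoint, question/context, and helper below are illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

MODEL = "deepset/roberta-base-squad2"  # assumed extractive-QA checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForQuestionAnswering.from_pretrained(
    MODEL, output_hidden_states=True
)
model.eval()

question = "How comfortable is the mattress?"  # toy subjective question
context = "The mattress feels very soft and I slept comfortably all night."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Predicted answer span from the start/end logits (batch size 1).
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
if end < start:  # crude guard against degenerate predictions
    start, end = end, start

# hidden_states: tuple (embeddings, layer_1, ..., layer_L), each (1, seq_len, dim)
hidden_states = outputs.hidden_states


def mean_pairwise_cosine(vectors: torch.Tensor) -> float:
    """Mean cosine similarity over all distinct token pairs in the span."""
    normed = torch.nn.functional.normalize(vectors, dim=-1)
    sims = normed @ normed.T
    n = sims.shape[0]
    if n < 2:
        return 1.0
    off_diag = sims.sum() - sims.diagonal().sum()
    return float(off_diag / (n * (n - 1)))


# Look at the top three layers, mirroring the abstract's layer-wise observation.
for layer_idx in range(len(hidden_states) - 3, len(hidden_states)):
    span_vectors = hidden_states[layer_idx][0, start:end + 1]
    print(f"layer {layer_idx}: mean pairwise cosine = "
          f"{mean_pairwise_cosine(span_vectors):.3f}")
```

Under the thesis's observation, running such a probe on many examples should show higher within-span cosine similarity for correctly predicted spans than for incorrect ones, which is what would make unsupervised correctness prediction feasible.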
