Paper Title
Probing Semantic Grounding in Language Models of Code with Representational Similarity Analysis
Paper Authors
Paper Abstract
Representational Similarity Analysis (RSA) is a method from cognitive neuroscience for comparing representations from two different sources of data. In this paper, we propose using RSA to probe semantic grounding in language models of code. We probe representations from the CodeBERT model for semantic grounding using data from the IBM CodeNet dataset. Through our experiments, we show that current pre-training methods do not induce semantic grounding in language models of code, and instead focus on optimizing form-based patterns. We also show that even a small amount of fine-tuning on semantically relevant tasks significantly increases semantic grounding in CodeBERT. Our ablations over the input modality to the CodeBERT model show that bimodal inputs (code and natural language) give better semantic grounding and sample efficiency than unimodal inputs (code only) during semantic fine-tuning. Finally, our experiments with semantic perturbations in code reveal that CodeBERT is able to robustly distinguish between semantically correct and incorrect code.
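The core of RSA is a second-order comparison: each source of representations is summarized by a representational dissimilarity matrix (RDM) over the same set of stimuli, and the two RDMs are then rank-correlated, so the representations themselves need not share a dimensionality. A minimal sketch, assuming cosine-distance RDMs and Spearman correlation (common choices in the RSA literature, not necessarily the exact configuration used in this paper):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def rsa_similarity(reps_a, reps_b):
    """Second-order similarity between two representation sets.

    reps_a, reps_b: arrays of shape (n_stimuli, dim_a) and
    (n_stimuli, dim_b) for the SAME n stimuli (e.g. code snippets);
    the feature dimensions may differ, since RSA compares only the
    geometry of pairwise dissimilarities, not the features directly.
    """
    # Build each source's RDM as pairwise cosine distances;
    # pdist returns the condensed upper triangle as a flat vector.
    rdm_a = pdist(np.asarray(reps_a), metric="cosine")
    rdm_b = pdist(np.asarray(reps_b), metric="cosine")
    # Rank-correlate the two RDMs: high rho means both sources place
    # the same stimulus pairs close together and far apart.
    rho, _ = spearmanr(rdm_a, rdm_b)
    return rho
```

In a probing setup like the one described above, `reps_a` could be CodeBERT embeddings of code snippets and `reps_b` a semantic reference (e.g. features derived from program behavior), with the resulting correlation read as a measure of semantic grounding.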