Paper Title
Code Comment Inconsistency Detection with BERT and Longformer
Paper Authors
Paper Abstract
Comments, or natural language descriptions of source code, are standard practice among software developers. By communicating important aspects of the code such as functionality and usage, comments help with software project maintenance. However, when the code is modified without an accompanying correction to the comment, an inconsistency between the comment and code can arise, which opens up the possibility for developer confusion and bugs. In this paper, we propose two models based on BERT (Devlin et al., 2019) and Longformer (Beltagy et al., 2020) to detect such inconsistencies in a natural language inference (NLI) context. Through an evaluation on a previously established corpus of comment-method pairs both during and after code changes, we demonstrate that our models outperform multiple baselines and yield comparable results to the state-of-the-art models that exclude linguistic and lexical features. We further discuss ideas for future research in using pretrained language models for both inconsistency detection and automatic comment updating.
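For illustration only, the sketch below shows how a comment-method pair could be framed as an NLI-style sequence pair for binary inconsistency classification, assuming the HuggingFace transformers API and a generic bert-base-uncased checkpoint. The checkpoint, label mapping, and example inputs are assumptions for the sketch, not the authors' released model or data.

# Minimal sketch: encoding a (comment, code) pair as an NLI-style input
# for binary inconsistency classification with a pretrained BERT encoder.
# Checkpoint and label mapping are illustrative assumptions, not the
# authors' artifacts; the classification head is randomly initialized
# and would need fine-tuning on labeled comment-method pairs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # assumed: 0 = consistent, 1 = inconsistent
)

comment = "Returns the list of active users."
code = "def get_users(self):\n    return self.all_users"

# The comment acts as the premise and the method body as the hypothesis;
# the tokenizer produces [CLS] comment [SEP] code [SEP] automatically.
inputs = tokenizer(comment, code, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print("inconsistent" if pred == 1 else "consistent")

A Longformer checkpoint could be swapped in analogously when the method body exceeds BERT's 512-token input limit, since Longformer supports sequences of up to 4,096 tokens.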