Paper Title

Empirical Study of Transformers for Source Code

Paper Authors

Nadezhda Chirkova, Sergey Troshin

Paper Abstract

Initially developed for natural language processing (NLP), Transformers are now widely used for source code processing, due to the format similarity between source code and text. In contrast to natural language, source code is strictly structured, i.e., it follows the syntax of the programming language. Several recent works develop Transformer modifications for capturing syntactic information in source code. The drawback of these works is that they do not compare to each other and consider different tasks. In this work, we conduct a thorough empirical study of the capabilities of Transformers to utilize syntactic information in different tasks. We consider three tasks (code completion, function naming and bug fixing) and re-implement different syntax-capturing modifications in a unified framework. We show that Transformers are able to make meaningful predictions based purely on syntactic information and underline the best practices of taking the syntactic information into account for improving the performance of the model.
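As a rough illustration of the kind of syntactic input such modifications consume (this is a generic sketch, not the paper's own method), a program's abstract syntax tree can be linearized into a token sequence that a Transformer can process alongside, or instead of, the raw code tokens. The example below uses Python's standard `ast` module and a hypothetical `linearize` helper:

```python
import ast

def linearize(node):
    """Pre-order traversal of a Python AST, emitting node-type tokens
    (plus identifiers for a couple of node kinds). This exposes the
    program's syntactic structure as a flat sequence a Transformer
    could consume; real syntax-capturing models use richer encodings."""
    tokens = [type(node).__name__]
    if isinstance(node, ast.Name):
        tokens.append(node.id)
    elif isinstance(node, ast.FunctionDef):
        tokens.append(node.name)
    for child in ast.iter_child_nodes(node):
        tokens.extend(linearize(child))
    return tokens

tree = ast.parse("def add(a, b):\n    return a + b")
tokens = linearize(tree)
print(tokens)
```

Note that such a sequence carries purely syntactic information (node types and structure), which is exactly the signal the study probes: the paper reports that Transformers can make meaningful predictions from this kind of input alone.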
