Paper Title

Evaluating How Fine-tuning on Bimodal Data Effects Code Generation

Paper Authors

Gabriel Orlanski, Seonhye Yang, Michael Healy

Paper Abstract

Despite the increase in popularity of language models for code generation, it is still unknown how training on bimodal coding forums affects a model's code generation performance and reliability. We, therefore, collect a dataset of over 2.2M StackOverflow questions with answers for fine-tuning. These fine-tuned models have average $pass@k$ improvements of 54.64% and 85.35% on the HumanEval (Chen et al., 2021) and Mostly Basic Program Problems (Austin et al., 2021) tasks, respectively. This regime further decreases the number of generated programs with both syntax and runtime errors. However, we find that at higher temperatures there are significant decreases in the model's ability to generate runnable programs despite higher $pass@k$ scores, underscoring the need for better methods of incorporating such data that mitigate these side effects. The code can be found at https://github.com/gabeorlanski/bimodalcode-generation
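
For context, the $pass@k$ scores cited above are typically computed with the unbiased estimator introduced by Chen et al. (2021); a minimal sketch of that formula, where $n$ is the number of samples generated per problem and $c$ is the number of those samples that pass all unit tests:

$$
\text{pass@}k \;=\; \mathbb{E}_{\text{problems}}\left[\, 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \,\right]
$$

Intuitively, $\binom{n-c}{k}/\binom{n}{k}$ is the probability that a random draw of $k$ samples contains no correct solution, so the expression estimates the probability that at least one of $k$ samples passes.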
