Paper Title

Do Users Write More Insecure Code with AI Assistants?

Paper Authors

Neil Perry, Megha Srivastava, Deepak Kumar, Dan Boneh

Paper Abstract

We conduct the first large-scale user study examining how users interact with an AI code assistant to solve a variety of security-related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g., re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based code assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.
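
As context for the "re-phrasing, adjusting temperature" interactions mentioned in the abstract, below is a minimal, hypothetical sketch of how a participant-facing tool might query a Codex model while letting users re-word their prompt and tune the sampling temperature. It is not the study's released interface; it assumes the legacy openai Python SDK (pre-1.0) Completion API, and the function name, prompts, and parameter defaults are illustrative only.

    # Hypothetical sketch: prompt re-phrasing and temperature adjustment against
    # OpenAI's Codex model, using the legacy openai Python SDK (pre-1.0).
    # Not the study's actual instrument.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder credential

    def generate_code(prompt: str, temperature: float = 0.2, max_tokens: int = 256) -> str:
        """Return a code completion; callers may re-phrase `prompt` or tune `temperature`."""
        response = openai.Completion.create(
            model="code-davinci-002",  # the Codex model referenced in the abstract
            prompt=prompt,
            temperature=temperature,   # lower values yield more deterministic completions
            max_tokens=max_tokens,
        )
        return response.choices[0].text

    # Example: the same task phrased two ways, sampled at different temperatures.
    print(generate_code("Write a Python function that encrypts a string with AES-GCM.", temperature=0.0))
    print(generate_code("# encrypt `plaintext` with AES-GCM using a random 12-byte nonce", temperature=0.7))

In the study, varying exactly these kinds of knobs (prompt wording and temperature) was associated with producing code containing fewer security vulnerabilities.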
