Automated Unit Test Improvement using Large Language Models at Meta

Nadia Alshahwan, Jubin Chheda, Anastasia Finegenova, Beliz Gokkaya, Mark Harman, Inna Harper, Alexandru Marginean, Shubho Sengupta, Eddy Wang

CoRR (2024)

Abstract
This paper describes Meta's TestGen-LLM tool, which uses LLMs to automatically improve existing human-written tests. TestGen-LLM verifies that its generated test classes successfully clear a set of filters that assure measurable improvement over the original test suite, thereby eliminating problems due to LLM hallucination. We describe the deployment of TestGen-LLM at Meta test-a-thons for the Instagram and Facebook platforms. In an evaluation on Reels and Stories products for Instagram, 75% of TestGen-LLM's test cases built correctly, 57% passed reliably, and 25% increased coverage. During Meta's Instagram and Facebook test-a-thons, it improved 11.5% of all classes to which it was applied, with 73% of its recommendations being accepted for production deployment by Meta software engineers. We believe this is the first report on industrial scale deployment of LLM-generated code backed by such assurances of code improvement.
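To make the filtering idea in the abstract concrete, the sketch below shows one way such an "assured improvement" gate could be expressed: a candidate LLM-generated test class is kept only if it builds, passes reliably across repeated runs, and strictly increases coverage over the original suite. This is a minimal illustration under assumed interfaces; the helper names (build_test_class, run_tests, measure_coverage) and the Candidate type are hypothetical and do not represent Meta's actual TestGen-LLM implementation.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    # LLM-generated extension of an existing, human-written test class
    test_class_source: str


def clears_filters(candidate: Candidate,
                   baseline_coverage: float,
                   build_test_class,   # hypothetical: compiles/builds the test class, returns bool
                   run_tests,          # hypothetical: executes the tests once, returns bool
                   measure_coverage,   # hypothetical: coverage achieved with the candidate added
                   runs: int = 5) -> bool:
    """Return True only if the candidate clears every filter, so accepted
    tests represent a measurable improvement over the original suite."""
    # Filter 1: the generated test class must build.
    if not build_test_class(candidate.test_class_source):
        return False
    # Filter 2: it must pass reliably across repeated runs,
    # rejecting flaky or outright failing tests.
    if not all(run_tests(candidate.test_class_source) for _ in range(runs)):
        return False
    # Filter 3: it must measurably increase coverage over the baseline suite.
    return measure_coverage(candidate.test_class_source) > baseline_coverage
```

Because every accepted recommendation has already passed these checks, a reviewing engineer only ever sees tests that build, run, and add coverage, which is the property the paper relies on to rule out hallucinated or regressive output.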