LangProp: A code optimization framework using Large Language Models applied to driving
arXiv (2024)
Abstract
We propose LangProp, a framework for iteratively optimizing code generated by
large language models (LLMs), in both supervised and reinforcement learning
settings. While LLMs can generate sensible coding solutions zero-shot, they are
often sub-optimal. Especially for code generation tasks, it is likely that the
initial code will fail on certain edge cases. LangProp automatically evaluates
the code performance on a dataset of input-output pairs, catches any
exceptions, and feeds the results back to the LLM in the training loop, so that
the LLM can iteratively improve the code it generates. By adopting a metric-
and data-driven training paradigm for this code optimization procedure, one
could easily adapt findings from traditional machine learning techniques such
as imitation learning, DAgger, and reinforcement learning. We show LangProp's
applicability to general domains such as Sudoku and CartPole, as well as
demonstrate the first proof of concept of automated code optimization for
autonomous driving in CARLA. We show that LangProp can generate interpretable
and transparent policies that can be verified and improved in a metric- and
data-driven way. Our code is available at
https://github.com/shuishida/LangProp.
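The training loop described above — generate code with an LLM, score it on input-output pairs, catch exceptions, and feed failures back for the next generation — can be sketched as follows. This is a minimal illustration, not the LangProp API itself: the `solve` entry point, the `llm(prompt, feedback)` interface, and the scoring scheme are all hypothetical simplifications.

```python
# Minimal sketch of a LangProp-style code-optimization loop.
# Hypothetical names: `solve`, `llm(prompt, feedback)` are not the real API.

def evaluate(code: str, dataset):
    """Exec candidate code, run it on (input, expected) pairs,
    and collect a score plus textual failure feedback."""
    namespace = {}
    try:
        exec(code, namespace)  # define `solve` from the generated source
        solve = namespace["solve"]
    except Exception as e:
        return 0.0, [f"compilation error: {e}"]
    correct, failures = 0, []
    for x, expected in dataset:
        try:
            y = solve(x)
            if y == expected:
                correct += 1
            else:
                failures.append(f"solve({x!r}) returned {y!r}, expected {expected!r}")
        except Exception as e:  # runtime errors become feedback, not crashes
            failures.append(f"solve({x!r}) raised {type(e).__name__}: {e}")
    return correct / len(dataset), failures

def train(llm, task_prompt, dataset, iterations=3):
    """Iteratively ask the LLM for code, score it, and feed failures back."""
    code = llm(task_prompt, feedback=None)
    best_code, best_score = code, 0.0
    for _ in range(iterations):
        score, failures = evaluate(code, dataset)
        if score > best_score:
            best_code, best_score = code, score
        if not failures:
            break  # all examples pass; stop early
        code = llm(task_prompt, feedback=failures)  # LLM rewrites the code
    return best_code, best_score
```

In the paper's framing, swapping the dataset sampling strategy in this loop is what lets techniques like imitation learning or DAgger carry over to code optimization.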