Position: Leverage Foundational Models for Black-Box Optimization
arXiv (2024)
Abstract
Undeniably, Large Language Models (LLMs) have stirred an extraordinary wave
of innovation in the machine learning research domain, resulting in substantial
impact across diverse fields such as reinforcement learning, robotics, and
computer vision. Their incorporation has been rapid and transformative, marking
a significant paradigm shift in the field of machine learning research.
However, the field of experimental design, grounded in black-box optimization,
has been much less affected by this paradigm shift, even though integrating
LLMs with optimization presents a landscape ripe for exploration. In
this position paper, we frame the field of black-box optimization around
sequence-based foundation models and organize their relationship with previous
literature. We discuss the most promising ways foundational language models can
revolutionize optimization, which include harnessing the vast wealth of
information encapsulated in free-form text to enrich task comprehension,
utilizing highly flexible sequence models such as Transformers to engineer
superior optimization strategies, and enhancing performance prediction over
previously unseen search spaces.