
User Association and Resource Allocation in Large Language Model Based Mobile Edge Computing System over 6G Wireless Communications

2024 IEEE 99th Vehicular Technology Conference (VTC2024-Spring), 2024

Abstract
In the rapidly evolving landscape of large language models (LLMs) and mobile edge computing for 6G, the need for efficient service delivery to mobile users with constrained computational resources has become paramount. Addressing this, our paper delves into a collaborative framework for model training where user data and model adapters are shared with servers to optimize performance. Within this framework, users initially update the first several layers of the adapters while keeping the remaining layers frozen, leveraging their local datasets. Once this step is complete, these partially trained parameters are transmitted to servers. The servers, equipped with more robust computational capabilities, then update the subsequent layers. After this training, they send the enhanced parameters back to the users. This collaborative training approach ensures that mobile users with limited computational capacities can still benefit from advanced LLM services without being burdened by exhaustive computations. Central to our methodology is the DASHF algorithm, which encapsulates the Dinkelbach algorithm, alternating optimization, semidefinite relaxation (SDR), the Hungarian method, and a pioneering fractional programming technique from a recent IEEE JSAC paper [1]. The crux of DASHF is its capability to reformulate an optimization problem as Quadratically Constrained Quadratic Programming (QCQP) via meticulously crafted transformations, making it solvable by SDR and the Hungarian algorithm. Through extensive simulations, we demonstrate the effectiveness of the DASHF algorithm, offering significant insights for the advancement of collaborative LLM service deployments.
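The user/server split of adapter training described above can be pictured with a short sketch. The following is a minimal PyTorch-style illustration, assuming a toy stack of linear adapter layers and a split index k; the layer count, the SGD hyperparameters, and the helper names (make_adapter, set_trainable, local_step, server_step) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

def make_adapter(n_layers: int = 6, dim: int = 64) -> nn.ModuleList:
    """Toy adapter: a stack of small linear layers (stand-in for LLM adapters)."""
    return nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_layers)])

def set_trainable(adapter: nn.ModuleList, trainable_idx: set) -> None:
    """Freeze every adapter layer except those whose index is in trainable_idx."""
    for i, layer in enumerate(adapter):
        for p in layer.parameters():
            p.requires_grad_(i in trainable_idx)

def local_step(adapter, batch, k):
    """User side: update only the first k layers on a local data batch."""
    set_trainable(adapter, set(range(k)))
    opt = torch.optim.SGD((p for p in adapter.parameters() if p.requires_grad), lr=1e-3)
    x, y = batch
    for layer in adapter:
        x = torch.relu(layer(x))
    loss = nn.functional.mse_loss(x, y)
    loss.backward()
    opt.step()
    # Partially trained parameters transmitted to the server.
    return [layer.state_dict() for layer in adapter[:k]]

def server_step(adapter, uploaded, batch, k):
    """Server side: load the user's partial update, then train layers k..end."""
    for layer, sd in zip(adapter[:k], uploaded):
        layer.load_state_dict(sd)
    set_trainable(adapter, set(range(k, len(adapter))))
    opt = torch.optim.SGD((p for p in adapter.parameters() if p.requires_grad), lr=1e-3)
    x, y = batch
    for layer in adapter:
        x = torch.relu(layer(x))
    loss = nn.functional.mse_loss(x, y)
    loss.backward()
    opt.step()
    # Enhanced parameters sent back to the user.
    return [layer.state_dict() for layer in adapter]

# Toy round: one user-side update followed by one server-side update.
user_adapter = make_adapter()
batch = (torch.randn(8, 64), torch.randn(8, 64))
uploaded = local_step(user_adapter, batch, k=3)
returned = server_step(make_adapter(), uploaded, batch, k=3)
```

The sketch only captures the division of labor (users handle the shallow layers, servers the deeper ones); the paper's actual contribution, the DASHF optimization of user association and resource allocation, sits on top of this training loop.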
Keywords
6G, Large language model, mobile edge computing, wireless communications, resource allocation