
Linguistic Intelligence in Large Language Models for Telecommunications

IEEE International Conference on Communications (2024)

Abstract
Large Language Models (LLMs) have emerged as a significant advancement in the field of Natural Language Processing (NLP), demonstrating remarkable capabilities in language generation and other language-centric tasks. Despite their evaluation across a multitude of analytical and reasoning tasks in various scientific domains, a comprehensive exploration of their knowledge and understanding within the realm of natural language tasks in the telecommunications domain is still needed. This study, therefore, seeks to evaluate the knowledge and understanding capabilities of LLMs within this domain. To achieve this, we conduct an exhaustive zero-shot evaluation of four prominent LLMs: Llama-2, Falcon, Mistral, and Zephyr. These models require fewer resources than ChatGPT, making them suitable for resource-constrained environments. Their performance is compared with state-of-the-art, fine-tuned models. To the best of our knowledge, this is the first work to extensively evaluate and compare the understanding of LLMs across multiple language-centric tasks in this domain. Our evaluation reveals that zero-shot LLMs can achieve performance levels comparable to the current state-of-the-art fine-tuned models. This indicates that pretraining on extensive text corpora equips LLMs with a degree of specialization, even within the telecommunications domain. We also observe that no single LLM consistently outperforms others, and the performance of different LLMs can fluctuate. Although their performance lags behind fine-tuned models, our findings underscore the potential of LLMs as a valuable resource for understanding various aspects of this field that lack large annotated data.
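The zero-shot setup described above (prompting a pretrained model with no in-domain examples, then scoring against gold labels) can be sketched as follows. This is an illustrative sketch only, not the paper's code: the `generate` function is a hypothetical stub standing in for an actual model call (e.g., to Llama-2, Falcon, Mistral, or Zephyr), and the label set and toy data are invented for demonstration.

```python
# Illustrative sketch of zero-shot classification evaluation in the telecom
# domain. The model is stubbed; the labels, prompt wording, and toy data are
# hypothetical, not taken from the paper.

LABELS = ["5G", "security", "optical networks"]

def build_zero_shot_prompt(text: str) -> str:
    """Frame classification as completion, with no in-domain examples."""
    options = ", ".join(LABELS)
    return (
        f"Classify the following telecom abstract into one of: {options}.\n"
        f"Abstract: {text}\n"
        f"Label:"
    )

def generate(prompt: str) -> str:
    """Stub standing in for an LLM: keyword-matches only the abstract part."""
    abstract = prompt.split("Abstract:")[1].split("Label:")[0].lower()
    for label in LABELS:
        if label.lower() in abstract:
            return label
    return LABELS[0]  # fall back to the first label

def zero_shot_accuracy(dataset):
    """Prompt the model for each example and compare with the gold label."""
    correct = sum(
        generate(build_zero_shot_prompt(text)) == gold for text, gold in dataset
    )
    return correct / len(dataset)

toy_data = [
    ("Beamforming strategies for 5G millimeter-wave links.", "5G"),
    ("Intrusion detection for network security appliances.", "security"),
]
print(zero_shot_accuracy(toy_data))  # 1.0 on this toy set
```

In a real evaluation the same accuracy loop would be run per task (classification, summarization, question answering) and compared against fine-tuned baselines, as the abstract outlines.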
Keywords
Large Language Models, Natural Language Processing, Telecommunications, Zero-Shot Evaluation, Classification, Summarization, Question Answering