
Integrating Graphs With Large Language Models: Methods and Prospects

IEEE Intelligent Systems (2024)

Abstract
Large language models (LLMs) such as Generative Pre-trained Transformer 4 (GPT-4) have emerged as frontrunners, showcasing unparalleled prowess in diverse applications, including answering queries, code generation, and more. In parallel, graph-structured data, an intrinsic data type, is pervasive in real-world scenarios. Merging the capabilities of LLMs with graph-structured data has been a topic of keen interest. This article divides such integrations into two predominant categories. The first leverages LLMs for graph learning, where LLMs can not only augment existing graph algorithms but also serve as prediction models for various graph tasks. Conversely, the second category underscores the pivotal role of graphs in advancing LLMs. Mirroring human cognition, which solves complex tasks by structuring reasoning or collaboration as a graph, integrating such structures can significantly boost the performance of LLMs on various complicated tasks. We also discuss and propose open questions on integrating LLMs with graph-structured data to chart future directions for the field.
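A minimal sketch of the abstract's first category (LLMs as prediction models for graph tasks): one common approach is to verbalize a node's local neighborhood into a natural-language prompt that an LLM can answer. The function name, prompt template, and example graph below are illustrative assumptions, not the article's specific method; the actual LLM call is omitted.

```python
def graph_to_prompt(edges, node_texts, target):
    """Serialize a graph neighborhood into a text prompt for an LLM.

    edges: list of (u, v) pairs describing the graph's connectivity.
    node_texts: dict mapping node ids to their textual descriptions.
    target: the node whose label the LLM is asked to predict.
    """
    # Describe the structure edge by edge in plain language.
    lines = [f"Node {u} is connected to node {v}." for u, v in edges]
    # Attach the target node's text attribute, then pose the task.
    lines.append(f"Node {target} is described as: {node_texts[target]}")
    lines.append(f"Question: what is the category of node {target}?")
    return "\n".join(lines)

# Tiny hypothetical citation graph.
edges = [("A", "B"), ("B", "C")]
node_texts = {"B": "a paper about graph neural networks"}
prompt = graph_to_prompt(edges, node_texts, "B")
print(prompt)
```

The resulting prompt would then be sent to an LLM, whose free-text answer is parsed back into a class label, letting the LLM stand in for a trained graph classifier.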
Key words
Merging, Predictive models, Transformers, Prediction algorithms, Cognition, Task analysis, Intelligent systems