Survey Maps Integration of Graph-Based Representations with Large Language Models Across Applications
A recent survey paper, arXiv:2604.15951v1, examines how graph-based representations can be integrated with Large Language Models (LLMs) to strengthen reasoning, retrieval, and structured decision-making. The survey classifies existing techniques by objective, covering reasoning, retrieval, generation, and recommendation, and evaluates four integration strategies: prompting, augmentation, training, and agent-based use. The graph modalities analyzed include knowledge graphs, scene graphs, interaction graphs, causal graphs, and dependency graphs. The survey maps key applications across sectors such as cybersecurity, healthcare, materials science, finance, robotics, and multimodal environments. Its goal is to clarify which graph-LLM integrations suit which scenarios, addressing gaps in current understanding of where each approach works best. The paper also weighs the advantages and drawbacks of the main integration techniques, offering a comprehensive overview of the design considerations involved.
Key facts
- Survey paper arXiv:2604.15951v1 announced as new
- Focuses on integrating graph-based representations with Large Language Models (LLMs)
- Aims to enhance reasoning, retrieval, and structured decision-making in AI
- Categorizes methods by purpose: reasoning, retrieval, generation, recommendation
- Analyzes graph modalities: knowledge graphs, scene graphs, interaction graphs, causal graphs, dependency graphs
- Examines integration strategies: prompting, augmentation, training, agent-based use
- Maps applications across cybersecurity, healthcare, materials science, finance, robotics, multimodal environments
- Seeks to clarify when, why, where, and what types of graph-LLM integrations are most appropriate
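Of the integration strategies listed above, prompting is the most lightweight: a graph is linearized into text and supplied to the LLM as context. The sketch below is a minimal, hypothetical illustration of that idea, not a method from the survey itself; the triple format, function names, and example facts are all assumptions chosen for clarity.

```python
# Hypothetical sketch of the "prompting" integration strategy:
# a small knowledge graph, stored as (head, relation, tail) triples,
# is linearized into natural-language facts and prepended to a question.

def linearize_graph(triples):
    """Render each (head, relation, tail) triple as one sentence per line."""
    return "\n".join(f"{h} {r} {t}." for h, r, t in triples)

def build_prompt(triples, question):
    """Compose a graph-grounded prompt string for an LLM."""
    context = linearize_graph(triples)
    return f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:"

# Toy knowledge graph (invented example data)
kg = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
]

prompt = build_prompt(kg, "What does aspirin treat?")
print(prompt)
```

Augmentation, training, and agent-based strategies go further, respectively injecting retrieved graph context at inference time, encoding graph structure during model training, and letting the model query a graph as a tool.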