Knowledge Augmented LLMs for Complex Problem Solving
- The survey addresses challenges for LLMs in complex problem-solving, including multi-step reasoning, effective domain knowledge integration, and reliable result verification, framing the problem from cognitive science and computational theory perspectives.
- Chain-of-Thought (CoT) reasoning can be enhanced via data synthesis and self-correction, and benefits from sampling more CoT paths: coverage grows nearly log-linearly with the number of samples drawn from the LLM, improving the likelihood that at least one sampled path reaches a correct solution.
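The scaling behavior above can be sketched with a simple model: if each sampled CoT path is independently correct with probability p (an assumed, fixed per-sample success rate, not something the survey specifies), then coverage@k follows a closed form that rises roughly linearly in log k while coverage is still low.

```python
def coverage(p: float, k: int) -> float:
    """Coverage@k: probability that at least one of k i.i.d.
    sampled CoT paths is correct, given per-sample success
    probability p. coverage = 1 - (1 - p)^k."""
    return 1 - (1 - p) ** k

# Doubling k yields near-constant coverage gains at first,
# i.e. approximately log-linear growth in k.
for k in (1, 2, 4, 8, 16, 32):
    print(k, round(coverage(0.1, k), 3))
```

This is a toy independence model; real samples from one LLM are correlated, so empirical coverage curves flatten somewhat faster.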
- Knowledge augmentation addresses LLMs' difficulty in retaining long-tail knowledge, using techniques like RAG (Retrieval-Augmented Generation), GraphRAG, and KAG (Knowledge-Augmented Generation), where knowledge can be retrieved from documents or acquired through human interaction.
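A minimal sketch of the RAG pattern mentioned above, using a toy bag-of-words similarity in place of a learned embedding model (the function names and prompt format here are illustrative assumptions, not any particular framework's API):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real RAG system would use a
    # dense neural encoder plus a vector index.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query, keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Prepend retrieved passages so the LLM can ground its answer
    # in external knowledge rather than parametric memory.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

GraphRAG and KAG follow the same retrieve-then-generate loop but draw the context from a knowledge graph rather than a flat document list.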
- Result verification methodologies include LLM-as-a-judge, symbolic reasoning tools, and experimental validation systems, with symbolic verification using formal methods to ensure correctness and experimental verification involving real-world testing.
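As one concrete instance of the symbolic-verification idea, a candidate answer can be checked by exact substitution rather than by trusting the model's chain of thought. The example below is a hypothetical checker for roots of a quadratic, using exact rational arithmetic so no floating-point tolerance is needed:

```python
from fractions import Fraction

def verify_quadratic_root(a: int, b: int, c: int, candidate) -> bool:
    """Check whether `candidate` exactly satisfies a*x^2 + b*x + c = 0.

    Exact Fraction arithmetic makes the check symbolic in spirit:
    the result is a definite True/False, not an approximation.
    """
    x = Fraction(candidate)
    return Fraction(a) * x * x + Fraction(b) * x + Fraction(c) == 0
```

For instance, if an LLM claims x = 3 solves x^2 - 5x + 6 = 0, `verify_quadratic_root(1, -5, 6, 3)` confirms it, while a wrong claim such as x = 4 is rejected; fuller pipelines delegate this role to theorem provers or SMT solvers.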
- The survey maps challenges and advancements to specific domains, including software engineering, mathematics, data science, and scientific research, highlighting domain-specific complexities and the need for specialized domain knowledge.
- Future research directions emphasize addressing data scarcity, reducing computational costs, improving knowledge representation, and developing more robust evaluation frameworks for complex, open-ended problems, including comparisons with published results, LLM-based evaluators, and empirical experiments.