
Memory in AI: Taxonomy, Operations, and Challenges

  • The paper introduces a unified taxonomy of memory in LLM-based agents, categorizing it into parametric, contextual-structured, and contextual-unstructured types, addressing the lack of a cohesive view in existing surveys.
  • It defines six core memory operations: consolidation, indexing, updating, forgetting, retrieval, and compression, grouping the first four under memory management and the last two under memory utilization (a minimal sketch of all six operations follows this list).
  • The study maps these operations to research areas such as long-term memory, long-context memory, parametric memory modification, and multi-source memory, analyzing over 30,000 top-tier papers (2022-2025) selected via the Relative Citation Index (RCI).
  • Long-term memory is explored in terms of management (consolidation via summarization, indexing via graph-based approaches, updating via selective editing, and forgetting via time-based decay; see the decay sketch below) and utilization (query-, memory-, or event-centered retrieval; static or dynamic integration; and grounded generation for self-reflective reasoning).
  • Long-context memory focuses on parametric efficiency (KV cache optimization through dropping, storage optimization, and selection; see the eviction sketch below) and contextual utilization (context retrieval via graph-based, token-level, or fragment-level methods, and context compression via soft or hard prompt compression).
  • The paper identifies open challenges including spatio-temporal memory, parametric memory retrieval, lifelong learning, brain-inspired memory models, unified memory representation, multi-agent memory, and memory threats & safety.
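
To make the management/utilization split concrete, the sketch below wires the six operations into one toy in-memory store. All names (`AgentMemory`, `consolidate`, and so on) are illustrative placeholders rather than an API from the paper, and keyword-overlap retrieval stands in for real embedding search:

```python
import time
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    text: str
    created_at: float = field(default_factory=time.time)

class AgentMemory:
    """Toy contextual-memory store exposing the survey's six operations."""

    def __init__(self) -> None:
        self.entries: dict[int, MemoryEntry] = {}
        self.index: dict[str, set[int]] = {}   # keyword -> entry ids
        self._next_id = 0

    # --- memory management ---
    def consolidate(self, text: str) -> int:
        """Persist a new experience as a discrete memory entry."""
        i, self._next_id = self._next_id, self._next_id + 1
        self.entries[i] = MemoryEntry(text)
        return i

    def index_entry(self, i: int, keywords: set[str]) -> None:
        """Attach retrieval keys so the entry can be found later."""
        for k in keywords:
            self.index.setdefault(k, set()).add(i)

    def update(self, i: int, text: str) -> None:
        """Overwrite stale content with corrected information."""
        self.entries[i].text = text

    def forget(self, max_age_s: float) -> None:
        """Remove entries older than max_age_s (time-based forgetting)."""
        now = time.time()
        stale = [i for i, e in self.entries.items() if now - e.created_at > max_age_s]
        for i in stale:
            del self.entries[i]
            for ids in self.index.values():
                ids.discard(i)

    # --- memory utilization ---
    def retrieve(self, query_keywords: set[str], k: int = 3) -> list[str]:
        """Rank surviving entries by keyword overlap with the query."""
        hits: Counter = Counter()
        for kw in query_keywords:
            for i in self.index.get(kw, set()):
                if i in self.entries:
                    hits[i] += 1
        return [self.entries[i].text for i, _ in hits.most_common(k)]

    def compress(self, texts: list[str], budget_chars: int = 200) -> str:
        """Naive hard compression: pack retrieved memories into a budget."""
        return " | ".join(texts)[:budget_chars]
```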
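
For the long-term-memory bullet, forgetting via time-based decay is commonly realized as an exponential recency weight blended into the retrieval score; the half-life and blend weight `alpha` below are assumed values for illustration, not figures from the survey:

```python
import time

def recency_weight(created_at: float, now: float,
                   half_life_s: float = 86_400.0) -> float:
    """Exponential time decay: a memory's weight halves every half_life_s."""
    age_s = max(0.0, now - created_at)
    return 0.5 ** (age_s / half_life_s)

def decayed_score(relevance: float, created_at: float, now: float,
                  alpha: float = 0.7) -> float:
    """Blend semantic relevance with recency; entries scoring below a
    threshold can be pruned, implementing time-based forgetting."""
    return alpha * relevance + (1.0 - alpha) * recency_weight(created_at, now)

# Example: a highly relevant week-old memory vs. a weaker fresh one.
now = time.time()
old = decayed_score(relevance=0.9, created_at=now - 7 * 86_400, now=now)
new = decayed_score(relevance=0.6, created_at=now, now=now)
```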
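
For the long-context bullet, the "dropping" family of KV cache optimizations can be pictured as evicting cached positions that recent queries rarely attend to, loosely in the spirit of heavy-hitter eviction methods such as H2O; the shapes and scoring rule here are simplifying assumptions:

```python
import numpy as np

def evict_kv(keys: np.ndarray, values: np.ndarray,
             attn: np.ndarray, budget: int):
    """Keep only the `budget` cached positions with the most attention mass.

    keys, values: (seq_len, head_dim) cached projections for one head
    attn:         (num_recent_queries, seq_len) softmaxed attention rows
    """
    score = attn.sum(axis=0)                     # accumulated attention per position
    keep = np.sort(np.argsort(score)[-budget:])  # top-budget ids, original order kept
    return keys[keep], values[keep], keep

# Example: shrink a 1024-token cache to 256 entries.
rng = np.random.default_rng(0)
k, v = rng.normal(size=(1024, 64)), rng.normal(size=(1024, 64))
a = rng.random((8, 1024))
k_small, v_small, kept = evict_kv(k, v, a, budget=256)
```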