ARTFEED — Contemporary Art Intelligence

ARPM: A Temporal Memory Framework for LLM Persona Consistency

ai-technology · 2026-05-16

A new framework called ARPM addresses persona drift and fact loss in large language models during long-term interactions. ARPM separates static knowledge from dynamic dialogue memory and combines vector retrieval, BM25, RRF fusion, dual-temporal reranking, and chronological evidence reading. Rather than encoding continuity into model weights, it treats continuity as a governance problem. Experiments include a 50-round question-answering setting that compares signal-to-noise ratios of 1:5 and 1:200+.
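To make "chronological evidence reading" concrete, here is a minimal sketch: retrieved memory entries are re-sorted by timestamp before being assembled into the prompt, so the model reads events in the order they occurred rather than in relevance order. All names (`MemoryEntry`, `read_chronologically`) are illustrative assumptions, not identifiers from the ARPM paper.

```python
# Illustrative sketch, not the paper's implementation: sort retrieved
# dialogue-memory entries by timestamp before building prompt evidence.
from dataclasses import dataclass


@dataclass
class MemoryEntry:
    timestamp: str  # ISO 8601, e.g. "2026-05-01T09:00:00"
    text: str


def read_chronologically(retrieved: list[MemoryEntry]) -> str:
    """Return retrieved evidence as one block, ordered oldest-first."""
    ordered = sorted(retrieved, key=lambda e: e.timestamp)
    return "\n".join(f"[{e.timestamp}] {e.text}" for e in ordered)
```

Reading evidence in event order is what lets the model reconcile conflicting facts (the newest statement wins) instead of seeing them shuffled by retrieval score.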

Key facts

  • ARPM is an external temporal memory governance framework for long-term dialogue.
  • It separates static knowledge memory from dynamic dialogue experience memory.
  • Techniques include vector retrieval, BM25, RRF fusion, dual-temporal reranking, and chronological evidence reading.
  • ARPM treats continuity as a traceable, auditable, and transferable governance problem.
  • Experiments include a 50-round question-answering setting.
  • Signal-to-noise ratios of 1:5 and 1:200+ are compared.
  • The framework aims to address fact loss, timeline confusion, persona drift, and reduced stability.
  • It is designed for high-noise knowledge bases, context clearing, and cross-model transfer.
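The techniques listed above include RRF fusion of the lexical (BM25) and dense (vector) retrieval results. The sketch below shows standard reciprocal rank fusion under the usual k=60 constant; the function name and sample rankings are assumptions for illustration, not details from the ARPM paper.

```python
# Sketch of reciprocal rank fusion (RRF): each document's fused score is
# the sum of 1 / (k + rank) over every ranked list it appears in.
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists of document IDs; best fused score first."""
    scores: dict[str, float] = {}
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


bm25_ranking = ["m3", "m1", "m7"]    # hypothetical lexical retrieval order
vector_ranking = ["m3", "m2", "m1"]  # hypothetical dense retrieval order
fused = rrf_fuse([bm25_ranking, vector_ranking])  # ["m3", "m1", "m2", "m7"]
```

Because RRF works on ranks rather than raw scores, it needs no score normalization between the BM25 and vector retrievers, which is why it is a common choice for hybrid retrieval pipelines like the one described here.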

Entities

Institutions

  • arXiv

Sources