LifeAlign: Lifelong Alignment for Large Language Models with Memory-Augmented Focalized Preference Optimization

📰 ArXiv cs.AI

arXiv:2509.17183v2 Announce Type: replace-cross

Abstract: Alignment plays a crucial role in adapting Large Language Models (LLMs) to human preferences on a specific task or domain. Traditional alignment methods suffer from catastrophic forgetting: models lose previously acquired knowledge when adapting to new preferences or domains. We introduce LifeAlign, a novel framework for lifelong alignment that enables LLMs to maintain consistent human preference alignment across sequential learning […]
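The abstract builds on preference optimization, the family of methods LifeAlign's "focalized preference optimization" extends. As a reference point only, here is a minimal sketch of a standard DPO-style preference loss (Rafailov et al.'s Direct Preference Optimization), not LifeAlign's memory-augmented variant; all function and parameter names below are illustrative assumptions:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO-style preference loss for one (chosen, rejected) response pair.

    logp_* are log-probabilities under the policy being trained;
    ref_logp_* are log-probabilities under a frozen reference model.
    beta scales the implicit reward margin. (Illustrative sketch,
    not the LifeAlign objective.)
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response over the rejected one, relative to the reference.
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Negative log-sigmoid of the margin: small when the policy
    # already prefers the chosen response, large otherwise.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

With a zero margin the loss is log 2; as the policy's preference for the chosen response grows beyond the reference model's, the loss shrinks toward zero. Lifelong-alignment methods like LifeAlign aim to optimize such an objective on a sequence of domains without degrading earlier ones.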

Published 8 Apr 2026