Screen, Cache, and Match: A Training-Free Causality-Consistent Reference Frame Framework for Human Animation

arXiv cs.AI

arXiv:2601.22160v2 (replace-cross)

Abstract: Human animation aims to generate temporally coherent and visually consistent videos over long sequences, yet modeling long-range dependencies while preserving per-frame quality remains challenging. Inspired by the human ability to leverage past observations when interpreting ongoing actions, we propose FrameCache, a training-free, causality-consistent reference frame framework. FrameCache explicitly converts historical generation results into

Published 13 Apr 2026