TARAC: Mitigating Hallucination in LVLMs via Temporal Attention Real-time Accumulative Connection

📰 ArXiv cs.AI

arXiv:2504.04099v2 Abstract: Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities, yet they suffer from hallucinations that limit practical deployment. While various mitigation strategies exist, they often incur high computational overhead or require extensive retraining. In this paper, we address the issue of visual attention decay during generation, a key factor contributing to hallucinations. We propose Temporal Attention Real-time Accumulative Connection (TARAC).
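The abstract's core idea, accumulating attention on image tokens across generation steps so that the visual signal does not decay away, can be sketched minimally. The exponential-moving-average update, the `alpha` retention factor, and the re-injection comment below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def accumulate_image_attention(step_attn: np.ndarray,
                               acc: np.ndarray,
                               alpha: float = 0.5) -> np.ndarray:
    """Accumulate per-step attention over image tokens via an EMA.

    `alpha` (retention factor) is an assumed hyperparameter; the paper's
    actual accumulation rule may differ.
    """
    return alpha * acc + (1.0 - alpha) * step_attn

# Toy trace of attention mass on 4 image tokens across decoding steps:
# the model attends strongly early on, then visual attention decays.
steps = [
    np.array([0.40, 0.30, 0.20, 0.10]),  # early step: strong visual grounding
    np.array([0.04, 0.03, 0.02, 0.01]),  # later step: attention decayed 10x
]

acc = np.zeros(4)
for step_attn in steps:
    acc = accumulate_image_attention(step_attn, acc)

# The accumulated signal retains more visual attention than the decayed
# current step; at generation time it could be re-injected, e.g. added
# back onto the attention scores of the image tokens.
print(acc)
```

Here the accumulated vector stays well above the decayed final-step attention, which is the property a real-time accumulative connection would exploit to keep the model visually grounded late in generation.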

Published 14 Apr 2026