Rethinking Token-Level Credit Assignment in RLVR: A Polarity-Entropy Analysis
ArXiv cs.AI
arXiv:2604.11056v1 Announce Type: cross

Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has substantially improved the reasoning ability of Large Language Models (LLMs). However, its sparse outcome-based rewards pose a fundamental credit assignment problem. We analyze this problem through the joint lens of reward polarity and token entropy. Our diagnostic tool, the Four Quadrant Decomposition, isolates token updates by polarity and entropy, and controlled ablations show that reas
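The Four Quadrant Decomposition described above can be sketched as follows. This is a hypothetical illustration based only on the abstract: the paper's exact thresholding rule is not given, so a median-entropy split and a simple sign test on the sequence reward are assumed here; the function name and labels are placeholders.

```python
import numpy as np

def four_quadrant_decomposition(token_entropies, reward, entropy_threshold=None):
    """Assign each token in a rollout to one of four quadrants, crossed by
    the sequence's reward polarity (pos/neg) and the token's entropy
    (high/low). Assumption: tokens are split at the median entropy when
    no explicit threshold is supplied; the paper may use a different rule.
    """
    ent = np.asarray(token_entropies, dtype=float)
    if entropy_threshold is None:
        entropy_threshold = np.median(ent)
    polarity = "pos" if reward > 0 else "neg"
    return [
        f"{polarity}/{'high' if e > entropy_threshold else 'low'}-entropy"
        for e in ent
    ]

# Example: a verified-correct rollout (reward +1) with mixed token entropies.
labels = four_quadrant_decomposition([0.1, 2.3, 0.05, 1.8], reward=1.0)
```

Once tokens are bucketed this way, each quadrant's gradient contribution can be ablated independently (e.g. masked out of the policy-gradient loss) to measure its effect on reasoning performance, which is the kind of controlled ablation the abstract refers to.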