Safe-Support Q-Learning: Learning without Unsafe Exploration
📰 ArXiv cs.AI
arXiv:2604.25379v1 Announce Type: cross

Abstract: Ensuring safety during reinforcement learning (RL) training is critical in real-world applications where unsafe exploration can lead to devastating outcomes. While most safe RL methods mitigate risk through constraints or penalization, they still allow exploration of unsafe states during training. In this work, we adopt a stricter safety requirement that eliminates unsafe state visitation during training. To achieve this goal, we propose Safe-Support Q-Learning. [Abstract truncated in source.]
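The core idea, as the abstract describes it, is Q-learning in which exploration never visits an unsafe state. The paper's actual algorithm is not shown here, but a minimal sketch of the general pattern is a tabular Q-learner that masks out any action whose successor leaves a known safe set, both when acting and when forming the bootstrap target. All names below (the corridor environment, `safe_actions`, the hyperparameters) are illustrative assumptions, not the paper's method.

```python
import random

# Hypothetical 1-D corridor: states 0..5, state 5 is the goal, state 0 is unsafe.
# This sketch assumes the safe set is known a priori, which is what makes a
# zero-unsafe-visits training guarantee possible at all.
N_STATES, GOAL, UNSAFE = 6, 5, {0}
ACTIONS = (-1, +1)  # 0 = left, 1 = right

def step(s, a):
    """Deterministic transition, clipped to the state range."""
    return max(0, min(N_STATES - 1, s + ACTIONS[a]))

def safe_actions(s):
    """Restrict the action set to moves whose successor stays safe."""
    return [a for a in range(len(ACTIONS)) if step(s, a) not in UNSAFE]

def train(episodes=200, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    visited = set()
    for _ in range(episodes):
        s = 1
        while s != GOAL:
            visited.add(s)
            acts = safe_actions(s)
            # Epsilon-greedy exploration, but only over the safe action set.
            if rng.random() < eps:
                a = rng.choice(acts)
            else:
                a = max(acts, key=lambda i: Q[s][i])
            s2 = step(s, a)
            r = 1.0 if s2 == GOAL else 0.0
            # The bootstrap target also maximizes only over safe actions,
            # so unsafe successors never influence the value estimates.
            target = r + gamma * max(Q[s2][a2] for a2 in safe_actions(s2)) * (s2 != GOAL)
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q, visited

Q, visited = train()
print(sorted(visited), UNSAFE.isdisjoint(visited))
```

Because the mask is applied before any action is sampled, the unsafe state is never entered even during exploratory steps; the trade-off is that the agent can only guarantee this for a safe set it already knows.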