Interval POMDP Shielding for Imperfect-Perception Agents

📰 ArXiv cs.AI

arXiv:2604.20728v1 Announce Type: new Abstract: Autonomous systems that rely on learned perception can make unsafe decisions when sensor readings are misclassified. We study shielding for this setting: given a proposed action, a shield blocks it if it could violate safety. We consider the common case where system dynamics are known but perception uncertainty must be estimated from finite labeled data. From these data we build confidence intervals for the probabilities of perception outcomes …
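The abstract describes building confidence intervals for perception-outcome probabilities from finite labeled data. As an illustrative sketch only (the paper's actual interval construction is not given in this excerpt), one standard choice is the Wilson score interval for the empirical accuracy of each perception outcome; the function name and the example counts below are hypothetical:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion.

    A common way to bound an outcome probability estimated from n labeled
    samples; z = 1.96 gives an approximate 95% interval.
    """
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    # Clamp to [0, 1] since the quantity is a probability.
    return max(0.0, center - half), min(1.0, center + half)

# Hypothetical example: a perception outcome observed in 180 of 200 labeled samples.
lo, hi = wilson_interval(180, 200)
```

A shield of the kind described would then reason over the whole interval [lo, hi] rather than the point estimate, blocking an action only if it is safe under every probability in the interval.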

Published 23 Apr 2026