Regional Explanations: Bridging Local and Global Variable Importance

📰 ArXiv cs.AI

arXiv:2604.11223v1 Announce Type: cross

Abstract: We analyze two widely used local attribution methods, Local Shapley Values and LIME, which aim to quantify the contribution of a feature value $x_i$ to a specific prediction $f(x_1, \dots, x_p)$. Despite their widespread use, we identify fundamental limitations in their ability to reliably detect locally important features, even under ideal conditions with exact computations and independent features. We argue that a sound local attribution method …
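
The abstract's idealized setting, exact Shapley values with independent features, can be made concrete with a small sketch. Below is a minimal Python implementation of exact local Shapley values by enumerating all coalitions; the function name `exact_shapley`, the background-sample estimator of the coalition value $v(S)$, and the toy linear model are illustrative assumptions, not taken from the paper.

```python
import math
from itertools import combinations

import numpy as np


def exact_shapley(f, x, background, p):
    """Exact local Shapley values for a p-feature model.

    v(S) is estimated as E[f(x_S, X_rest)] by clamping the features in S
    to x and averaging f over background rows for the rest. Replacing
    features marginally like this is only valid when features are
    independent -- the ideal setting the abstract assumes.
    """
    def value(S):
        Z = background.copy()
        Z[:, list(S)] = x[list(S)]   # clamp coalition S to the point x
        return f(Z).mean()           # marginal expectation over the rest

    phi = np.zeros(p)
    for i in range(p):
        others = [j for j in range(p) if j != i]
        for k in range(p):
            # Shapley kernel weight: |S|! (p - |S| - 1)! / p!
            w = math.factorial(k) * math.factorial(p - k - 1) / math.factorial(p)
            for S in combinations(others, k):
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi


# Toy usage (hypothetical): for a linear model with independent features,
# the exact Shapley value reduces to beta_i * (x_i - E[X_i]).
rng = np.random.default_rng(0)
beta = np.array([1.0, -2.0, 0.5])
f = lambda X: X @ beta
background = rng.normal(size=(1000, 3))
x = np.array([1.0, 1.0, 1.0])
print(exact_shapley(f, x, background, p=3))  # approx. beta * (x - mean)
```

The subset enumeration costs $O(2^p)$ evaluations of $v$, which is why exact computation is feasible only for small $p$; the paper's point is that the limitations it identifies persist even in this exact, independent-feature regime.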

Published 14 Apr 2026