An Isotropic Approach to Efficient Uncertainty Quantification with Gradient Norms

📰 ArXiv cs.AI

arXiv:2603.29466v1 Announce Type: cross Abstract: Existing methods for quantifying predictive uncertainty in neural networks are either computationally intractable for large language models or require access to training data that is typically unavailable. We derive a lightweight alternative through two approximations: a first-order Taylor expansion that expresses uncertainty in terms of the gradient of the prediction and the parameter covariance, and an isotropy assumption on the parameter covariance.
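The two approximations in the abstract can be sketched in a few lines. Under a first-order Taylor (delta-method) expansion, the predictive variance is the quadratic form gᵀΣg, where g is the gradient of the prediction with respect to the parameters and Σ the parameter covariance; if Σ is assumed isotropic, Σ = σ²I, this collapses to σ²‖g‖². The toy logistic model, the σ² value, and all function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def predict(theta, x):
    """Toy scalar model (assumed for illustration): logistic regression output."""
    return 1.0 / (1.0 + np.exp(-x @ theta))

def prediction_gradient(theta, x):
    """Analytic gradient of the sigmoid prediction w.r.t. the parameters theta."""
    p = predict(theta, x)
    return p * (1.0 - p) * x

def delta_method_variance(theta, x, cov):
    """Full first-order Taylor (delta-method) variance: g^T Sigma g."""
    g = prediction_gradient(theta, x)
    return float(g @ cov @ g)

def gradient_norm_uncertainty(theta, x, sigma2=0.1):
    """Isotropic shortcut: with Sigma = sigma2 * I, the variance is sigma2 * ||g||^2.
    sigma2 is a hypothetical scale, not a value from the paper."""
    g = prediction_gradient(theta, x)
    return sigma2 * float(g @ g)

theta = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.8])
sigma2 = 0.1

# The isotropic gradient-norm proxy matches the full quadratic form
# when the covariance really is sigma2 * I.
full = delta_method_variance(theta, x, sigma2 * np.eye(3))
iso = gradient_norm_uncertainty(theta, x, sigma2)
print(full, iso)
```

The practical appeal, per the abstract, is that the gradient-norm form needs only a backward pass per prediction and no access to training data, since the single scalar σ² replaces the full parameter covariance.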

Published 1 Apr 2026