FaCT: Faithful Concept Traces for Explaining Neural Network Decisions

📰 ArXiv cs.AI

arXiv:2510.25512v2 Announce Type: replace-cross

Abstract: Deep networks have shown remarkable performance across a wide range of tasks, yet gaining a global, concept-level understanding of how they function remains a key challenge. Many post-hoc concept-based approaches have been introduced to explain their workings, but these are not always faithful to the model. Further, they make restrictive assumptions about the concepts a model learns, such as class-specificity, small spatial extent, or align…

Published 15 Apr 2026