TACENR: Task-Agnostic Contrastive Explanations for Node Representations
📰 ArXiv cs.AI
arXiv:2604.19372v1 Announce Type: cross

Abstract: Graph representation learning has achieved notable success in encoding graph-structured data into latent vector spaces, enabling a wide range of downstream tasks. However, the resulting node representations remain opaque and difficult to interpret. Existing explainability methods focus primarily on supervised settings or on explaining individual representation dimensions, leaving a critical gap: explaining the overall structure of node representations.