Latent Structure of Affective Representations in Large Language Models
arXiv:2604.07382v2 [cs.AI] Announce Type: replace-cross

Abstract: The geometric structure of latent representations in large language models (LLMs) is an active area of research, driven in part by its implications for model transparency and AI safety. Existing literature has focused mainly on general geometric and topological properties of the learnt representations, but due to a lack of ground-truth latent geometry, validating the findings of such approaches is challenging. Emotion processing provides …