LLMs Should Incorporate Explicit Mechanisms for Human Empathy

📰 arXiv cs.AI

arXiv:2604.10557v1 (Announce Type: cross)

Abstract: This paper argues that Large Language Models (LLMs) should incorporate explicit mechanisms for human empathy. As LLMs become increasingly deployed in high-stakes, human-centered settings, their success depends not only on correctness or fluency but on faithful preservation of human perspectives. Yet current LLMs systematically fail at this requirement: even when well-aligned and policy-compliant, they often attenuate affect, misrepresent contextu…

Published 14 Apr 2026