Analyzing LLM Reasoning to Uncover Mental Health Stigma

📰 ArXiv cs.AI

arXiv:2604.25053v1 Announce Type: cross Abstract: While large language models (LLMs) are increasingly being explored for mental health applications, recent studies reveal that they can exhibit stigma toward individuals with psychological conditions. Existing evaluations of this stigma primarily rely on multiple-choice questions (MCQs), which fail to capture the biases embedded within the models' underlying logic. In this paper, we analyze the intermediate reasoning steps of LLMs to uncover hidde

Published 29 Apr 2026
Read full paper → ← Back to Reads