Multi-Persona Thinking for Bias Mitigation in Large Language Models

📰 ArXiv cs.AI

arXiv:2601.15488v2 | Announce type: replace-cross

Abstract: Large Language Models (LLMs) exhibit social biases, which can lead to harmful stereotypes and unfair outcomes. We propose **Multi-Persona Thinking (MPT)**, a simple inference-time framework that reduces social bias by encouraging reasoning from multiple perspectives. MPT guides the model to consider contrasting social identities, such as male and female, together with a neutral viewpoint. These viewpoints then interact through an ite…
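The abstract outlines the idea at a high level: prompt the model once per persona, then let the answers interact. A minimal sketch of that flow might look like the following. Everything here is an assumption, not the paper's actual implementation: the persona wording, the single synthesis round (the abstract's iterative interaction is cut short by the truncation), and `query_model`, a hypothetical stand-in for whatever LLM client you use.

```python
# Hypothetical sketch of Multi-Persona Thinking (MPT)-style prompting.
# `query_model` is a placeholder callable: prompt string in, answer string out.

PERSONAS = [
    "a male perspective",
    "a female perspective",
    "a neutral perspective",
]

def build_persona_prompts(question: str) -> list[str]:
    """Create one prompt per persona for the same underlying question."""
    return [
        f"Answer the following question from {persona}, "
        f"being careful to avoid stereotypes:\n{question}"
        for persona in PERSONAS
    ]

def mpt_answer(question: str, query_model) -> str:
    # Stage 1: collect an independent answer from each persona.
    persona_answers = [query_model(p) for p in build_persona_prompts(question)]
    # Stage 2: let the viewpoints interact via a synthesis prompt.
    # The paper describes an iterative interaction; this sketch
    # compresses it to a single reconciliation round.
    joined = "\n---\n".join(persona_answers)
    synthesis_prompt = (
        "The following are answers to one question from different "
        f"perspectives:\n{joined}\n"
        f"Reconcile them into a single unbiased answer to:\n{question}"
    )
    return query_model(synthesis_prompt)
```

Because MPT is inference-time only, a wrapper like this needs no fine-tuning and can sit in front of any chat model; the cost is one extra model call per persona plus the synthesis call.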

Published 17 Apr 2026