Overstating Attitudes, Ignoring Networks: LLM Biases in Simulating Misinformation Susceptibility

📰 ArXiv cs.AI

arXiv:2602.04674v2 Announce Type: replace-cross Abstract: Large language models (LLMs) are increasingly used as proxies for human judgment in computational social science, yet their ability to reproduce patterns of susceptibility to misinformation remains unclear. We test whether LLM-simulated survey respondents, prompted with participant profiles drawn from social survey data measuring network, demographic, attitudinal, and behavioral features, can reproduce human patterns of misinformation belief.

Published 13 Apr 2026