PrivacyReasoner: Can LLM Emulate a Human-like Privacy Mind?

📰 ArXiv cs.AI

arXiv:2601.09152v2 Announce Type: replace

Abstract: Prior work on LLM-based privacy focuses on norm judgment over synthetic vignettes, rather than how people think about a specific data practice and formulate their opinions. We address this gap by designing PrivacyReasoner, an agent architecture grounded in three key ideas: (1) LLMs can detect subtle privacy cues in natural language and role-play human characteristics; (2) a user's "privacy mind" can be reconstructed from their real-world onli…

Published 15 Apr 2026