Large Language Models Outperform Humans in Fraud Detection and Resistance to Motivated Investor Pressure
arXiv cs.AI
arXiv:2604.20652v2 Announce Type: new

Abstract: Large language models trained on human feedback may suppress fraud warnings when investors arrive already persuaded of a fraudulent opportunity. We tested this in a preregistered experiment across seven leading LLMs and twelve investment scenarios covering legitimate, high-risk, and objectively fraudulent opportunities, combining 3,360 AI advisory conversations with a 1,201-participant human benchmark. Contrary to predictions, motivated investor fr