The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows

📰 ArXiv cs.AI

arXiv:2604.14807v1

Abstract: The rapid integration of large language models (LLMs) into everyday workflows has transformed how individuals perform cognitive tasks such as writing, programming, analysis, and multilingual communication. While prior research has focused on model reliability, hallucination, and user trust calibration, less attention has been given to how LLM usage reshapes users' perceptions of their own capabilities. This paper introduces the LLM fallacy, a cogni

Published 17 Apr 2026