Large language models eroding science understanding: an experimental study

📰 ArXiv cs.AI

arXiv:2604.25639v1 Announce Type: cross. Abstract: This paper is under review in AI and Ethics. This study examines whether large language models (LLMs) can reliably answer scientific questions and demonstrates how easily they can be influenced by fringe scientific material. The authors modified custom LLMs to prioritise knowledge from selected fringe papers on the fine-structure constant and gravitational waves, then compared their responses with those of domain experts and standard LLMs. The alter…

Published 29 Apr 2026