Are they human? Detecting large language models by probing human memory constraints

📰 ArXiv cs.AI

Researchers propose probing human memory constraints to detect large language models posing as participants in online behavioral research

Published 2 Apr 2026
Action Steps
  1. Identify human memory constraints that can be used to distinguish humans from LLMs
  2. Design probes to test these constraints, such as working memory capacity or long-term memory recall
  3. Implement these probes in online behavioral research studies to detect potential LLM participants
  4. Analyze results to determine the effectiveness of these probes in distinguishing humans from LLMs
Who Needs to Know This

AI researchers and data scientists can use this research to improve the validity of online studies, while machine learning engineers can apply the same findings when developing more sophisticated LLMs.

Key Insight

💡 Probing human memory constraints can be an effective way to detect large language models and ensure the validity of online behavioral research

Share This
🤖 Can you tell if a study participant is human or a large language model? 🤔 Researchers propose using memory constraints to find out!