METER: Evaluating Multi-Level Contextual Causal Reasoning in Large Language Models
📰 ArXiv cs.AI
arXiv:2604.11502v1 Announce Type: cross Abstract: Contextual causal reasoning is a critical yet challenging capability for Large Language Models (LLMs). Existing benchmarks, however, often evaluate this skill in fragmented settings, failing to ensure context consistency or to cover the full causal hierarchy. To address this, we introduce METER to systematically benchmark LLMs across all three levels of the causal ladder (association, intervention, and counterfactuals) under a unified context setting. Our extensive evaluation of various LLMs reveals