Causality Laundering: Denial-Feedback Leakage in Tool-Calling LLM Agents

📰 ArXiv cs.AI

arXiv:2604.04035v1 Announce Type: cross Abstract: Tool-calling LLM agents can read private data, invoke external services, and trigger real-world actions, creating a security problem at the point of tool execution. We identify a denial-feedback leakage pattern, which we term causality laundering, in which an adversary probes a protected action, learns from the denial outcome, and exfiltrates the inferred information through a later seemingly benign tool call. This attack is not captured by flat

Published 7 Apr 2026
Read full paper → ← Back to News