Beyond A Fixed Seal: Adaptive Stealing Watermark in Large Language Models

📰 ArXiv cs.AI

arXiv:2604.10893v1 · Announce type: cross

Abstract: Watermarking provides a critical safeguard for large language model (LLM) services by facilitating the detection of LLM-generated text. Correspondingly, stealing watermark algorithms (SWAs) derive watermark information from watermarked texts generated by victim LLMs to craft highly targeted adversarial attacks, which compromise the reliability of watermarks. Existing SWAs rely on fixed strategies, overlooking the non-uniform distribution of stole…

Published 14 Apr 2026