Secret Stealing Attacks on Local LLM Fine-Tuning through Supply-Chain Model Code Backdoors
arXiv cs.AI
arXiv:2604.27426v1 Announce Type: cross
Abstract: Local fine-tuning datasets routinely contain sensitive secrets such as API keys, personal identifiers, and financial records. Although "local offline fine-tuning" is often viewed as a privacy boundary, we reveal that compromised model code alone is sufficient to steal these secrets. Current passive pretrained-weight poisoning attacks, while effective for natural language, fundamentally fail to capture such sparse, high-entropy targets due to their reliance on …
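The abstract's characterization of secrets as "sparse, high-entropy targets" can be made concrete with a short sketch. The scanner below (a hypothetical illustration; the function names, regex, and thresholds are assumptions, not from the paper) flags long alphanumeric tokens whose per-character Shannon entropy is well above that of ordinary prose, which is exactly the kind of token, such as an API key, that passive weight poisoning struggles to memorize:

```python
import math
import re
from collections import Counter


def shannon_entropy(s: str) -> float:
    """Shannon entropy in bits per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())


def find_high_entropy_tokens(text: str, min_len: int = 16, threshold: float = 4.0):
    """Flag candidate secrets: long alphanumeric tokens with high per-char entropy.

    min_len and threshold are illustrative defaults, not values from the paper.
    """
    candidates = re.findall(r"[A-Za-z0-9_\-]{%d,}" % min_len, text)
    return [t for t in candidates if shannon_entropy(t) >= threshold]


if __name__ == "__main__":
    # A made-up training record: the key-like token is flagged, the prose is not.
    sample = "user=alice key=sk-9fQ2xLr8VbT0nWc4ZaP7 note: meeting at noon"
    print(find_high_entropy_tokens(sample))  # ['sk-9fQ2xLr8VbT0nWc4ZaP7']
```

English prose sits around 3 bits of entropy per character, while random key material approaches log2 of the alphabet size (roughly 5 to 6 bits), which is why an entropy cutoff separates the two so cleanly and why such tokens are statistically rare in a fine-tuning corpus.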