XFED: Non-Collusive Model Poisoning Attack Against Byzantine-Robust Federated Classifiers
arXiv:2604.09489v1 Announce Type: cross Abstract: Model poisoning attacks pose a significant security threat to Federated Learning (FL). Most existing model poisoning attacks rely on collusion, requiring adversarial clients to coordinate by exchanging local benign models and synchronizing the generation of their poisoned updates. However, sustaining such coordination is increasingly impractical in real-world FL deployments, as it effectively requires botnet-like control over many devices. This a…
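The abstract refers to Byzantine-robust aggregation without spelling out what such a defense looks like. The sketch below is not from the paper; it is a minimal illustration of one well-known robust aggregation rule, coordinate-wise median (as in Yin et al., 2018), assumed here purely to show the kind of server-side defense that model poisoning attacks must evade. The function name, client counts, and toy update values are all illustrative choices, not anything stated in the source.

```python
import numpy as np

def coordinate_wise_median(updates: np.ndarray) -> np.ndarray:
    """Aggregate client updates by taking the median of each coordinate.

    updates: array of shape (num_clients, num_params), one row per client.
    Returns the aggregated update of shape (num_params,).
    """
    return np.median(updates, axis=0)

# Toy round: 8 benign clients plus 2 colluding poisoners (hypothetical numbers).
rng = np.random.default_rng(0)
benign = rng.normal(loc=0.1, scale=0.05, size=(8, 4))  # honest gradient updates
poisoned = np.full((2, 4), -5.0)                       # coordinated outlier updates
all_updates = np.vstack([benign, poisoned])

print(coordinate_wise_median(all_updates))  # stays near the benign updates
print(all_updates.mean(axis=0))             # plain averaging is dragged toward the poison
```

Running this shows why plain federated averaging is fragile (two coordinated outliers shift the mean substantially) while the median ignores a minority of extreme updates; defeating such rules without client-to-client coordination is the setting the paper's non-collusive attack targets.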