AFSS: Artifact-Focused Self-Synthesis for Mitigating Bias in Audio Deepfake Detection

📰 ArXiv cs.AI

AFSS mitigates bias in audio deepfake detection by generating pseudo-fake samples from real audio via self-conversion and self-reconstruction.

Published: 31 Mar 2026
Action Steps
  1. Generate pseudo-fake samples from real audio using self-conversion
  2. Use self-reconstruction to further refine the generated pseudo-fake samples
  3. Train audio deepfake detectors on the generated pseudo-fake samples to mitigate bias
  4. Evaluate the performance of the detectors on unseen datasets to assess the effectiveness of AFSS
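The summary above doesn't specify the models AFSS uses for self-conversion and self-reconstruction, but the core idea of step 2 can be sketched with a simple stand-in: magnitude-only resynthesis (Griffin-Lim phase re-estimation) of real audio. The output keeps the original content and speaker while carrying vocoder-like resynthesis artifacts, which is the kind of pseudo-fake a detector could be trained on. All function names and parameters here are illustrative, not from the paper; detector training (step 3) is omitted.

```python
import numpy as np

N_FFT, HOP = 512, 128  # assumed analysis parameters, not from the paper

def stft(x):
    """Frame the signal with a Hann window and take the FFT of each frame."""
    win = np.hanning(N_FFT)
    frames = [x[i:i + N_FFT] * win for i in range(0, len(x) - N_FFT + 1, HOP)]
    return np.fft.rfft(np.asarray(frames), axis=1)

def istft(spec):
    """Overlap-add inverse of `stft`, with window-power normalization."""
    win = np.hanning(N_FFT)
    out = np.zeros(HOP * (len(spec) - 1) + N_FFT)
    norm = np.zeros_like(out)
    for i, s in enumerate(spec):
        out[i * HOP:i * HOP + N_FFT] += np.fft.irfft(s, N_FFT) * win
        norm[i * HOP:i * HOP + N_FFT] += win ** 2
    return out / np.maximum(norm, 1e-8)

def pseudo_fake(x, n_iter=8, seed=0):
    """Self-reconstruct real audio: discard the phase, keep the magnitude
    spectrogram, and re-estimate phase via Griffin-Lim iterations. The result
    sounds like `x` but contains resynthesis artifacts, like a vocoded fake."""
    mag = np.abs(stft(x))
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))  # random initial phase
    for _ in range(n_iter):
        y = istft(mag * phase)                # back to the time domain
        phase = np.exp(1j * np.angle(stft(y)))  # keep phase, reimpose magnitude
    return istft(mag * phase)

# Example: turn one second of "real" audio (a 440 Hz tone) into a pseudo-fake.
sr = 16000
real = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
fake = pseudo_fake(real)
```

In a full AFSS-style pipeline, pairs like `(real, label=0)` and `(fake, label=1)` would then feed step 3, so the detector learns artifact cues from the reconstruction process rather than spurious dataset-specific cues.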
Who Needs to Know This

AI engineers and researchers working on audio deepfake detection can use AFSS to improve the generalization of their models to unseen datasets. It can also be useful for data scientists working on fairness and bias mitigation in machine learning models.

Key Insight

💡 Generating pseudo-fake samples from real audio can help mitigate bias in audio deepfake detection
