Exploring the impact of fairness-aware criteria in AutoML

📰 ArXiv cs.AI

arXiv:2604.10224v1 Announce Type: cross Abstract: Machine Learning (ML) systems are increasingly used to support decision-making processes that affect individuals. However, these systems often rely on biased data, which can lead to unfair outcomes against specific groups. With the growing adoption of Automated Machine Learning (AutoML), the risk of intensifying discriminatory behaviours increases, as most frameworks primarily focus on model selection to maximise predictive performance. Previous…
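To make the idea of a "fairness-aware criterion" concrete, here is a minimal sketch (not from the paper) of demographic parity difference, one common group-fairness metric that an AutoML objective could trade off against accuracy. The function name and the combined-score formula are illustrative assumptions, not the paper's method.

```python
def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: list of 0/1 predictions
    group:  list of group labels (exactly two distinct values expected)
    """
    groups = sorted(set(group))
    assert len(groups) == 2, "expects exactly two groups"
    rates = []
    for g in groups:
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])


def fairness_aware_score(accuracy, dpd, lam=0.5):
    """Hypothetical combined objective: penalise accuracy by the
    demographic parity gap, weighted by lam."""
    return accuracy - lam * dpd
```

A fairness-aware AutoML framework could rank candidate pipelines by such a combined score instead of accuracy alone, so a model that predicts positively only for one group is penalised even if its raw accuracy is high.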

Published 14 Apr 2026