by Jonathan Vasquez
Higher education institutions increasingly rely on machine learning algorithms to foster student success, improve the classroom experience, and streamline operations. Applications include predictive models of academic performance and college admission, as well as degree planning tools. However, a growing body of evidence shows that machine learning algorithms may not serve underprivileged communities well and at times discriminate against minority populations. This is all the more concerning in education, where negative outcomes have long-term implications.
This paper presents FairEd, a dashboard that outputs a model card to help the decision-maker understand a model’s performance along three dimensions: predictive power, fairness, and responsiveness to mitigation strategies. When selecting a machine learning algorithm, our tool allows the decision-maker to assess (i) whether the model provides accurate predictions; (ii) how the model performs across multiple fairness metrics and demographic groups; and (iii) whether potentially unfair outcomes can be mitigated without degrading predictive accuracy. We apply our tool to models predicting college student dropout at a Chilean university. First, our tool captures the nuances of the Chilean context, where unfairness emerges along income lines. Second, our dashboard highlights the benefit of reporting a model’s fairness performance along a diverse set of metrics to shed light on its potentially discriminatory behavior. Third, we find that the cost of fairness – the loss in predictive power when mitigating unfair outcomes – is an important quantity to report when doing model selection.
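To make these three dimensions concrete, the following is a minimal sketch of the kinds of quantities such a model card could report: predictive accuracy, a demographic parity gap across an income-based group, and the accuracy loss incurred by a simple mitigation. The synthetic data, variable names, and the specific mitigation (dropping the group feature) are illustrative assumptions, not the paper's actual implementation or the FairEd dashboard itself.

```python
# Illustrative sketch (not the FairEd implementation): accuracy, a fairness gap,
# and the "cost of fairness" for a baseline vs. a naively mitigated model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic student data: two features, a binary income-group indicator,
# and a dropout label correlated with the group to create a disparity.
n = 4000
group = rng.integers(0, 2, size=n)               # 1 = low-income group (assumption)
x = rng.normal(size=(n, 2)) + group[:, None] * 0.5
y = (x[:, 0] + 0.8 * group + rng.normal(size=n) > 0.8).astype(int)

X = np.column_stack([x, group])
train = np.arange(n) < 3000
test = ~train

def demographic_parity_gap(y_pred, g):
    """Absolute difference in predicted dropout rates between the two groups."""
    return abs(y_pred[g == 1].mean() - y_pred[g == 0].mean())

# (i) Predictive power: a baseline model that uses the group feature.
baseline = LogisticRegression().fit(X[train], y[train])
pred_base = baseline.predict(X[test])
acc_base = accuracy_score(y[test], pred_base)
gap_base = demographic_parity_gap(pred_base, group[test])

# (iii) A crude mitigation: drop the group feature ("fairness through unawareness").
mitigated = LogisticRegression().fit(x[train], y[train])
pred_mit = mitigated.predict(x[test])
acc_mit = accuracy_score(y[test], pred_mit)
gap_mit = demographic_parity_gap(pred_mit, group[test])

# Cost of fairness: loss in predictive power incurred by the mitigation.
print(f"baseline : accuracy={acc_base:.3f}  parity gap={gap_base:.3f}")
print(f"mitigated: accuracy={acc_mit:.3f}  parity gap={gap_mit:.3f}")
print(f"cost of fairness (accuracy loss): {acc_base - acc_mit:.3f}")
```

A model card in the spirit of the paper would report such accuracy and fairness figures side by side, for several fairness metrics and mitigation strategies, so the decision-maker can weigh the fairness gains against the predictive cost.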