CIRCLS features relevant primers found in the literature. We welcome new primers on similar topics, but written more specifically to address the needs of the RETTL community. Have a primer to recommend? Contact CIRCLS.
Title: Algorithmic Fairness in Education
Authors: René F. Kizilcec and Hansol Lee
Data-driven predictive models are increasingly used in education to support students, instructors, and administrators. However, there are concerns about the fairness of the predictions and uses of these algorithmic systems. In this introduction to algorithmic fairness in education, we draw parallels to prior literature on educational access, bias, and discrimination, and we examine core components of algorithmic systems (measurement, model learning, and action) to identify sources of bias and discrimination in the process of developing and deploying these systems. Statistical, similarity-based, and causal notions of fairness are reviewed and contrasted in how they apply in educational contexts. Recommendations for policy makers and developers of educational technology offer guidance on how to promote algorithmic fairness in education.
Kizilcec, R. F., & Lee, H. (Forthcoming). Algorithmic fairness in education. In W. Holmes & K. Porayska-Pomsta (Eds.), Ethics in Artificial Intelligence in Education. Taylor & Francis.