by Pati Ruiz
“For algorithms that have the potential to ruin people’s lives, or sharply reduce their options with regard to their liberty, their livelihood, their finances, we need an ‘FDA for algorithms’ that says, ‘show me evidence that it’s going to work, not just to make you money, but for society.’” – Cathy O’Neil, an academic and the author of Weapons of Math Destruction
The film Coded Bias included the quote above by Cathy O’Neil. In this post, I consider this statement in the context of teaching and learning.
We know that education has significant impacts on people’s lives: how they learn, what they learn, their career trajectories, and beyond. With this power comes the responsibility to practice sound data ethics. The field of explainable artificial intelligence (XAI), which has grown rapidly in recent years, comprises processes and methods that allow humans to better understand the results and outputs of machine learning algorithms. XAI helps developers of AI-mediated tools understand how the systems they design work, and it can help them ensure that those systems work correctly and meet requirements and regulatory standards. It also paves the path for accountability and transparency in AI.
There are several advantages to understanding how AI-enabled systems arrive at specific outputs. For example, XAI can help developers make sure a system is fair and working as expected. In classrooms, this might mean ensuring that appropriate student identifiers are used to match students up for group work, that the matching criteria are clear to the teacher, and that the teacher has the ability to override and retrain the AI. Explainability plays an important role in allowing those affected by the decisions or results of an AI-enabled system to understand those outcomes and make changes.
Why does Explainable AI matter?
As a recent IBM report explains, “It is crucial for an organization to have a full understanding of the AI decision-making processes with model monitoring and accountability of AI and not to trust them blindly.” The report goes on to describe explainable AI as one of the key requirements of responsible AI. In other words, AI needs to be developed in ways that are fair and auditable, so that the designers of AI systems can be held accountable for the algorithms and networks they build.
In schools, this means ensuring that processes like teacher evaluations are clear and transparent. The film Coded Bias includes a vignette about a teacher whose contract was not renewed because of an algorithm that was not explainable: the algorithm was not transparent, and the teacher did not understand why he had been terminated. Explainable AI would have allowed school administrators to point to exactly which factors contributed to the recommendation to terminate this teacher, and they could then have overridden the AI’s recommendation if they disagreed.
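To make this concrete, here is a minimal sketch of what a transparent, factor-by-factor explanation might look like. The model, the factor names, the weights, and the threshold are all hypothetical, invented for illustration; real evaluation systems are far more complex. The point is only that a simple, interpretable scoring model lets an administrator see each factor’s contribution to the overall recommendation:

```python
# Hypothetical, illustrative example only -- these factors, weights, and
# threshold are invented and do not reflect any real evaluation system.
WEIGHTS = {
    "student_test_score_growth": 0.5,
    "classroom_observation": 0.3,
    "attendance": 0.2,
}
THRESHOLD = 3.0  # hypothetical cutoff below which non-renewal is flagged

def explain_evaluation(factors):
    """Return the overall score and each factor's contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in factors.items()}
    return sum(contributions.values()), contributions

score, contributions = explain_evaluation({
    "student_test_score_growth": 2.0,
    "classroom_observation": 4.0,
    "attendance": 5.0,
})

# An administrator can now inspect every factor rather than trusting
# an opaque recommendation -- and override it if the breakdown looks wrong.
for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:+.2f}")
print(f"total score: {score:.2f}, flagged: {score < THRESHOLD}")
```

With a breakdown like this, the administrator in the vignette could have seen, for example, that a single factor drove the score down, questioned it, and kept the human decision in the loop.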
While it is still debated whether AI needs to be explainable, I think that XAI and the goal of fair and correct AI are essential pieces of the puzzle. Explaining decisions made by algorithms can answer the public call for accountability in AI, as well as the legal “right to explanation” required by policies such as the European Union’s General Data Protection Regulation (GDPR).
As a former teacher myself, I wonder about the possibility of bias negatively affecting students’ learning, and I seek to learn more and find answers. A more transparent AI system could help ensure student privacy and allow me to trust that the system is there to support my teaching rather than undermine it.
What considerations are important at your school? Please let us know by tweeting @EducatorCIRCLS, and sign up for the CIRCLS newsletter to stay updated on emerging technologies for teaching and learning.