“Algorithmic justice––making sure there’s oversight in the age of automation––is one of the largest civil rights concerns we have.” – Joy Buolamwini
On May 3rd, 2021, Educator CIRCLS hosted a watch party for the film Coded Bias, which highlights the incredible work being done by organizations, data scientists, and activists on an international scale. The film challenged our unconscious biases and encouraged us to listen to one another as we consider the ways we interact with artificial intelligence (AI) on a daily basis. The film made clear the wide societal impacts of AI, both positive and negative, as well as the fact that AI algorithms can perpetuate biases. Given this, we believe it is essential for educators to become more knowledgeable about AI so that we can make informed decisions about its use. As we watched, we considered and discussed the ethical implications that need to be fully investigated before new AI tools are adopted in our classrooms. The film also showed us that we need to look into the people designing AI systems, and it helped us arrive at some important questions we should be asking.
Here are some questions to consider when evaluating an AI tool:
- Was the AI system designed for classroom use or for other situations? At what point are teachers brought in to make decisions about their students?
- What data was used when the system was trained?
- What groups of people were included during the testing process?
- What data will be collected by the system, and what will happen to that data if the tool is sold? Will it be used only for the purpose specified? Are there any potential dangers to the students? Are there any potential dangers to the teachers who use the system with their students?
- Can students be identified from this data?
- Can teachers be identified from this data?
- Can this data be used to evaluate teachers’ performance (a use that may not be specified by the system)?
- How does the system interact with students, and can I give feedback to the system or override its decisions?
Another important but difficult question to answer is: When this AI tool fails, how does it fail, and what are the consequences? While EdTech designers might not be able to answer this question precisely, you can use it to start a conversation about the pitfalls of a particular piece of technology. It will also challenge EdTech designers to grapple with these difficult questions and revisit their design process to adjust the product if needed. After all, it is our duty to start these conversations about the ethics of AI and where its faults lie.