Author: Aditi Mallavarapu
Learning Sciences and Technology Postdoctoral Researcher at CIRCLS. Her research projects share the goal of collaborating with practitioners to design and build computational and analytical methods and tools that support and improve exploration-based learning. She has worked professionally as a technical consultant, developing software solutions for healthcare and financial organizations. As an instructor, she works with underserved communities to pique their interest in Computer Science.
This blog is the second in a three-part series shared between NEXUS and the Center for Integrative Research in Computing and Learning Sciences (CIRCLS). The first post described the synergy between the two communities and introduced the CIRCLS priority around broadening/inclusion in Learning Analytics/AI in education. In this post, we highlight the concerns about, and the importance of, “broadening” participation in AI-in-education research, raised equally by both communities.
The “Fate” of AI education research
Education, like many other fields, has been revolutionized in this era of datafication. Omnipresent machines, with their so-called “intelligence,” are being used to improve the way we learn and teach through devices and technologies, and to connect learners, teachers, and even families across ecologies (classrooms, museums, homes) to manage learning. Some innovations have begun to dominate the way we learn and remember, sometimes even remembering for us. The imaginative technologies envisioned in Star Trek, with communicators, talking virtual assistants, and video chats, have become our reality. But this reality has not been rolled out equitably across individuals, schools, or communities.
As AI technologies become intertwined with our daily lives, there are justifiable societal concerns around algorithmic fairness, accountability, trustworthiness, and ethics (“FATE”). Research is developing rapidly to ask: how can we, as a community, rethink AI-based technological progress to address this inequity? How can we address the concerns around privacy, trust, and bias that have become prevalent with the prolific use of data and recording devices in these AI technologies? Progress in defining the nature of the challenges, and the ways forward, is being made in both the Learning Analytics and AIED communities, but much remains to be done.
Researchers have suggested addressing these issues, in part, by broadening community engagement. With the recent transition to online learning due to the COVID-19 pandemic, the need to address these issues has become more urgent.
Addressing the issues by broadening engagement
For over a decade, researchers have been working synergistically across disciplines to address issues around equity, privacy, trust, and bias. Some researchers have highlighted humanizing these issues by engaging all stakeholders (learners, educators, caregivers, and domain experts) in contributing to the design of AI systems. One goal of broadening engagement is to consider, while designing an AI system, the complex dynamics that result from the multiple perspectives of the different stakeholders involved in a learning process. To fully achieve this, the design process must give stakeholders an active and respected role, which is non-trivial: the black-box opaqueness of many of these AI technologies makes it difficult for practitioners to contribute. But this should not be an excuse.
One way of giving everyone a platform to voice their opinions is to reduce this opaqueness by enacting and visualizing scenarios, making the design process about the humans involved in conceiving and using the system. Such a human-centered approach engages practitioners in conversations about what should be measured and how that measurement could be used in decisions, with the hope of mitigating at least some unwarranted applications and effects that a researcher alone might not anticipate.
Come be a part of the conversation!
We at CIRCLS have planned the CIRCLS’21 convening for the community around the theme of “Remake Broadening.” Broadening participation in the design of emergent technologies, like AI, is an important aspect of this initiative. The keynote speakers are deeply invested in broadening participation in Computer Science and AI education across different age groups and communities using emergent AI technologies. They plan to engage attendees in thinking about “designing for broadening” through “broadening participation in design.”
The community will also hear from researchers at the AI institutes iSat, AI-ALOE, and AIEngage.org (three of the 11 institutes that won the recent NSF “AI Institutes” competition). This session will highlight how both researchers and practitioners can contribute to and participate in AI research.
We invite SoLAR members to join the conversation. Our Expertise Connections session (September 13, 4pm Eastern: Equity and Ethics Considerations for AI) and our Strategy session (September 14, 3pm Eastern: Remake Broadening) will allow researchers and practitioners alike to survey the emerging landscape and think strategically about how we could remake the envisioned broadening. We’ve designed these sessions to engage participants with the most pressing topics in small-group activities, a “low floor and high ceiling” setting for practitioners and researchers alike that encourages understanding of each other’s perspectives.
We hope this plan will give all attendees the chance to shape the broadening process. We envision this convening as a first step toward “remaking broadening,” with more engagements to follow to keep the conversation going after the convening ends. We hope you’ll join us. You can see details about all the sessions when you register and explore Swapcard for CIRCLS’21.