Greg Chung

Greg Chung is the Associate Director of Technology and Research Innovation at the National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at the University of California, Los Angeles. His current work focuses on the impact of various learning technologies, such as games and intelligent tutoring systems, on learning and engagement outcomes; the design of telemetry systems for games; and analytical approaches to support the modeling of learning outcomes from fine-grained data.

Research interests: Indicator development in interactive systems, physical interactives, validity, and validation

Fun fact: He used to be an engineer writing software for payload testing of NASA and NRO satellites. Today, instead of testing satellites, he tests kids.

What makes you wake up every morning and want to work on emerging technologies for teaching and learning?

The thing that wakes me up every morning is the idea conveyed by Kamerlingh Onnes's phrase, coined in 1882: “Through measurement to knowledge.” Onnes's phrase succinctly conveys the critical role of measurement in science. I like working on problems that are focused on measuring how people learn, particularly in interactive environments. Human learning is fascinating because it is so varied and yet so systematic at the same time, while our tools and methods for measuring learning seem rooted in an earlier century. With emerging technologies—and I would say only with emerging technologies—we are given license to imagine, conceive, and create new ways of measuring learning—not to create better and faster standardized tests, but rather to make observable the processes and states of learning that are currently unobservable. For example, imagine if we could automatically sense that a learner is having trouble programming in Scratch, pinpoint the section of code that they are having problems with, and characterize the quality of their debugging strategies. If such information could be reliably obtained—to the same degree that you would trust a thermometer reading—wouldn't that create new opportunities for studying how people learn programming, enable new applications involving real-time feedback, make online assessments of programming skills feasible, and help realize the promise of individualized learning? Of course, much work remains to be done, which is what keeps me up at night.
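As a loose illustration of what "sensing trouble" might look like computationally, here is a minimal sketch in Python. Everything in it is assumed for illustration: the event schema, the block identifiers, and the three-consecutive-failures rule are hypothetical stand-ins, not a validated detector of struggle.

```python
# Toy sketch: flag sections of a block-based program where a learner may be
# stuck, based on repeated failed runs. The (block_id, succeeded) event
# schema and the threshold of 3 are hypothetical assumptions.

from collections import Counter

def flag_struggle(run_results, threshold=3):
    """run_results: list of (block_id, succeeded) pairs, one per program run.
    Returns the block IDs whose runs failed at least `threshold` times in a
    row, a crude signal that the learner may be stuck on that section."""
    flagged = set()
    streak = Counter()  # consecutive failures per block
    for block_id, succeeded in run_results:
        if succeeded:
            streak[block_id] = 0  # a success resets the failure streak
        else:
            streak[block_id] += 1
            if streak[block_id] >= threshold:
                flagged.add(block_id)
    return flagged

runs = [("move_loop", False), ("move_loop", False),
        ("move_loop", False), ("jump_if", True)]
print(flag_struggle(runs))  # -> {'move_loop'}
```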

What is your most recent insight from your project on emerging technologies for teaching and learning? What are 1-2 challenges you are facing?

I recently had an opportunity to closely observe a first grader playing a block-based programming game. I noticed that when she got stuck, she would engage in classic problem-solving behavior—observe the gameplay, update code, rerun code, and repeat. Her moment-to-moment behavior was systematic and seemed to reflect cognitive strategies. Yet while I can see and interpret a single child's programming behavior firsthand, doing so automatically and at scale is another matter. I see the major challenges as (a) developing methods and tools to detect learning behavior that faithfully reflects cognitive processes (via algorithms that operate on learners' game interaction data), and (b) ensuring that the algorithms and evidence gathered are traceable and inspectable—so that one can review, critique, and agree or disagree with the interpretation of an indicator. That is, to make transparent the process of going from ‘clicks to constructs.’ One long-term practical outcome, if we can address these challenges, is to create methods that enable researchers whose focus is learning—the subject matter experts in our work—to express the learning process they are looking for in an interactive system (e.g., game or simulation) in a way that can be transformed into code. Such a capability will presumably improve the quality of indicators (i.e., indicators that represent important learning processes vs. indicators that are merely predictive of learning outcomes but have little substantive meaning or explanatory value), make indicator development more transparent, give learning scientists new capabilities to study interactive systems, and increase standardization of indicators across interactive systems.
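To make the ‘clicks to constructs’ idea concrete, below is a minimal sketch under assumed conditions: a hypothetical event vocabulary (observe, edit, run) and a hypothetical pattern that expresses the expert's description of debugging as an explicit rule over interaction events. The detector returns the raw events behind each match, so the evidence stays reviewable.

```python
# Sketch of one "clicks to constructs" step: an expert-described debugging
# loop ("observe the gameplay, update code, rerun code, and repeat") encoded
# as an explicit, inspectable pattern over hypothetical telemetry events.

from dataclasses import dataclass

@dataclass
class Event:
    t: float    # timestamp in seconds
    kind: str   # e.g., "observe", "edit", "run"

def find_debug_cycles(events):
    """Return each observe -> edit -> run triple as the evidence behind a
    'systematic debugging' indicator, so an analyst can review, critique,
    and agree or disagree with the interpretation."""
    cycles, pending = [], []
    expected = ("observe", "edit", "run")
    for e in events:
        if e.kind == expected[len(pending)]:
            pending.append(e)
            if len(pending) == 3:   # full cycle matched
                cycles.append(tuple(pending))
                pending = []
        elif e.kind == expected[0]:
            pending = [e]           # restart the pattern on a fresh observe
    return cycles

log = [Event(0.0, "run"), Event(2.1, "observe"), Event(5.4, "edit"),
       Event(6.0, "run"), Event(8.2, "observe"), Event(9.0, "run")]
for cycle in find_debug_cycles(log):
    print([(e.kind, e.t) for e in cycle])
# -> [('observe', 2.1), ('edit', 5.4), ('run', 6.0)]
```

Because both the rule and its supporting events are explicit, any indicator value can be traced back to the clicks that produced it, which is the kind of transparency the challenges above call for.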

What would you like policymakers (e.g., Congress) to know about your work? What would you recommend they understand about technology in learning and teaching?

I think what is important about our work is that we are attempting to improve the quality of the measurement of learning in interactive systems. The work is not sexy and there is no crowd-pleasing demo; rather, the work is fundamental to producing better inferences about what learners are doing in an interactive task, and how and why they are performing the way they are. Regardless of whether the interaction data comes from a game, a simulation, or even a hands-on task, any improvement in measurement will lead to higher-quality inferences about the learner. I think it is important for policymakers to be aware of this chain of processing and to realize that inferences generated by platforms are based on some input, and the quality of the inferences depends heavily on the quality of the inputs. As learning increasingly occurs on digital platforms, there will be many more opportunities to observe students. Platform providers will leverage this observational capability for various purposes, and claims will be made about a platform's capability to detect and report on students' learning and achievement. The trustworthiness of those claims rests on the quality of the inputs—the indicators or variables—fed to the statistical model. Focusing on the model inputs will presumably result in higher-quality model outputs (or inferences) about the student.

If I had to make one recommendation, I would urge the CIRCLS community—with its unique mix of learning and measurement scientists—to help practitioners, policymakers, developers, and other researchers look at educational technologies through the proverbial magnifying lens: to examine closely, question, and even challenge the relation between the claims made about the effectiveness of an educational technology on learning and the specific mechanisms the educational technology uses to effect such learning.