
Collaborative Research: Development of Natural Language Processing Techniques to Improve Students’ Revision of Evidence Use in Argument Writing (NSF Award 2202347)

Principal Investigator: Diane Litman
Co-Principal Investigators: Richard Correnti, Lindsay Clare Matsumura
Organization: University of Pittsburgh
Abstract:
Writing is foundational to learning in multiple disciplines. It is a critical process by which students make sense of ideas – particularly from source texts – and bring them to bear to demonstrate their emerging understanding of concepts and to make sound arguments. Recognizing the importance of argumentative writing, multiple educational technologies driven by natural language processing (NLP) have been developed to support students and teachers in these processes. However, evidence that such systems improve writing skills is modest, especially for younger students. One reason is that NLP technologies have only recently matured to the point that it is possible to provide feedback keyed to the content of students’ writing. A second reason is that many students lack the strategic knowledge and skills needed to revise their essays even after receiving writing feedback. An educational technology that assesses students’ skill at revising their writing and provides feedback on their revision attempts would support the development of this critical skill while placing no additional burden on teachers. Such a technology has the potential to prepare a new generation of students to write and revise argumentative essays productively, a skill they will need in the educational and workplace settings of the future.

To address the limitations of existing educational technologies for writing, the research team will develop a system that leverages NLP to provide students with formative feedback on the quality of their revisions. The team will 1) develop and establish the reliability and validity of new measures of revision quality in response to formative feedback on evidence use, 2) use NLP to automate the scoring of revisions using these measures, 3) provide formative feedback to students based on the automated revision scoring, and 4) evaluate the utility of this feedback in improving student writing and revision in classroom settings. The team hypothesizes that such a system will improve students’ implementation of feedback messages on text-based argument writing, leading to more successful revision and, ultimately, more successful writing. For learning researchers and educators, the revision quality measures will provide detailed information about how students implement formative feedback; few summative or formative assessments currently provide this type of information. For technology researchers, the automated revision scoring will extend prior writing-analysis research in novel ways, e.g., by assessing the quality of revisions between essay drafts and by incorporating alignment with prior formative feedback into the assessment. Multiple types of NLP models will be developed to examine how different model types trade off along evaluation dimensions such as reliability, transparency, and fairness.
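To make the automated revision scoring concrete, the sketch below shows one plausible first step: aligning sentences across two essay drafts and labeling each change as an addition, deletion, or modification, so that downstream models could then score each revision for quality and alignment with prior feedback. The difflib-based alignment, the naive sentence splitter, and all function names here are illustrative assumptions, not the project’s actual pipeline.

```python
# A minimal sketch of revision extraction between two essay drafts.
# Assumptions: difflib sentence alignment, naive sentence splitting,
# hypothetical function names -- not the project's actual method.
import difflib
from typing import List, Tuple


def split_sentences(text: str) -> List[str]:
    # Naive splitter for illustration; a real system would use an NLP library.
    return [s.strip() for s in text.replace("?", ".").replace("!", ".").split(".") if s.strip()]


def extract_revisions(draft1: str, draft2: str) -> List[Tuple[str, str, str]]:
    """Return (label, old_sentence, new_sentence) triples for each change."""
    s1, s2 = split_sentences(draft1), split_sentences(draft2)
    matcher = difflib.SequenceMatcher(a=s1, b=s2)
    revisions = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            continue
        old = s1[i1:i2] or [""]
        new = s2[j1:j2] or [""]
        # Pair changed sentences positionally; leftovers are pure adds/deletes.
        for k in range(max(len(old), len(new))):
            o = old[k] if k < len(old) else ""
            n = new[k] if k < len(new) else ""
            label = "modify" if (o and n) else ("add" if n else "delete")
            revisions.append((label, o, n))
    return revisions


if __name__ == "__main__":
    d1 = "The author argues school uniforms help. This is true."
    d2 = ("The author argues school uniforms help. For example, the text "
          "notes fewer dress-code conflicts. This supports the claim.")
    for label, old, new in extract_revisions(d1, d2):
        print(f"{label:7s} | old: {old!r} | new: {new!r}")
```

A production system would presumably substitute a proper sentence segmenter and a trained classifier for the positional pairing used here; the point of the sketch is only that draft-to-draft alignment yields discrete revision units that can be scored.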

This award reflects NSF’s statutory mission and has been deemed worthy of support through evaluation using the Foundation’s intellectual merit and broader impacts review criteria.
