Computer-Assisted Peer Assessment
Assessment is fundamental to how we establish what we know in many areas of life. It helps us decide whether a college degree is worth the cost, whether employees are worth their salaries, and whether a scientific paper deserves to be published. Such decisions are based at least partly on feedback provided by people, and advances in computer science and information technology make it far easier to gather and analyze that feedback.
In particular, it is now feasible to combine peer feedback with instructor feedback to assess academic work. Peer review has been used to assess student work for over 40 years, and it has many advantages: students receive feedback more quickly, and in greater quantity, than they would from an instructor alone, and they can get comments on drafts in time to improve the final submission. Research has also shown that students learn from assessing their peers.
Peer-review systems present a number of research challenges. They produce copious amounts of text, which must be summarized for instructors, since no instructor wants to read hundreds of reviews. Students would benefit from seeing peer feedback at the point in their work where it applies, rather than in a few paragraphs at the end. Reviewers and authors could be allowed to comment on each other's comments. Natural-language processing (NLP) techniques could merge the comments of multiple reviewers into a single narrative, while removing pejorative remarks that might hurt the author without offering any advice on how to improve.
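To make this concrete, the following is a minimal sketch of how such merging and filtering might work, using simple keyword heuristics in place of real NLP models. The word lists, function names (merge_reviews, is_constructive), and sample comments are all illustrative assumptions, not drawn from any existing system.

    # Illustrative sketch only: keyword heuristics stand in for the NLP
    # models the research envisions. All names and data are hypothetical.

    PEJORATIVE_TERMS = {"stupid", "lazy", "terrible", "worthless"}
    ADVICE_MARKERS = {"should", "could", "consider", "try", "suggest", "instead"}

    def is_constructive(comment: str) -> bool:
        """Keep a comment unless it is pejorative and offers no advice."""
        words = set(comment.lower().split())
        pejorative = bool(words & PEJORATIVE_TERMS)
        offers_advice = bool(words & ADVICE_MARKERS)
        return offers_advice or not pejorative

    def merge_reviews(reviews: list[list[str]]) -> list[str]:
        """Merge several reviewers' comments into one narrative list,
        dropping duplicates and purely pejorative remarks."""
        merged, seen = [], set()
        for review in reviews:
            for comment in review:
                key = comment.strip().lower()
                if key not in seen and is_constructive(comment):
                    seen.add(key)
                    merged.append(comment.strip())
        return merged

    if __name__ == "__main__":
        reviews = [
            ["The intro is unclear; consider stating the thesis first.",
             "This is a terrible draft."],
            ["The intro is unclear; consider stating the thesis first.",
             "You should cite a source for the 40-year claim."],
        ]
        for line in merge_reviews(reviews):
            print("-", line)

In this toy example, the duplicate comment appears once in the merged output and the purely pejorative remark is dropped, while the comment that offers concrete advice is kept.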
Peer reviewers can improve their feedback if given good guidance. NLP techniques can analyze feedback before it is submitted to determine whether it is relevant to the work being reviewed, whether its tone is positive or negative, and whether it offers enough guidance on how to improve the work. The system can then present this analysis to reviewers so they can revise their reviews before submitting them.
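A rough sketch of these pre-submission checks appears below. It substitutes word overlap and keyword counts for trained NLP models, and every word list, threshold, and function name (review_quality_report) is a hypothetical stand-in rather than the project's actual approach.

    # Illustrative sketch: lightweight heuristics approximate the three checks
    # described above (relevance, tone, guidance). A deployed system would use
    # trained NLP models; these word lists are made-up stand-ins.

    POSITIVE = {"good", "clear", "strong", "well", "nice"}
    NEGATIVE = {"unclear", "weak", "confusing", "missing", "wrong"}
    ADVICE = {"should", "could", "consider", "try", "add", "revise"}

    def review_quality_report(review_text: str, submission_text: str) -> dict:
        review_words = set(review_text.lower().split())
        submission_words = set(submission_text.lower().split())

        # Relevance: fraction of review vocabulary that also appears in the work.
        overlap = len(review_words & submission_words) / max(len(review_words), 1)

        # Tone: balance of positive vs. negative terms.
        pos = len(review_words & POSITIVE)
        neg = len(review_words & NEGATIVE)
        tone = "positive" if pos >= neg else "negative"

        # Guidance: does the review tell the author what to do next?
        offers_guidance = bool(review_words & ADVICE)

        return {"relevance": round(overlap, 2),
                "tone": tone,
                "offers_guidance": offers_guidance}

    if __name__ == "__main__":
        report = review_quality_report(
            "The methodology section is unclear; you should add a diagram.",
            "Our methodology section describes the survey design in detail.")
        print(report)  # shown to the reviewer before the review is submitted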
In large classes and MOOCs, peer review offers the possibility of assigning grades automatically, with little if any instructor intervention. To do this, peer grades need to be more accurate than they are today. We are exploring automated techniques to determine which reviewers are reliable, and to use that information to judge whether peer-assigned grades are accurate. If they are not, extra reviewers can be assigned until a reliable score is reached.
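One way such reliability weighting might look is sketched below, assuming reliability is estimated from each reviewer's agreement with the median peer grade. The weighting scheme and disagreement threshold are invented for illustration and are not the techniques actually under study.

    # Illustrative sketch: estimate reviewer reliability from agreement with
    # the median peer grade, compute a reliability-weighted score, and flag
    # submissions whose reviews disagree too widely for an extra reviewer.
    # The formulas and threshold are hypothetical.

    from statistics import median, pstdev

    def reliability(reviewer_grade: float, peer_grades: list[float]) -> float:
        """Reliability decreases with distance from the median peer grade."""
        return 1.0 / (1.0 + abs(reviewer_grade - median(peer_grades)))

    def aggregate(peer_grades: list[float], disagreement_limit: float = 10.0):
        """Return a reliability-weighted grade and whether another reviewer
        should be assigned."""
        weights = [reliability(g, peer_grades) for g in peer_grades]
        weighted = sum(w * g for w, g in zip(weights, peer_grades)) / sum(weights)
        needs_extra_reviewer = pstdev(peer_grades) > disagreement_limit
        return round(weighted, 1), needs_extra_reviewer

    if __name__ == "__main__":
        print(aggregate([85, 88, 90]))   # close agreement: accept weighted grade
        print(aggregate([60, 88, 95]))   # wide spread: assign another reviewer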
By requiring students to think more deeply about their work, and by delivering copious and timely feedback, peer assessment promises to improve students' educational experience. Moreover, the lessons learned in the educational domain can be applied wherever peer feedback is used.
Other Research Areas
Dr. Gehringer also has research interests in refactoring and object-oriented design, and has done recent work on architectural support for memory management.