Is peer grading an effective assessment method for open and online learning? What about in MOOCs, where student feedback may be the only means of determining a pass or fail in a course? This post examines peer grading and suggests what conditions must be present for peer grading to be effective.
After I wrote the outline for this post I came across an essay by history professor Jonathan Rees, Why Peer Grading Can’t Work. The title was in stark contrast to my views on peer grading, but I incorporated Rees’ argument here as it is worth consideration. Rees is also the author of a blog I follow, More or Less Bunk, where he writes about current issues within higher education, often with a slice of sarcasm. Our views on online learning couldn’t be more dissimilar, yet I appreciate Professor Rees’ perspective and enjoy reading his posts. In his essay Rees shares his views on peer grading, which he experienced while taking a world history course as a student through Coursera. Rees’ central argument is that peer grading can’t and won’t be effective in grading written work produced within MOOCs, because the majority of students-as-graders are not able to provide quality feedback that can help fellow students develop their writing and critical thinking skills.
In this post I’ll examine the conditions that need to be present for peer grading to work, the factors that can sabotage the process, and the points put forth in Rees’ essay. I’ll also briefly explore what is at the root of the differing views on peer grading, which I suggest comes down to differing learning philosophies [which I wrote about in my last post, The Tale of Two MOOCs: Divided by Pedagogy].
Conflicting Views on How People Learn
At the root of the dissension over peer grading are conflicting views on how people learn. One of Rees’ comments in the essay, “Professors in the trenches tend to hold their monopoly on evaluating their students’ work dearly, since it helps them control the classroom better by reinforcing their power and expertise,” reflects a cognitive, instructor-focused learning orientation. The concept of peer review, which for the most part leaves the instructor out of the equation, aligns with the social constructivist learning orientation. There is strong support in constructivist theories for peer review, which is grounded in student-centered learning, where students learn as much from the review process itself as from the final grade on an assignment.
A paper on peer review published in 2007 described how the idea of peer review is embedded in the philosophies of learning theorists. The authors call out Vygotsky and his belief that learning occurs in, and is mediated by, social interaction. The authors do present the downsides of the peer review process, though at the conclusion of their research they determined that students involved in peer review perform better academically than peers graded only by their instructors (Lu & Bol, 2007).
Peer Grading @ Coursera
When developing a course there are numerous assessment methods instructors can choose from, yet the choice of tool and method depends upon the learning conditions within the course, including the planned learning outcomes and the learning environment. An analysis of these conditions determines the best assessment strategy [which may include several methods in one course] for the course and its objectives. Peer grading worked well in the Digital Cultures course I completed recently with Coursera, in consideration of the learning context: the environment, topic and goals of the course. Also, the course was not-for-credit and only five weeks in length.
In Rees’ history course, peer grading was used to evaluate essays, which appeared to be the primary method of assessment. Given the topic of Rees’ course, essays could be considered an appropriate assessment mechanism, considering the number of students and the software available to facilitate peer grading [see the link at the end of this post on Calibrated Peer Review]. Rees admitted the guidelines for how to grade were clearly outlined, and that the grades he received were accurate; it was the quality of the comments that he felt was lacking:
For me at least, the primary problem with peer grading lay in the comments. While I received five comments on my first essay, for every subsequent essay I received number grades with no comments from a minimum of two peers and as many as four…Every time I did get a comment, no peer ever wrote more than three sentences. And why should they? Comments were anonymous so the hardest part of the evaluative obligation lacked adequate incentive and accountability. (Rees, 2013)
What Can Go Wrong: The Loafers and Others
What Rees shares here is a good example of what can go wrong with anonymous peer grading in a massive course. Though the algorithms within a peer grading system may work, the problem lies in the uncontrollable learning conditions inherent to a massive course. One example is outlined in Lu & Bol’s paper quoted earlier: the phenomenon of social loafing, or hide-in-the-crowd behaviours associated with anonymity. Students who fell into this group were physically and cognitively lazy, not contributing to the process as required. This phenomenon was referenced in several other research studies within the paper. I suggest another group be added to the mix besides the loafers: students who cannot provide feedback because they lack the necessary skills, whether in educational background or language.
When Peer Grading is Effective
Peer grading has tremendous value for a variety of learning situations in higher education, though it requires a specific set of learning conditions to be present in order to work as intended. Listed below are the conditions needed to ensure that peer grading is effective:
1) When learners are at a similar skill level.
2) When assignments are low stakes [i.e. when a course is taken for professional development or personal interest, as was the Digital Cultures course].
3) Where credit is not granted.
4) When learners are mature, self-directed and motivated.
5) When learners have experience in learning and navigating within a networked setting [if the review is completed in an open and online setting].
6) When learners have a developed set of communication skills.
The breakdown in peer grading occurs when the learning environment cannot provide the conditions mentioned above. There are also other factors that can sabotage its effectiveness, including an assignment that requires a high level of critical thinking skills, or students in the mix who are non-participative or have intentions that don’t align with the course. In my Coursera experience with Digital Cultures, for example, one of the artifacts I was to evaluate in the peer grading process [a website] was a marketing pitch. This happened to at least one other student, according to the course Twitter stream.
Peer grading has great value. It has proven to be effective in a variety of education settings. It can work well in MOOCs that are not for credit, when the assignment lends itself to a peer review, such as the digital artifact we graded in our Digital Cultures course. It can also be very effective in small, closed online classes where students are at a similar skill level and receive instruction and guidance in how to grade within the process. Yet there are times when it won’t work, and this is where I agree with Professor Rees: in situations where students need detailed and constructive feedback from a qualified instructor or mentor. Furthermore, there are many students who need remedial support in writing and communication skills; some require support in how to learn online and how to be responsible for their own learning.
Further fine-tuning is needed to address some of these issues within MOOCs. I see the opportunity for two or three tracks within a MOOC for students wanting to participate at varying levels, which would address some of the issues with peer grading by providing some of the required learning conditions mentioned above. Another suggestion is to offer resources for skill development within the course itself for students requiring help with their writing skills, e.g. links to Purdue’s writing center. Perhaps it would also be helpful to follow up, after the course closes, with students who receive a grade below a certain level in the peer review, offering suggestions for additional writing skill development. There are many options still to be explored. Time will tell.
Update: Response post from Professor Rees on More or Less Bunk, here.
- Rees, J. (2013). Peer Grading Can’t Work. Inside Higher Ed
- Pedagogical Foundations, Coursera
- Lu & Bol (2007). A Comparison of Anonymous Versus Identifiable e-Peer Review on College Student Writing Performance and the Extent of Critical Feedback. Journal of Interactive Online Learning
- Calibrated Peer Review, The Regents of the University of California
- SWoRD™ Peer Review, Panther Learning