Peer Grading: A Student Perspective in an Open and Online Course

In this post I share my peer grading experience as a student in the e-learning and digital cultures course [edcmooc] offered through Coursera. I’ll give readers a window into the student experience: how the process works, the guidelines provided by the instructors, and the assignment criteria. I’ll also share the assignment I submitted for the course and the results—grades and comments provided by the four students who evaluated my digital artefact.

My last post delved into the pedagogy and the learning theories behind the process of peer grading. I thought readers might find it useful to view the experience from the inside, seeing the process as a student would.

Description of Assignment: A Digital Artefact
Within the five-week course, topics included a) what it means to be human in a digital world, b) utopian and dystopian views of our world past, present and future, and c) how learning is influenced by technology in today’s digital culture. There was one assignment for the course, an artefact [artefact is the British spelling of artifact], a digital presentation representing two or more concepts from the course, as described below:

Screen shot: Description of the assignment for #edcmooc from the course website. Below this introduction on the course page were further detailed directions and guidelines, including how long the assignment should be, suggested platforms (e.g. Voicethread, Pixton, Prezi), possible topics, and the assignment criteria, which in turn were used for grading purposes.

I learned far more than I expected from the process of completing the assignment, and from the peer grading exercise itself. It was engaging, quite enjoyable, and if the activity on social networks is any indication, numerous students felt the same way. Discussions on Twitter at #edcmooc were prolific and are still going strong.

Screen shot: Twitter conversation in #edcmooc after the evaluation period.

Student Enthusiasm for Peer Grading
Students appeared highly engaged and excited about the results of the assessments, their own and others’, during the three-day evaluation period. Students shared on the course’s Facebook page, they tweeted, they discussed, and they posted questions seeking advice about grading. Peer grading seemed to be taken quite seriously by active students.

Assignment Criteria
Clear and specific descriptions of class assignments (the why, the how and the purpose) are often neglected in online courses. In my experience working with faculty to design online courses, writing the narrative that covers these points requires time and attention to detail, but it is well worth the effort, and the instructors in #edcmooc followed these principles to a tee. One example is the assignment criteria:

“These are the elements peer markers will be asked to consider as they engage with your artefact. You should make sure you know how your work will be judged by reading these criteria carefully before you begin.

  1. The artefact addresses one or more themes for the course
  2. The artefact suggests that the author understands at least one key concept from the course
  3. The artefact has something to say about digital education
  4. The choice of media is appropriate for the message
  5. The artefact stimulates a reaction in you, as its audience, e.g. emotion, thinking, action” [Coursera, e-learning and digital cultures]

The Grading: How it Worked
The instructions provided on how to grade were thorough, and once I started grading, the system guided me closely through the assignment criteria. From the course website, with screen shots:

What you have to do
“When you have submitted your own artefact, the system will give you access to three other artefacts created by your peers, on which we ask you to provide feedback, and to offer your evaluation. This feedback will take the form of numbers and comments. It will involve the following steps for each”. Following this paragraph were further descriptions of how to make comments (provide reasons) and how to give and receive feedback; the instructions also encouraged further discussion and sharing on social media platforms after the assignment’s close date, and included links for further reading on the peer review process and critical thinking.

Screen shot: The first step in grading, upon reviewing the artefact, was to give feedback according to the criteria for the assignment.
Screen shot: The next step was assigning a grade following the above scale.
Screen shot: In the final step we were given the opportunity to write a synopsis.
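To make the three-step flow concrete, here is a minimal sketch (my own illustration in Python, not anything drawn from Coursera’s actual system) of the record a peer reviewer ends up producing for each artefact: comments against the assignment criteria, a numeric score on the course’s 0–2 scale, and a closing synopsis.

```python
from dataclasses import dataclass, field

# The five assignment criteria, paraphrased from the course page quoted above.
CRITERIA = [
    "Addresses one or more course themes",
    "Shows understanding of at least one key concept",
    "Has something to say about digital education",
    "Choice of media suits the message",
    "Stimulates a reaction in the audience",
]

@dataclass
class PeerEvaluation:
    """One reviewer's evaluation of one artefact (illustrative only)."""
    artefact_id: str
    comments: dict = field(default_factory=dict)  # criterion -> written feedback
    score: int = 0                                # 0, 1 or 2 on the course's scale
    synopsis: str = ""                            # closing summary for the author

# Example usage: one hypothetical evaluation.
review = PeerEvaluation(artefact_id="artefact-123")
review.comments[CRITERIA[0]] = "Clearly tied to the 'being human' theme."
review.score = 2
review.synopsis = "A thoughtful artefact; the imagery supports the argument well."
```

Framing the evaluation this way mirrors how the system walked us through the criteria one at a time before asking for an overall score and a final synopsis.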

My Artefact and Peer Feedback
The artefact I submitted for grading focused on the theme of ‘being human in a digital world’ and included concepts discussed in the course: humanism, posthumanism and transhumanism specifically. I used the platform Pinterest [which I joined some time ago, but didn’t use until this assignment]. I was pleasantly surprised at how effective this tool was; I was able to include a fair bit of text to describe and summarize the concepts alongside image files and embedded YouTube clips.

Screen shot of my Digital Artefact on Pinterest. Click the image to view the board.

Peer Feedback
The quality of feedback I received from the student graders was overall very good. I was impressed with the comments, and with their insight and depth given the assignment criteria. Also of note was how peer #2 mentioned that reviewing the board helped him or her to ‘conceptualize the concepts’. This is an example of how peer grading can enhance learning for students.

Screen Shot: Peer Feedback
Screen shot: Final score out of 2.
Screen shot: Final comments

Conclusion
I wrote in a previous post, A Tale of Two MOOCs @ Coursera, that the e-learning and digital cultures format was an excellent example of a connectivist learning environment: a student-focused learning community where students learn through making connections within a network. The digital artefact assignment and peer grading method were excellent choices in keeping with the connectivist course: students not only made connections within social networks throughout the course, but the peer review process served as a means to further conceptualize learning, expand personal connections beyond the class network, and prompt students to share their work with peers after the formal grading process using their real identities. The value of peer grading in this course went far beyond the grade and feedback each student received on his or her assignment; it created opportunities for learning that traditional grading could never provide. It was a brilliant fit for this course.

Links to #edcmooc Discussions and Final Assignment Sharing

Why and When Peer Grading is Effective for Open and Online Learning

Is peer grading an effective assessment method for open and online learning? What about in MOOCs, where student feedback may be the only means of determining a pass or fail in a course? This post examines peer grading and suggests what conditions must be present in order for peer grading to be effective.

After I wrote the outline for this post I came across an essay by history professor Jonathan Rees, Why Peer Grading Can’t Work. The title stood in stark contrast to my views on peer grading, but I have incorporated Rees’ argument here because it is worth consideration. Rees is also the author of a blog I follow, More or Less Bunk, where he writes about current issues within higher education, often with a slice of sarcasm. Our views on online learning couldn’t be more dissimilar, yet I appreciate Professor Rees’ perspective and enjoy reading his posts. In his essay Rees shares his views on peer grading, which he experienced while taking a world history course as a student through Coursera. Rees’ central argument is that peer grading can’t and won’t be effective in grading written work produced within MOOCs, as the majority of students-as-graders are not able to provide quality feedback that can help students develop their writing and critical thinking skills.

In this post I’ll examine the conditions that need to be present for peer grading to work and the factors that can sabotage the process, and I’ll address the points put forth in Rees’ essay. I’ll also explore briefly what is at the root of the differing views on peer grading, which I suggest comes down to differing learning philosophies [which I wrote about in my last post, The Tale of Two MOOCs: Divided by Pedagogy].

Conflicting Views on How People Learn
At the root of the dissension over peer grading are conflicting views on how people learn. One of Rees’ comments within the essay, “Professors in the trenches tend to hold their monopoly on evaluating their students’ work dearly, since it helps them control the classroom better by reinforcing their power and expertise,” supports a cognitive, instructor-focused learning orientation. The concept of peer review, which for the most part leaves the instructor out of the equation, aligns with the social constructivist learning orientation. There is strong support in constructivist theories for peer review, which is grounded in student-centered learning where students learn as much from the review process itself as from the final grade on an assignment.

A paper on peer review published in 2007 described how the idea of peer review is embedded in the philosophies of learning theorists. The authors call out Vygotsky and his belief that learning occurs in, and is mediated by, social interaction. The authors do present the downsides of the peer review process, though at the conclusion of their research they determined that students involved in peer review perform better academically than peers graded only by their instructors (Lu & Bol, 2007).


Peer Grading @ Coursera
When developing a course there are numerous assessment methods instructors can choose from, yet the choice of tool and method depends upon the learning conditions within the course, which include the planned learning outcomes and the learning environment. An analysis of these conditions determines the best assessment strategy [which may include several methods in one course] for the course and its objectives. Peer grading worked well in the Digital Cultures course I completed recently with Coursera in consideration of the learning context—the environment, topic and goals of the course. Also, the course was not-for-credit and only five weeks in length.

In Rees’ history course, peer grading was used to evaluate essays, which appeared to be the primary method of assessment. Given the topic of Rees’ course, essays could be considered an appropriate assessment mechanism, given the number of students and the availability of software that can facilitate peer grading [see the link at the end of this post on Calibrated Peer Review]. Rees acknowledged that the guidelines on how to grade were clearly outlined, and that the grades he received were accurate; it was the quality of the comments that he felt was lacking:

For me at least, the primary problem with peer grading lay in the comments. While I received five comments on my first essay, for every subsequent essay I received number grades with no comments from a minimum of two peers and as many as four…Every time I did get a comment, no peer ever wrote more than three sentences. And why should they? Comments were anonymous so the hardest part of the evaluative obligation lacked adequate incentive and accountability. (Rees, 2013)

What can Go Wrong: The Loafers and Others
What Rees shares here is a good example of what can go wrong with anonymous peer grading in a massive course. Though the algorithms within a peer grading system may work, the problem lies in the uncontrollable learning conditions inherent to a massive course. One example, outlined in Lu & Bol’s paper quoted earlier, is the phenomenon of social loafing, or hide-in-the-crowd behaviours associated with anonymity. Students who fell into this group were physically and cognitively lazy, not contributing to the process as required. This phenomenon was referenced in several other research studies within the paper. I suggest another group be added to the mix besides the loafers: students who cannot provide feedback because they lack the necessary skills, whether due to educational background or language.

When Peer Grading is Effective
Peer grading has tremendous value for a variety of learning situations in higher education, though it requires a specific set of learning conditions in order to work as intended. Listed below are the conditions needed for peer grading to be effective:

1) When learners are at a similar skill level.
2) When assignments are low stakes [i.e. when a course is taken for professional development or personal interest, as was the Digital Cultures course].
3) Where credit is not granted.
4) When learners are mature, self-directed and motivated.
5) When learners have experience in learning and navigating within a networked setting [if the review is completed in an open and online setting].
6) When learners have a developed set of communication skills.

The breakdown in peer grading occurs when the learning environment cannot provide the conditions mentioned above. There are also other factors that can sabotage its effectiveness, including an assignment that requires a high level of critical thinking skills, or students in the mix who are non-participative or have intentions that don’t align with the course. In my Coursera experience with Digital Cultures, for example, one of the artifacts I was to evaluate in the peer grading process [which was a website] was a marketing pitch. This happened to at least one other student, according to the course Twitter stream.

Closing
Peer grading has great value. It has proven to be effective in a variety of education settings. It can work well in MOOCs that are not for credit, when the assignment lends itself to a peer review, such as the digital artifact we graded in our Digital Cultures course. It can also be very effective in small, closed online classes where students are at a similar skill level and receive instruction and guidance in how to grade within the process. Yet there are times when it won’t work, and this is where I agree with Professor Rees: situations where students need detailed and constructive feedback from a qualified instructor or mentor. Furthermore, there are many students who need remedial support in writing and communication skills; some require support in how to learn online and how to be responsible for their own learning.

Further fine-tuning is needed to address some of these issues within MOOCs. I see the opportunity for two or three tracks within a MOOC for students wanting to participate at varying levels, which would address some of the issues with peer grading by meeting some of the required learning conditions mentioned above. Another suggestion is to offer resources for skill development within the course itself for students requiring help with their writing, e.g. links to Purdue’s writing center. Perhaps following up after the course closes with students who receive a grade below a certain level in the peer review, offering suggestions for additional writing skill development, would also be helpful. There are many options still to be explored. Time will tell.

Update: Response post from Professor Rees on More or Less Bunk, here.

References:

Peer Grading in Online Classes: Does it Work?

Is student grading good enough to use? A loaded question – and though it is context-dependent, the answer is yes. Yesterday I peer-graded six student midterm exams in the Introduction to Sociology course I’m taking through Coursera, along with 30,000 other enrolled students. In this learning environment, also known as a MOOC, peer grading is the only option, though the experience motivated me to research its effectiveness and applicability to online courses for credit (the context for this post). Intrigued, I could see the potential in light of the online program at my workplace, not only from a time-saving standpoint for the instructor, but also for enhancing the student learning experience.

Enhanced Learning
I see the value of peer grading more for what the student gets out of it than for the time it saves the instructor. I experienced this value first hand as mentioned above. The exam in question consisted of two short answer questions, each requiring 250 words, and an essay response requiring 750 words. Though tedious [90 minutes of my undivided attention], I was far more familiar with the course concepts after the exercise; it was well worth the time.

Several of my classmates reported the same phenomenon via the discussion board – the deeper learning experienced while grading their peers. Below are snippets from select posts on the discussion board from the Coursera Introduction to Sociology course:

“…I actually enjoyed the peer review part mostly because of looking at different answers, giving me more perspective from an group of people all from different cultures. Not only that, but I felt I was able to be unbiased while grading my own work ….” Greg

“I’ve learned a lot through the peer assessment, and even though maybe the scores will not be perfect, everybody who goes through that process will have now a better and more complete understanding of the first half of the course.” Horacio

“For what it’s worth, one of the essays I corrected didn’t seem too good on first reading but when I checked it against the rubric, lo and behold, most of what was required was there.” Ron

“In my experience as an educator for over 25 years, it’s often the case that we think our grades should be higher than they are. ….the rubric was great, very thorough and complete…” Kendall

Accuracy
How accurate is peer grading? Another interesting question, and it depends upon the perspective. The viewpoint most educators are familiar with is where the instructor’s grade is the benchmark – his or her grading is the standard for measuring accuracy.

This is the method used in a research study by two biology professors at the University of Washington, which determined that on a per-question basis students were more generous in grading, actually 25% more: “0.27 points—roughly a quarter point on each 2-point question”. However, despite the differing grades, the authors support peer grading and suggest further research be done to examine its value and the role peer grading can play in enhancing student learning (Freeman & Parks, 2010).
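As a back-of-the-envelope illustration of that benchmark approach (not the authors’ actual analysis, and using made-up numbers), the “generosity gap” on a set of 2-point questions could be computed like this:

```python
# Hypothetical paired scores on 2-point questions: instructor benchmark vs. peer average.
instructor_scores = [1.0, 2.0, 1.5, 0.5, 2.0]
peer_scores       = [1.5, 2.0, 1.5, 1.0, 2.0]

# Per-question difference: positive values mean peers graded more generously.
diffs = [p - i for p, i in zip(peer_scores, instructor_scores)]
mean_diff = sum(diffs) / len(diffs)

print(f"Peers awarded {mean_diff:+.2f} points per 2-point question vs. the instructor")
```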

Mechanics
How can student grading be effective within an online environment? Effectiveness depends upon the thoroughness and specificity of the rubric. A grading tool that guides the student through assessment of short answer and essay questions is critical. Below is an example from the research study mentioned earlier [a biology course was used for the study]. There may be five or six of these rubric statements for a given question, depending on the points it is worth.

Sample answer: If the two species mate on different fruits,
then no gene flow occurs and they are reproductively isolated.

Rubric

  • Full credit (2 pts.): Clear articulation of the logic that mating on different fruits reduces or eliminates gene flow—a prerequisite for speciation to occur.
  • Partial credit (1 pt.): Missing or muddy logic with respect to the connection between location of mating and gene flow, or no explanation of why reductions in gene flow are important.
  • No credit (0 pts.): Both components required for full credit missing; no answer; or answer is unintelligible. (Freeman & Parks, 2010)
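For illustration only, a rubric like the one above lends itself to being treated as a simple lookup from points awarded to the criterion that must be satisfied. The sketch below is my own encoding, paraphrasing the published wording, not something distributed with the study.

```python
# Hypothetical encoding of the 2-point rubric above: points awarded -> criterion to check.
speciation_rubric = {
    2: "Clear logic: mating on different fruits reduces or eliminates gene flow, "
       "a prerequisite for speciation.",
    1: "Missing or muddy logic connecting location of mating to gene flow, "
       "or no explanation of why reduced gene flow matters.",
    0: "Both required components missing, no answer, or answer unintelligible.",
}

def criterion_for(points: int) -> str:
    """Return the rubric statement a grader should verify before awarding `points`."""
    return speciation_rubric[points]

print(criterion_for(1))  # shows the partial-credit criterion
```

Stepping a grader through one such lookup per point value is essentially what a well-built peer grading tool does.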

Recommendations
To begin peer grading in an online course consider the following:

  • Create your own rubric that provides standards for each point the question is worth. For instance, if a given question is worth 6 points, 6 statements will need to be developed, similar to the one above.
  • Create detailed instructions for students that clearly outline how peer grading will work.
  • Set up the process so that students grade a minimum of 3 peers’ assignments/exams and self-grade their own.
  • Average the peer scores and include the student’s self-graded assignment (a minimal sketch of this calculation follows the list).
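Here is a minimal sketch of that final calculation, assuming three or more peer scores plus the student’s self-grade on the same scale; the equal weighting of the self-grade is my own placeholder, since the recommendation above does not specify one.

```python
def final_score(peer_scores, self_score):
    """Average a minimum of three peer scores together with the student's self-grade."""
    if len(peer_scores) < 3:
        raise ValueError("Expect at least three peer evaluations")
    all_scores = list(peer_scores) + [self_score]
    return sum(all_scores) / len(all_scores)

# Example: three peers award 8, 9 and 7 out of 10; the student self-grades 9.
print(final_score([8, 9, 7], 9))  # 8.25
```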

Though creating a peer review exercise is time-consuming at the outset, the rewards are tremendous: first, the potential time saved by the course instructor in grading, and second [perhaps most important], the value that peer grading provides for the student.

References:
Freeman, S., & Parks, C. (2010). How accurate is peer grading? CBE—Life Sciences Education, 9, 482–488.

Resources:
Peer Review, Peer Grading, JaZahn
UCLA’s Calibrated Peer Review, Eric Mazur.  This is a software program/platform that can handle and support peer grading for large classes and/or institutions that plan to implement peer grading in several classes.

Online groups – Cooperative or Collaborative?

“Work teams Cooperate; learning teams Collaborate”

What is the difference between collaborating and cooperating? Online communities, and group work in particular, have generated much discussion lately, and I’ve written several posts about group work, peer evaluations and more. Interestingly, though the definitions differ ever so slightly [cooperate: the process of working together to the same end, versus collaborate: to work jointly on an activity to produce or create something], how each is executed in online learning environments differs significantly.

I’ve experienced both as a student in online communities – there is a stark contrast between the two – the process, experience and outcomes were all different. Most group work happening online today is likely cooperative in nature. Cooperative group work is not a negative – essentially students are engaging at a different level of cognitive skills (in the context of Bloom’s Taxonomy). When online groups cooperate they apply, plan and develop. When collaborating, students analyze, synthesize and construct knowledge, and problems are solved collectively. Higher order thinking skills are engaged.

Cooperative

When virtual [online] groups cooperate, it’s a ‘divide and conquer’ approach: each group member is responsible for completing his or her ‘section’, a division that usually involves discussion and negotiation. From that point on, the work is done individually, and an ambitious (and gracious) team member puts the various sections together and attempts to create a common ‘voice’ and consistency.

How do you create Collaborative (or Cooperative) group activities?

As most online educators know, creating virtual teams, placing students into groups within the online learning platform, and providing assignment guidelines does not make cooperation or collaboration happen. From experience, both as a student and as an instructional designer, the type of interaction and learning (and the success) of the group experience depends in large part on the instructional strategy. A good place to start is by asking the question – ‘what learning objective does the assignment need to achieve?’ It is at this level that the instructor determines what kind of activity can be developed, and which approach is most effective in the context of the learner (i.e. level of course, experience with the online format, etc.) and the online environment. Choosing what one wants the student to do to achieve the objective (i.e. synthesize or analyze) drives the instructional strategy, in that the group activity is constructed around the content to be learned or the problem to be solved. See Bloom’s taxonomy below for ‘learning in action’ verbs.

SVG version of http://commons.wikimedia.org/wiki/Image:Bloom%27s_Rose.png by John M. Kennedy T. (Photo credit: Wikipedia)

Can Collaboration work in online environments?

Several educators have suggested that, given the barriers of space and time, collaborative work in groups online is virtually impossible. I disagree: challenging, yes; impossible, no. That being said, according to research it is how the group task is structured, communicated and supported that determines whether collaboration happens and, in turn, whether higher order thinking skills are engaged (Paulus, 2005).

Collaborative learning – closing thoughts…

  • Learning happens in the dialogue, the conversation, the problem solving (or not solving).
  • When groups come together to solve a problem, they need online tools to collaborate – Skype, Google+, Google Docs, Elluminate Live – and they need to be introduced to the tools early in the course and have time to practice with them.
  • Instructor support for students’ ‘dialoguing’ is critical to collaboration – this may mean the professor prompting discussions among groups and/or providing encouragement and further direction to students at the beginning of the group process.

Related Posts
The Difference between Collaboration and Cooperation, antecdote.com
Why we need Group work in Online Learning, onlinelearninginsights
Making Peer Evaluations work in Online Learning, onlinelearninginsights
Teaching and Learning at a Distance, Collaborative vs Cooperative

Reference:
Paulus, T. M. (2005). Collaborative and cooperative approaches to online group work: The impact of task type. Distance Education, 26(1), 111-125. doi:10.1080/01587910500081343

Making peer evaluations work in Online Learning

This is the final post in a 3-part series on group work in online learning communities. Post 1 featured why we need group work in online learning, post 2 covered strategies for making groups work, and in this post we’ll explore assessing group work in an online college-level course.

What are we assessing and Why?
I like to start with the obvious – what are we evaluating and why? Though the ‘why’ seems apparent, on the surface at least – evaluation in a college course, whether online or not, requires an assessment component in order to demonstrate that the student has acquired knowledge in light of the instructional objectives. In this instance we are applying assessment collectively, which adds a layer of complexity. How well did the team project meet the learning goal of the assignment, and what about the individual contributions within it? Do we need to evaluate this dimension? Many educators appear to agree with the ‘peer evaluation’ or ‘peer assessment’ concept for group work. Peer assessment allows group members to provide a score or some kind of measurement of team members’ levels of participation and contribution. Rather than assessing whether the student learned from the assignment or not, this method seems geared to identifying any ‘slackers’, or those who sit on the sidelines through the entire project with minimal contributions.

Peer Evaluation – the ultimate expression of individualism?
Peer evaluation is not a concept exclusive to online learning. Numerous higher education institutions use peer evaluations as a scoring mechanism within the overall grading strategy for a given class involving group work. This sample peer evaluation form is available through Penn State’s World Campus resources, and appears to be used for face-to-face classes on a consistent basis.

I have mixed feelings about peer evaluations, leaning towards not using peer reviews as part of the assessment strategy. I wonder if the concept of peer evaluation is exclusive to higher education institutions in the USA. In considering the theory of collectivism vs. individualism, the US sits at the far end of the individualistic spectrum, which is perhaps why the concept of evaluating one’s peers’ contributions seems to be a ‘given’ in the academic setting. There are many examples of academics supporting the concept of peer evaluation. One academic paper has this to say: “Most group work is assessed by giving every individual the same grade for a team effort. However this approach runs counter to the principles of individual accountability in group learning…. difficult to determine the individual grades for work submitted by the group.” (Lewis, 2006). I respectfully suggest that this professor is missing the point of a collaborative activity. But I digress. Let’s move on to the how-to of assessment for these ‘collective’ assignments.

Grading Strategy
There are 3 main grading strategies for evaluating group assignments in online college classes that I’ve experienced firsthand.

  • Peer Evaluation with team grade. What appears most common is incorporating two grading components – a team grade AND a grade allocated for the peer evaluation, the latter usually accounting for a small percentage of the total assignment. How it works: each group member completes an evaluation of his or her team members, which is then submitted to the instructor. The instructor usually takes the average of the peer evaluations and shares this grade with each team member, which serves as the student’s grade for the peer evaluation portion (a minimal sketch of how the two components might be combined appears after this list). In principle, team members do not see any peer evaluations completed by their peers (though there is a case for sharing these). For example, in one of the classes I am taking now, we have a scoring table where I will evaluate my 3 other group members, and myself. Below is a copy of the actual evaluation that each team member completes:

Of course the point value of the peer evaluation is unique to each situation, as determined by the instructor. My experience, though, is that the points do not motivate the student to participate in the project on the front end; rather, they allow group members to express their dissatisfaction with other members’ lack of participation or cooperation. I do not recommend including an option on the peer evaluation for team members to make comments about their peers. Should team members have negative comments to make about peers, this tool is not a constructive venue. Should negative comments be made on peer reviews about team members, instructors should not share these comments with the group member, but instead hold a Skype meeting or conference call with one or more members if deemed necessary.

  • Team grade only. The second option, which several professors at my workplace use, is to assign a team grade but not to use peer evaluations. Granted, the assignment is small, contributing only 10% towards the final grade; however, the instructor monitors participation by viewing each group’s discussion board within the LMS. In cases with a non-participating group member, he intervenes with an email to the student. Alternatively, he will address the entire class in his weekly professor news posts and remind students about the need for participation. Overall this assignment works well, though perhaps a contributing factor to its success is the size of the groups, which are usually limited to 4 participants and often as small as 3 team members.
  • Self evaluation and team grade. This is my preferred approach. I believe the learner will benefit far more by completing a self evaluation (one well crafted to include focused self-reflection questions) that forces him or her to examine how he or she contributed [or did not] to the group process. The tool also encourages the student to consider the actions he or she demonstrated to support the team and to estimate what percentage of the work he or she contributed to the project. ‘Forcing’ the individual student to assess their own behaviour, as opposed to others’, is more constructive – it supports the aim of developing collaboration skills, along with the knowledge component.
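To make the first strategy concrete, here is a minimal sketch of how a shared team grade and an averaged peer evaluation might be combined into one assignment grade; the 90/10 split and the 100-point scale are placeholders of my own, since the weighting varies by instructor and course.

```python
def assignment_grade(team_grade, peer_eval_scores, team_weight=0.9, peer_weight=0.1):
    """Combine a shared team grade with the average of the peer evaluations a
    student received. Weights and scale are hypothetical placeholders."""
    peer_component = sum(peer_eval_scores) / len(peer_eval_scores)
    return team_weight * team_grade + peer_weight * peer_component

# Example: the team project earned 85/100; three teammates rated this student 90, 80 and 100.
print(assignment_grade(85, [90, 80, 100]))  # 85.5
```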

Evaluating the Team Assignment: Use a Rubric
I barely touched on the use of rubrics, which is the tool I suggest for evaluating the completed team project itself. Effective group collaboration begins with a well-defined assignment that has clear goals and expectations. A well-written rubric not only helps the facilitator score the assignment, it can also greatly increase the quality of and effort put into assignments by giving students clear expectations and the knowledge that must be demonstrated. I could write at great length about rubrics, but other individuals have done a far better job than I ever could. That being said, I have provided links to several resources for creating or adapting your own rubric.

Resources for Rubrics:
http://www.cmu.edu/teaching/designteach/teach/rubrics.html
Blog- e-learning blender – group project design
Click here to download a rubric template from Microsoft

References
Lewis, K. (2006). Evaluation of online group activities: Intra-group member peer evaluation. Retrieved from http://www.uwex.edu/disted/conference/Resource…/45796_2011.pdf

Keep Learning 🙂