This ‘Need-to-Know’ blog post series features noteworthy stories that speak of need-to-know developments within higher education and K-12 that have the potential to influence, challenge and/or transform traditional education as we know it.

The lab manual and its iPad version were created by Columbia medical students themselves. The manual is accessed on the iPad during the lab, with the screen covered in plastic. The digital version also includes interactive quizzes.
1) Role Reversal: Students and Teachers
The Wall Street Journal published a report this week, Unleashing Innovation in Education, dedicated to innovative programs and events happening in higher education and K-12. There are numerous articles and videos of interest; the breadth of topics and the variety of media used to report the stories make it a good read. One in particular caught my attention: medical students creating their own textbook, a story that illustrates the changing dynamic between student and teacher.
Here’s the reversal: students are now actors in their learning, not observers, and teachers at times become observers as well as actors. We can see this play out with the young medical students who created a digital textbook/manual, the Columbia Clinical Gross Anatomy Dissection Manual. They saw a need to replace the textbook assigned for their anatomy class, ”Grant’s Dissector”, first published over 60 years ago. The students, not happy with their spiral-bound Grant text, decided to create their own. And they did so over the summer, taking thousands of pictures of the dissection process, with one of their professors advising them along the way. The students used Apple’s iBooks software to create the text and embedded interactive quizzes within it.
Insight: This example showcases the shift that is happening with students and learning. Students not only want to get involved in learning and be part of the curriculum; they expect to be able to do so. The role of the educator shifts in response, evidenced in this example by the professor who guided and advised the students during the process. The role of the professor is not minimized, but it is changing. Instructors now need to adopt a variety of teaching styles in order to adapt to the student and the situation.
- Learning Anatomy in a Digital Age, Jason Bellini, WSJ
2) MOOC Scorecard Says Much
Another article in the WSJ report worthy of review is An Early Report Card on Massive Open Online Courses. It covers the background of MOOCs well: the key players, the platforms, etc. But most telling is the MOOC scorecard graphic, which outlines student demographic data released by Canvas Network, Coursera and Udacity. It’s not a meta-analysis by any means, since the numbers stand alone for each platform, nor is the data statistically significant or generalizable, but there are patterns and themes that stand out. One in particular is the education level of students in the Canvas Network data: over 75% of MOOC students in the sample held at least a four-year degree. This falls in line with other reports I’ve read outlining similar data on specific MOOC courses, which I described in a previous post.
Insight: MOOCs may not be the best format for undergraduate and high school students. They still [in most cases] need guidance, feedback and instruction from a skilled educator. Learning in an open course requires a skill set that undergraduates develop while they are in college. College is where they should learn how to learn, with support and guidance from instructors, whether face-to-face or in small online environments.
- Graphic: An Early Report Card on Massive Open Online Courses, Geoffrey A. Fowler, WSJ

Figure 1 from the Stanford report: a visualization of more than 40,000 student code submissions to a linear regression homework problem. Colours correspond to performance on a battery of unit tests.
3) Robo-Clustering for Student Feedback
Automated essay grading, where a computer scores an essay using programmed software, is controversial; it’s been hotly debated among educators in the blogosphere over the last several months. Here’s another robo-type approach that’s not automated essay grading per se, but automated clustering of assignments, which allows instructors to give feedback to students en masse.
From what I gather from the report by the program’s developers [Stanford researchers], here’s how it works: student assignment submissions are entered into a database and clustered by similar response patterns. The professor then reviews and provides feedback on one assignment from a given cluster; this feedback is then broadcast to the rest of the students in the same cluster (a rough sketch of the idea follows the quote below).
“Clustering submissions along key metrics is a natural way to reduce the amount of work required. The hope is that homework submissions within the same cluster are similar enough, that feedback for one member can be propagated to the rest of the cluster.” (Huang, Piech, Nguyen & Guibas, 2013)
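To make the workflow concrete, here is a minimal sketch in Python of a cluster-then-broadcast feedback loop. It is not the Stanford team’s actual pipeline (their paper builds features from the code submissions themselves); this sketch simply uses hypothetical unit-test pass/fail vectors as the “key metrics” and scikit-learn’s KMeans to group them, with made-up names such as submissions and exemplar_feedback.

```python
# A minimal sketch (not the Stanford pipeline): cluster submissions by their
# unit-test results, let the instructor write feedback on one exemplar per
# cluster, then broadcast that feedback to the rest of the cluster.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: one row per submission, one column per unit test (1 = pass, 0 = fail).
submissions = ["alice", "bob", "carol", "dave", "erin"]
test_results = np.array([
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 1, 1],
])

# Group submissions with similar test outcomes.
n_clusters = 3
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(test_results)

# The instructor reviews one exemplar per cluster and writes feedback on it...
exemplar_feedback = {}
for cluster_id in range(n_clusters):
    members = [s for s, label in zip(submissions, labels) if label == cluster_id]
    exemplar = members[0]  # stands in for the instructor's chosen exemplar
    exemplar_feedback[cluster_id] = f"Feedback written for {exemplar}'s submission"

# ...and that feedback is propagated to every submission in the same cluster.
for student, label in zip(submissions, labels):
    print(f"{student}: {exemplar_feedback[label]}")
```

In the actual report the clustering is over both syntactic and functional similarity of the code; the unit-test-only features above are just the simplest stand-in for the “key metrics” the quote mentions.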
The researchers suggest the approach can be applied to other grading scenarios, e.g. AP exam grading, or brick-and-mortar classes that give the same homework assignments over multiple sessions. I can [sort of] see the value of this program, though it seems a stretch to think it will be applicable to broader education contexts anytime soon. But the data sure makes beautiful artwork (see figure 1).
- Syntactic and Functional Variability of a Million Code Submissions in a Machine Learning MOOC, Jonathan Huang, Chris Piech, Andy Nguyen, and Leonidas Guibas
- Semi-automatic method for grading a million homework assignments, Ben Lorica