MOOCs, despite what the critics say, have transformed higher education. They have spawned new vehicles for online learning, reaching groups of learners who want and need an alternative to traditional education. Thanks to the MOOC format, learners now have numerous pathways to further their education. Granted, not all MOOC programs meet the definition of open, but as programs expand (certificate-granting MOOCs, MOOCs for college credit, professional development MOOCs, and others), there is a pressing need for benchmarks of quality.
This need is more apparent after the recent publication of two reports: Babson’s thirteenth annual “Online Report Card” and “In search of quality: Using Quality Matters to analyze the quality of massive, open, online courses (MOOCs)” (Allen, Seaman, Poulin & Straut, 2016; Lowenthal & Hodges, 2015).
The Babson report devotes (only) two of its sixty-plus pages to MOOCs, yet what’s most telling is that the report is in its final year of publication. Though there are a variety of contributing factors, a compelling one is described in the report’s introduction: “distance education is clearly becoming mainstream” (p. 3). In other words, online education is growing up; ‘online learning’ is simply becoming ‘learning’. The report outlines the number of organizations dedicated to online education that report on and address issues specific to online learning. Many address quality standards as they relate to developing and delivering online programs, as is the case with the Quality Matters rubric, the Online Learning Consortium’s (OLC) Five Pillars, and California State University Chico’s rubric for online instruction, yet all fall short in specifying standards for MOOCs.
This void is a concern given the significant number of students learning in the MOOC format. Estimates are in the range of millions—one source states there were 35 million enrollments in 2015 (“MOOC Enrolment”, 2016). Given these numbers, the question is: how will MOOC learning be advanced and improved if MOOC quality isn’t addressed by the organizations involved in online education? Researchers Lowenthal and Hodges bring some of these issues forward in their paper, “In search of quality”. They apply the Quality Matters™ rubric to six MOOCs offered by three providers: Coursera, edX and Udacity:
The six identified MOOCs were analyzed using the Quality Matters Rubric Standards with Assigned Point Values, which involves a type of content analysis by three different reviewers using a standard coding scheme. [Quality Matters] QM has a rubric for Continuing and Professional Development that would be appropriate to use on MOOCs (Adair et al., 2014). However, we intentionally chose to use QM’s higher education rubric rather than the continuing and professional development focused rubric because of the increased initiatives about offering college credit for MOOC completion. In other words, a MOOC should score as well as a traditional online course if it is going to be worth college credit. (Lowenthal & Hodges, 2015)
Not surprisingly, after the QM peer-review assessment, all six MOOCs failed to meet QM’s passing grade of 85%. The QM rubric consists of a set of standards grouped into eight dimensions (below); in the study, most MOOCs failed in two dimensions, #5 and #7.
Course overview and introduction
Learning objectives
Assessment and measurement
Instructional materials
Learner interaction and engagement
Course technology
Learner support
Accessibility (Quality Matters, 2014)
The apparent failure of the MOOCs in this study may give fodder to MOOC critics, yet I suggest the failure stems not from the MOOCs, but from: 1) applying a tool (the QM rubric) to a MOOC, which inherently serves a variety of learning purposes and needs (not just credit, but professional development, personal interest, etc.), and 2) assessing a MOOC on dimensions such as ‘learner interaction and engagement’ and ‘learner support’, which don’t make sense in the context of a MOOC, at least at the level the QM standards articulate. Considering the massive component of MOOCs, facilitating structured, mandatory engagement and active learning is next to impossible. Furthermore, since MOOC students are able to choose their level of engagement based upon their learning needs, including this as a standard doesn’t fit the intent of the course.
The study acknowledges many of these points, and serves as a vehicle for discussion about applying quality standards to courses that follow the MOOC format. The authors also highlight a critical point: if the MOOC format is used as a vehicle for granting college credit, as it appears to be, quality benchmarks are essential.
A unique approach to quality assessment (and course design) is needed; one that heeds the needs of learners, accounts for the constraints and advantages of the delivery platform, and ensures a quality learning experience. Going further, I also suggest that before establishing quality standards, institutions would do well to first identify the primary purpose and intent of the MOOC. Categorizing a MOOC based on its purpose, then establishing quality standards, is a good place to start.
Lowenthal, P. R., & Hodges, C. B. (2015). In search of quality: Using Quality Matters to analyze the quality of massive, open, online courses (MOOCs). The International Review of Research in Open and Distributed Learning, 15(5). Retrieved from http://www.irrodl.org/index.php/irrodl/article/view/2348/3411
I recently listened to a lecture, ‘The importance of place’, from a Great Courses series on cultural and human geography. The professor discussed the process of ‘place-making’ that occurs when humans interact with, and modify, a physical space to make it their own. The result is a distinctive place with a unique culture shaped by the people within it. This got me thinking about ‘place’ in terms of online education. Do students view an online class as a ‘place’? If they do, how can educators make an online space into a place for learning?
Place, in the context of human geography, has two elements—the physical characteristics of the natural environment, and the human influences: ideas, interactions and interventions. There are several books on the concept of place, including “Place: A Short Introduction” by Tim Cresswell, author and professor of human geography. Cresswell describes place not only as a location, but as a physical space where humans produce and consume meaning (2004). In “Space and Place: The Perspective of Experience”, Yi-Fu Tuan, one of the first scholars in the discipline of human geography, describes place and space as a relationship that includes the feelings and thoughts humans associate with each (2001).
Why Online Students Need a Place
When examining face-to-face education settings, we get a sense of how place influences learning. Inherent to education is the idea of going to a place to learn—a place to study, to meet with classmates, to lecture, to engage in discussion, to do research. And a place’s physical aspects—the decor of a room, the temperature, WiFi availability, space constraints, available resources, even furniture—can influence students’ motivation, their effectiveness, even their willingness to learn. So can the human aspects associated with a learning place: interactions, discussions, institutional processes, etc. In my experience it’s these human aspects that are most often associated with online education. Yet a sense of place that addresses the physical aspects of an online course is more elusive. Elusive—but necessary.
How Do Educators Make a Place Online?
Few resources exist for educators wanting to learn about place-making in an online class. The closest I’ve found is the Community of Inquiry (CoI) model, a framework for describing presence online. The CoI model encompasses three dimensions of presence in online learning spaces: teaching, social and cognitive presence. The model suggests that effective education experiences occur when the three dimensions coincide. This is a good starting point for place-making; it addresses the human aspect through the three dimensions. Yet as mentioned, addressing the physical dimensions of place-making online is more challenging.
Example 1: Creating Dynamic Online Course Sites with Digital Tools
Effective use of images and media can contribute to place-making. An experienced online educator, Laura Gibbs (@onlinecrslady), maintains a dynamic course site for each of her online classes (Mythology and Folklore) using embedded images and content representing students’ course work from various digital platforms. Not only do students get to see their work featured on the course site, but the course becomes a dynamic space for learning. When students log on to the course site, the most recent student assignment is featured (screenshot below), which students can access via an active link. Gibbs’ use of media tools (Rotate Content, inoreader widgets, etc.) and platforms (Twitter, Pinterest, Blogger) supports place-making by creating a learning space that has meaning and purpose for students. Students shape the space through their contributions, interactions and involvement—making it a ‘place’ for their learning.
Using Twitter as a tool to contribute to place-making in an online course.
Example 2: The Student Lounge
Some online courses feature a ‘student lounge’, usually a discussion forum or chat room within the course site labeled as such. In my experience these lounges typically aren’t used, usually because there’s little invitation or reason to get involved—the lounge is just there. Yet there are ways instructors can make these spaces inviting and dynamic, and lay the foundation for place-making. As we saw in the previous example, it takes a focused effort by the instructor to support place-making. Below are ideas for creating a student lounge from high school teacher Heather Wolpert-Gawron, outlined in a blog post:
It is not hard to create a virtual student lounge in an online classroom. Think about what a student union provides and go from there. Make sure that students go there first, familiarize themselves with the space, and are encouraged to return to it time and again to recharge their social batteries. It should be a place where everyone can hang out and share, [by] posting, uploading and downloading. You can also create a “water cooler” thread that serves as just a fun way to build your community, introducing students to each other as members of a Virtual Learning Community (VLC.)…For instance, you might want to set up a space or thread that:
Links to resources or even a gift shop of materials and supplies that students can buy or access in order to supplement their learning
Allows students to share thoughts, writings, and videos
Serves as an “information desk” so students can ask questions about the classroom and/or the content
Acts as an “arcade” of links to content-related games and entertainment
Promotes and celebrates learners in the community for their academic or personal accomplishments both online and offline
The idea of ‘place’ in online education is an interesting one; it’s another way of thinking about and approaching online education. Place-making in online courses could be one way to engage and motivate students, and to make learning relevant, meaningful and effective in our digital world. If you have ideas for place-making in online courses, readers would be interested—please share!
Cresswell, T. (2004). Place: A short introduction. Malden, MA: Blackwell Publishing.
Robbins, P. (2014). The importance of place. Lecture presented in Understanding cultural and human geography. Chantilly, VA: The Great Courses.
Tuan, Y. (2001). Space and place: The perspective of experience. Minneapolis, MN: University of Minnesota Press.
The view that online education is “just as good as” face-to-face instruction was not widely held in 2003: 42.8% of chief academic officers reported that they considered the learning outcomes for online instruction to be inferior to face-to-face instruction. The view of online quality has improved over time. However, results for 2013 revealed a partial retreat in faculty perceptions of online learning providing quality learning experiences. The 2014 results indicate that the retreat continues—there’s an increase in faculty who perceive online education as inferior. — Grade Level: Tracking Online Education in the United States, 2015
One of the main criticisms of online courses is that they are of poor quality, as revealed in the annual Babson study mentioned in the opening. Faculty’s positive perception of online learning declined in 2013 and 2014 (Allen & Seaman, 2015). Face-to-face courses appear to be the benchmark for quality in higher education. Yet this doesn’t seem fitting given the ongoing, often heated public dialogue about the quality of higher education programs and the lack of consensus on what quality is. In this blog post I suggest that online educators can and should tackle the quality issue in their own courses, and that they do so by assessing each course holistically. A holistic approach encompasses elements such as students’ perspectives, results over a period of time, artifacts created during learning, and the instructor’s course experience.
I also review recent research on quality assessment specific to online courses, and examine existing frameworks and rubrics for online course assessment, explaining why, even if an institution follows such standards, these are only starting points. I then outline five steps instructors can follow to assess whether a course is ‘good’—an assessment of quality that considers foundational elements, student perspectives, course artifacts, and the student and instructor learning experiences.
What is Course Quality?
Until a few years ago, ‘quality’ in higher education was measured by a course’s content, pedagogy and learning outcomes (Bremer, 2012). This approach has shifted to a process-oriented system in which a combination of activities contributing to the education experience is considered: student needs, use of data and information for decision-making, department contributions, and improved learning outcomes (Thair, Garnett, & King, 2006). This holistic approach to evaluating education experiences is often applied to the development and assessment of online learning, for example in the Online Learning Consortium’s Five Pillars of Quality Online Education (below) and the Quality Matters (QM) rubric.
Why Assessing Quality is Difficult in Online Education
Yet there are challenges associated with setting universal quality standards for online education, and though standards are a starting point, a thorough quality assessment requires ongoing consideration of numerous elements, some of which play out over a period of time. Key challenges with assessing quality through set standards are outlined in ‘What is online course quality’ and include: 1) the lack of an authoritative body (able and willing) to set minimum standards across all states and their accrediting bodies, 2) the difficulty of creating a comprehensive evaluative tool that addresses the complexities of online courses, and 3) the implementation process itself, given the significant resources required for an institution-wide evaluation process (Thompson, n.d.).
Limitations of Quality Assessments
There are other limitations. Some assessments are inherently limiting, with a prescriptive set of standards that may not fit all contexts. Another is the tendency to establish a minimum level of quality, or ‘baseline standards’, which limits innovation and creativity (Misut & Pribilova, 2015). Most course assessments are done at a point in time and are unable to capture dimensions over the life of a course and post-course, dimensions that include student perceptions collected as formative feedback (midway through the course) and in end-of-course surveys. Furthermore, quality assessments frequently focus on course/instructional design and fail to include the learning experiences of the instructor and students.
What’s involved In a Good Course Assessment?
A holistic assessment goes beyond course design; it acknowledges the nuances that make a course unique, including input and contributions from students, developments in the field of study, and current events. Most valuable are students’ perceptions of their learning and of the course experience. A good course assessment considers the course over a period of time, and considers interactions between instructor and students, and among students, all of which create artifacts that can be studied and analyzed (Thompson, 2005). Artifacts might include emails or forum posts of student questions, dialogue within forums, feedback from group interaction, end-of-course student surveys, LMS reports on student interaction patterns, student assignment results, and more. Course artifacts give valuable clues to a course’s quality, more so when collected from two or more course iterations and analyzed collectively.
Other elements to consider:
♦ Student behaviours, including questions asked in forums, emails, interaction patterns within the LMS, interaction with resources, and participation patterns within discussion forums and social platforms designated for the course
♦ Student perceptions, evaluated through questionnaires, formative course feedback, post-course questionnaires, and one-on-one interactions
♦ Knowledge creation/transfer by students, evaluated through assignment analysis, course artifacts, and post-course surveys
♦ Course design, as per a rubric/assessment tool
♦ Use of current technology tools and platforms
♦ Course data and artifacts from two or more sessions, analyzed and compared
♦ Quantity and type of interaction between students and instructor
Five-Steps to Assessing Online Course Quality
1) Assess Using a Rubric or Other Tool to Consider Basic Course Elements
Assess the course using the tool or framework employed by your institution, e.g. the Quality Matters rubric. If your institution does not have a tool in place, I recommend the rubric created by California State University Chico, which covers six domains. The rubric (embedded below) is free to use and download under the Creative Commons Attribution 3.0 United States License.
* Thanks to a reader’s comment — there is an updated version of the Chico rubric in a checklist format with additional dimensions, similar to the Quality Matters rubric. I prefer the version embedded here — it’s more approachable, being less lengthy and rigid. Link here to the updated version.
2) Analyze Course from a Student Perspective
This is perhaps the most difficult yet most useful element for improving course quality. There are a variety of ways to consider students’ perspectives, several already mentioned. Other recommendations: take an online course as a student (e.g. a MOOC) in a topic you aren’t familiar with; it provides an eye-opening view of how it feels to be an online student. Another method is to ask a colleague from another department to review your course and provide constructive feedback.
3) Assess Course Artifacts, Materials, & Feedback
Another useful exercise is analyzing course artifacts. Analyzing student feedback gathered via a questionnaire midway through the course is helpful. If a course is offered more than once, compare data across course iterations. Consider: is student feedback incorporated into subsequent runs of the course? What about student-generated content? All artifacts and materials associated with a course are valuable material for assessing its quality.
4) Consider Level and Type of Student-to-Student and Student-to-Instructor Interactions
Interaction is critical to an online course; students who feel connected and establish themselves as individuals within an online course are likely to have higher levels of motivation and learning satisfaction than those who don’t. Consider the forums, the interactive assignments where students can participate, the social exchanges within course-associated platforms, and other places for interaction. An example of assignments that encourage student feedback and involvement, leading to high levels of engagement, can be found on online instructor Laura Gibbs’ course site here. Also consider the Community of Inquiry model for the types of interactions in an online course that lead to positive learning experiences.
5) Results: Are Students Learning?
Evidence of learning is the most important assessment dimension, yet it is nearly impossible for a standardized quality assessment tool to evaluate. One could argue that before-and-after quizzes within a course can evaluate learning. I suggest that the instructor is able to assess at a deeper level whether or not learning occurred, and can determine the level of critical thinking. This can be done only when assignments demand that students demonstrate what they know and apply course concepts. Assignments that draw out students’ thinking, whether through dialogue or written work, allow the instructor to evaluate learning effectively. There’s no formula for this fifth step; it is an example of customized course evaluation. But I suggest instructors evaluate student artifacts from one course to the next and consider what students learned and how well they articulated it. There may be opportunities for revising assignments, activities or other course dimensions.
Assessing quality in online courses is complex, as we’ve seen here, yet addressing quality is critical: to advance the positive perception of online education, for one, but more importantly to provide learning and teaching experiences that are rewarding, rich and meaningful. Quality assessment can start one course at a time, and who better to do it than the course instructor?
How much time does it take to teach an online course? Does teaching online take more or less time than teaching face-to-face? How much time does it take instructors to develop an online course? — Instructor Time Requirements to Develop and Teach Online Courses (Freeman, 2015)
A study released in March of this year set out to answer these burning questions, ones most online educators would like answers to. There’s considerable anecdotal evidence on both sides of whether facilitating an online course takes more or less time than a comparable face-to-face course. The purpose of this study was to nail down the facts: to measure the perceptions of, and actual time spent, developing and teaching online courses. The findings are significant for institutions and educators involved in online education for several reasons. Professional development, for one: the report reveals areas where survey respondents struggled during the course development phase, and where the majority of time was spent when facilitating (the conclusions are surprising). Secondly, the results may be helpful for institutions considering compensation and work-allocation models; institutions can use the results as a benchmark, and at the very least the study may act as a catalyst for constructive conversations about compensation and support for online course development and facilitation. And finally, it may help online instructors gain insight into their own teaching by considering the experiences of other educators who have taught both face-to-face and online courses.
This post highlights the findings and suggests factors for educators to consider regarding: 1) the time spent developing online versus face-to-face courses, and 2) how much time is invested in online facilitation, and how it compares to face-to-face instruction.
To put the results into context: the survey gathered data from 68 instructors, of 165 solicited, from three universities across eight academic disciplines. Each respondent had developed an average of 2.13 online courses, had taught an average of 2.03 online courses, and had been teaching at the university level for an average of 14.2 years (Freeman, 2015).
1) Course Development Time: Pedagogical Learning Curve Steepest
Survey results confirmed that developing online courses is indeed more time-consuming than developing face-to-face courses, though the time required declines when the same instructor develops a second or third online course. Twenty-nine percent of respondents indicated they spent over 100 hours (median of 70 hours) developing their first online course. This significant number of hours is likely due to the fact that 59% of respondents developed over 90% of the course without assistance, including developing content, assessments and assignments, plus the time associated with course design. The other 41% received course design support from instructional designer(s) and/or used ready-made content available through textbook publishers. Also significant: the technological learning curve was found to be shorter than the pedagogical learning curve. In other words, instructors required more time to determine how to implement pedagogical methods, create learning experiences, and deliver content appropriate for the online format than they did learning the features and nuances of the technology used to deliver the course. The learning curve is described as the time it takes to “get used to” the course elements [platform, tech features] and/or the method of teaching.
Developing a quality online course is complex because technology adds yet another layer to course design, one that requires a unique skill set. In addition, there is an interdependent relationship between technology and pedagogy specific to online courses; for instance, the features of an LMS platform will determine and shape the course and the teaching methods. Take the discussion forum as an example: the flexibility of the forum feature (how easily the course designer can set it up for group assignments, and whether it can support the communication and collaboration an assignment requires) will dictate how effectively the ‘method’ is executed in the course.
Online course design requires a breadth of skills that includes technical knowledge: not only familiarity with LMS features, but also outside tools, including social media platforms that can enhance student learning. Knowledge of user-focused design, or web design principles, is also critical to delivering an intuitive learning experience for students (How Five Web Design Principles Can Boost Student Learning). Second are the pedagogical methods: how learning is sequenced, framed and presented to students. This array of skills is far beyond the scope of most faculty, who are experts in their field of study, not necessarily in course design. Realistically, creating an online course requires two or more individuals with specific skill sets working together to develop an engaging, intuitive and quality learning experience.
The onus is on institutions to provide not only professional development for faculty in course design principles and strategies, but also support in the technical and pedagogical aspects.
2) BIG Time Commitment Facilitating First Online Course — Levels Off After 2nd Time, But Grading Involves More Time Investment
Though respondents in the survey originally perceived that teaching online took more time than teaching face-to-face, by the third time facilitating a course they reported that it took about the same amount of time as a similar face-to-face course.
There is supporting evidence for the earlier finding that teaching an online course the second and third time becomes about as time-consuming as teaching a face-to-face course the second and third time. The factors that remain more time-consuming for online teaching compared with face-to-face teaching, even after teaching the course three times, are Instructor-Student Interaction and Grading & Assessment, the two factors that cannot be prepared in advance for online courses (unlike Content Development and Pre-Semester Setup).
Sixty-nine percent of survey respondents indicated that it took ‘much more’ or ‘more’ time to facilitate an online class the first time. Yet by the third time, this dropped to 25% in these same categories (table 4 below), which supports the learning curve theory. These findings suggest it is important for both the instructor and the institution to acknowledge that more of the instructor’s time will be required the first, and even the second, time facilitating a course. They also suggest that professional development is needed for instructors: development focused on facilitation skills specific to the uniqueness of online instruction. Such training can reduce the learning curve, reinforce effective skills and best practices, and promote efficient use of time.
A startling (and significant) finding of this study is the time dedicated to grading and assessing online students. It appears that the time dedicated to grading students’ work actually increased from the first to the third time facilitating an online course (table 4). Two-thirds of respondents indicated that by the third time, it took ‘somewhat more’, ‘more’ or ‘much more’ time to grade and assess students in an online course than face-to-face. I find these results encouraging, since an instructor’s feedback on students’ work is a critical component that can motivate students, deepen their knowledge and push them to think critically (Getzlaf et al., 2009). The implication is that skill development in this area is needed; it will benefit not only students but also help instructors provide feedback more efficiently. Several technology tools and applications can help instructors achieve efficiency and deliver meaningful, quality feedback (Morrison, 2014). Again, professional development in grading and assessment is needed to support instructors in their efforts.
By no means is this study the definitive answer on the time requirements for developing and facilitating online courses, but it is an excellent starting point for conversations about ‘time’ needed to create quality online learning experiences.
“Our team realized quickly that we needed to do a better job cross-referencing material on our course site. For example, if we mention the syllabus, we must link to it. Some students, we have learned, want a great deal of guidance.” — MOOC instructor Karen Head (2013)
In the quote above, without realizing it, the instructor was referring to the concept of ‘user experience’. And it’s not guidance students wanted so much as an intuitive learning experience. Creating a user-friendly course site begins with incorporating web design principles. Even the most basic principles, customized to online course design, reduce barriers associated with virtual learning by minimizing distractions, highlighting concepts and making resources readily accessible. Embedding a link into the phrase ‘assignment guidelines’, for instance, wherever the assignment is referred to within a course page, is an example of making resources readily available (if the assignment guidelines are within the syllabus, refer students to the page number). This reduces the amount of time students spend searching and frees up time for learning.
The challenge of designing online courses is not only pedagogical but also technical, which is the category ‘usability’ falls under. We are at the point with online learning where pedagogy and technology are interdependent; a well-designed, user-friendly course with a clear learning path needs to adhere to technical principles as well as pedagogical ones. Technology is a new form of pedagogy: the course site design, how content is presented, is an aspect of online pedagogy. In this post I cover five principles of web design that are essential to online course design.
Retail sites frequently adhere to best practices for web design given customers (users) are more likely to spend time and money on an attractive, intuitive website. I suggest educators use similar web design principles to support their students.
Before we examine the principles, a definition of user experience (UX) is in order. There are numerous definitions of user experience, but the one below, specific to web design, incorporates key elements of the entire experience:
“User experience (UX) is about how a person feels about using a system. User experience highlights the experiential, affective, meaningful and valuable aspects of human-computer interaction (HCI) …. It also covers a person’s perceptions of the practical aspects such as utility, ease of use and efficiency of the system. User experience is subjective in nature, because it is about an individual’s performance, feelings and thoughts about the system. User experience is dynamic, because it changes over time as the circumstances change…” All About UX
Five Principles of Web Design Applicable to Online Course Design
1. Design for the user
This seems obvious: design a course from the student’s perspective. Yet it’s an atypical approach. When designing a web page for a course site, always ask ‘how will this look to the student?’ Anyone involved in online course design should take an online course as a student. Completing even one week of course work in a MOOC, for example, gives an entirely different perspective on course design, guaranteed. Design the course from the student’s perspective, always.
2. Consistency

For online courses, consistency is probably the most under-utilized principle, specifically in how resources are titled, labeled and placed within the course site. I’ve taken many courses where the same resource, an article for instance, is referred to by two different names: titled one way in the syllabus and another way in the course site. Confusing. The same goes for assignments; calling an assignment by slightly different names, even by one word, suggests there are two assignments, not one. Likewise, posting the same document in two different locations within the site suggests there are two different documents. The time students spend searching, checking and comparing is valuable learning time spent on logistics. Consistency is key.
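The "same resource, two names" problem lends itself to a simple automated check. As a hedged sketch (the title list and the 0.8 similarity cutoff are illustrative assumptions, not a standard), near-duplicate resource titles can be flagged with the standard library:

```python
# Hypothetical sketch: flag resource titles that are suspiciously similar
# but not identical, which often means one resource renamed inconsistently.
import difflib
from itertools import combinations

def near_duplicates(titles, cutoff=0.8):
    """Return pairs of titles likely to be the same resource renamed."""
    flagged = []
    for a, b in combinations(titles, 2):
        ratio = difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
        # >= cutoff: very similar; < 1.0: not an exact duplicate
        if cutoff <= ratio < 1.0:
            flagged.append((a, b))
    return flagged

course_titles = [
    "Week 2 Reading: Digital Cultures",
    "Week 2 Readings: Digital Culture",   # same article, renamed
    "Final Exam",
]
print(near_duplicates(course_titles))
```

Run against the syllabus and the course site together, a check like this surfaces the one-word discrepancies students otherwise burn time reconciling.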
3. White Space
Effective use of white space emphasizes key concepts, improves comprehension (by up to 20%) and reduces cognitive overload (Lin, 2004). White space is the part of a web page left blank or unmarked: the space between columns, text, images, and margins on the page. This space gives the reader visual relief and improves readability. Avoid big blocks of text; break them up with a graphic, a block of white space or increased line spacing. See examples below.
4. Simplicity

In keeping with the idea of white space is simplicity. A cluttered page, with three or more font colors and sizes and images of different types and sizes placed sporadically throughout, creates a chaotic-looking virtual classroom. It’s far easier to study and focus on learning in a physical classroom that is organized with minimal distractions; the same goes for an online classroom. Keep it simple: two font colors, the same size and style throughout, and organized, consistent pages create a Zen-like classroom where students can focus on course content and the application of concepts. Learning is greatly enhanced.
“The way information is organized and presented to students affects not only the usability of information, but the usability of the course itself” (Young, 2014)
5. Use Tabs Effectively
Imagine opening a file drawer full of folders with inaccurate or missing labels. The same principle applies to web sites, except that rather than an alphabetized listing, the order should be one that makes sense to the student. For example, the ‘Start Here’ tab belongs at the top of the menu, not third or fourth down the list (which happens more often than you would think). Tab labels should be short (two words, three at most) and descriptive: ‘Start Here’, ‘Week One’ or ‘Student Support’, for example. Use sub-tabs if possible; if not, group tabs into categories (screenshot right). Also avoid capital letters for tab titles. CAPITAL LETTERS can appear loud and abrasive on a website (there are exceptions, as the screenshot above right demonstrates).
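The tab guidelines above are concrete enough to lint automatically. As a minimal sketch (the menu contents and warning wording are illustrative assumptions), a script could warn about long labels, all-caps labels, and a misplaced ‘Start Here’ tab:

```python
# Hypothetical sketch: lint a course menu against the tab guidelines above.
def lint_tabs(tabs):
    """Return human-readable warnings for a list of tab labels."""
    warnings = []
    # 'Start Here' should lead the menu
    if tabs and tabs[0].strip().lower() != "start here":
        warnings.append("'Start Here' should be the first tab")
    for label in tabs:
        # Labels longer than three words stop being scannable
        if len(label.split()) > 3:
            warnings.append(f"'{label}' is longer than three words")
        # All-caps labels read as loud and abrasive
        if label.isupper():
            warnings.append(f"'{label}' is in all caps")
    return warnings

menu = ["Syllabus", "START HERE", "Frequently Asked Questions About Grading"]
for warning in lint_tabs(menu):
    print(warning)
```

A clean menu such as `["Start Here", "Week One", "Student Support"]` would produce no warnings.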
Developing an online course is a multidimensional process. Usability is one dimension often neglected; understandably so given that most educators approach online course design with little expertise in web design. Yet a little goes a long way—by implementing just the basic web design principles, educators can create an intuitive learning path that gives students the boost they need to invest more time in learning, not searching.
This post examines four MOOCs I completed as a student and then debriefed from a course design perspective. I share insights into what worked and what didn’t, with the aim of helping educators create better online learning experiences.
I recently completed two MOOCs on the edX platform that are part of a mini-series on education policy. The courses are prime examples of how higher education institutions misuse the MOOC format by relying on traditional teaching methods that fall flat. I debrief the two MOOCs from a course design perspective and explain why they were subpar and uninspiring. I also describe two other MOOCs that provided exemplary learning experiences. Together, the two pairs of MOOCs provide instructive examples of contrasting course design approaches.
The second edX MOOC, “Saving Schools: History, Politics and Policy in U.S. Education: Teacher Policy”, wrapped up this week (December 4). Both MOOCs followed an identical course structure: recorded video lectures that relied on an interview format featuring one (sometimes two) faculty member(s), two assigned readings per week (from the same source), one discussion question each week, and a final exam. This format, which tries to mimic the in-class experience, is typical of xMOOCs.
[Screenshot: instructions for the final assignment, a digital artifact, in E-learning and Digital Cultures. My digital artifact created for the course assignment appears at the end of this post.]
The other two MOOCs used a non-traditional design approach. They took advantage of what the MOOC format could offer, acknowledging its uniqueness and providing content from a variety of sources outside the MOOC platform. They also utilized a range of assessment methods and included social media that encouraged interaction. Both MOOCs, Introduction to Sociology and E-learning and Digital Cultures (from Coursera), inspired and promoted thought. The learner was viewed as a contributor, not a recipient.
E-learning and Digital Cultures featured YouTube videos, not lecture videos, to demonstrate course concepts, along with articles, mostly from academic journals. The learning experience closely resembled a cMOOC (the original MOOC format developed by Downes and Siemens), one that leverages sources on the web, shares student blogs and views students as contributors. Introduction to Sociology featured two video formats. In one, Professor Duneier sat in an armchair (above), not lecturing but talking, sharing course-related experiences; he acknowledged learners (some by name) and encouraged student interactivity. The other was a live (and recorded) seminar discussion on Google’s Hangout platform, with Duneier leading eight students.
Course Design Shortcomings of the edX MOOCs
The purpose of the following discussion about the edX MOOCs is not to criticize the course designers or faculty, but to consider the MOOCs as learning opportunities. Doing so aligns with one of the goals of edX, to use the platform to advance teaching and learning.
Learning/instructional methods: The MOOCs relied mostly on traditional methods of instruction: lectures and multiple choice assessments. Content was instructor-centered, limited to lectures (featuring the faculty member), textbook readings (from a book written by the same faculty member), and articles from one source, Education Next, of which the same faculty member is editor-in-chief.
The edX MOOCs would benefit from the inclusion of open resources, with links to outside sources showing various perspectives, as well as social media platforms where students could engage live with content experts or with static content, share content sources, and share their own content creations (blog posts, etc.).
Learning was confined to a virtual, walled classroom inside the MOOC platform.
Course Objectives: No learning goals were outlined for the MOOCs, and there didn’t appear to be a focus for each week or guiding questions to provide structure. Granted, learners should create their own learning objectives when working within a MOOC, but a stated focus or general goals for the course helps learners establish and shape their own. E-learning and Digital Cultures provided a course overview that outlined the focus for each unit of study, and each week included focus questions to consider.
Rigor: Course rigor was low, which was disappointing given the institution behind the MOOC was Harvard. It’s worth noting that at edX’s launch in 2012, the Provost of MIT at the time, L. Rafael Reif, emphasized the rigor and quality of courses on edX’s platform: “(edX courses need) not to be considered MIT Lite or Harvard Lite. It’s the same content” (MIT News). Yet the discussion questions as outlined in my first post, the biased readings, the lectures and the application activities did not add up to a rigorous learning experience that encouraged critical thinking. Several factors may have contributed. Suffice it to say the course design team would have benefited from someone with a high level of expertise in effective course design principles, learning theories and instructional methods.
Content: As mentioned, the majority of the content was limited to the faculty member’s lectures, two or three chapters of a book authored by the same faculty member, and essays from one source.
Biased resources did not encourage learners to consider multiple perspectives, though in the second MOOC course facilitators made an effort to incorporate other perspectives in the discussion forums.
Lecture videos were long, typically 12 to 15 minutes. Research on MOOC videos suggests the ideal length is 4 to 6 minutes (Guo, 2013).
Repetitive content. Content from the readings was also included in the lectures, and frequently two interviews in the same lecture covered the same material.
Delivery methods were repetitive and uninspiring.
Content came across as telling, not interactive.
Application activities: There were few activities for learners to engage in beyond the discussion forums. Unfortunately, the questions in the first MOOC did not encourage robust discussion, though they improved in the second course. There were two or three multiple choice questions after each video; several could be considered common knowledge, and I could have answered the majority without watching the videos.
The two pairs of MOOCs illustrate how varied approaches to MOOC course design significantly impact engagement levels, perceptions and learning outcomes. The edX MOOCs examined here, typical of the majority of MOOCs, relied upon learning methods that failed to leverage the benefits of an open platform and failed to view students as knowledge sources and contributors. Over time the MOOC format will no doubt settle into something quite different from what we’re experiencing now: a format that finds its purpose, engages learners and builds bodies of knowledge that benefit all.