I’ve now finished the Coursera Human Computer Interaction Course. As I await my final grade, I reflect on my experience, on the statistics on student numbers, and on how the platform can develop.
It’s been a wonderfully wide-ranging course, spanning the whole design process. Participants have learned needfinding and observation techniques, how to carry out rapid prototyping, principles for effective interface and visual design, and a repertoire of strategies for evaluating interfaces. The scope has been a real delight and Scott Klemmer’s lectures have been brilliant.
Today Scott Klemmer, in his concluding message to class participants, shared some statistics on how people have been engaging with the course:
- 29,105 students watched video(s)
- 6,853 submitted quiz(zes)
- 2,470 completed an assignment
- 791 completed all 5 assignments
Assuming that anyone who submitted a quiz or assignment also watched videos, of those who watched video(s):

- 23.55% submitted one or more quizzes
- 8.49% submitted one or more assignments
- 2.72% completed all 5 assignments (and presumably all the quizzes as well, as the final course grade for people doing assignments was also determined in part by their quiz score)
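The percentages above follow directly from Scott’s figures – a couple of lines of Python will reproduce them (rounded to two decimal places):

```python
# Engagement figures quoted above, assuming quiz/assignment
# submitters are a subset of the people who watched videos.
watched = 29105
quizzed = 6853
assignment = 2470
all_five = 791

for label, n in [("submitted one or more quizzes", quizzed),
                 ("submitted one or more assignments", assignment),
                 ("completed all 5 assignments", all_five)]:
    print(f"{100 * n / watched:.2f}% {label}")
```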
The course offered two different levels of accomplishment – you could either watch the videos and do the weekly review quiz (short multiple choice exercises), or you could do this and work on a design project. The former was the ‘basic’ track; the latter was the ‘studio’ track. I chose the latter, as I wanted to learn as much as I could. We now know that 2.72% of those who watched one or more lectures took the time-intensive design assignments through to completion.
I admire the way that the course tried to accommodate people who could only commit to following the lectures, whilst allowing those with the time and inclination to get stuck into a full project.
The quizzes were quite easy and quick to complete. I would be interested to find out how many people watched all the videos and submitted all the quizzes – in other words, how many people followed the ‘basic’ track through to completion. Was the provision of this less time-hungry option a success? What was the attrition rate over the course of the class for people on the ‘basic’ track?
Whilst Scott’s videos were excellent, the assignments were the highlight of the course. The assignments allowed participants to put into practice what was discussed in the lectures and tested in the quizzes each week. They were very time consuming (I spent in the region of 15-20 hours a week on mine) but absolutely worthwhile. I liked the way that each of us worked on a project for the duration of the course, going through the entire design process and coming out with a completed prototype.
Online peer assessment was essential in making the assignments work – it would have been impractical to grade them in any other way, given the scale of the course (not to mention the fact that it was not paid for by participants). And being involved in peer assessment enhanced the learning experience. (The pedagogy section of the Coursera website outlines the literature on peer assessment.) As I noted in a comment on a blog on peer grading in online classes, it was highly educative to see other people’s assignments, as this has been such a creative course.
The assessment rubrics were mostly very clear, and they were improved a few weeks into the course when ‘in-between’ marks were introduced. This allowed markers to accommodate people who had exceeded one standard but had not yet reached the next level of accomplishment. For example, the original scale:

- No performance – 0
- Poor performance – 1
- Basic performance – 2
- Good performance – 3

became:

- No performance – 0
- Poor performance – 1
- Basic performance – 3
- Good performance – 5

with the gaps at 2 and 4 left free for in-between marks.
There were many other tweaks to the platform over the course, as we all got to grips with this experiment.
This course was a first foray into mass online peer assessment. Whilst other online courses have used computer grading to deliver scalable assessment, this wasn’t suitable here. Computer science and mathematics lend themselves to more mechanistic grading processes. One can quite easily devise a method to test whether a given program processes input correctly. But assessing design requires a more holistic, subjective and qualitative evaluation method. So the course drew upon the grading rubric used in Scott’s Stanford class and utilised peer evaluation. Scott confesses that “we had no idea how this would work online.”
The peer grading process was certainly not anarchic. Before being let loose on your peers, you had to evaluate some control pieces of work. You’d then see how close you were to Scott Klemmer’s own assessment of that work. Once you got good enough (I think you were deemed to be ready when you’d performed 3 good evaluations) you were set to grade 5 of your peers. This trial grading system felt quite effective.
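As a rough illustration of how that calibration gate might work – the ‘3 good evaluations’ threshold is from the course as I remember it, but the tolerance and data shapes are entirely my own assumptions – here is a sketch:

```python
# Hypothetical sketch of the peer-grading calibration gate: a grader
# practises on control submissions with known staff grades, and is
# cleared to grade peers once enough of their evaluations land close
# to the staff grade. Tolerance and threshold are assumptions.
def is_calibrated(student_grades, staff_grades,
                  tolerance=1, required_good=3):
    good = sum(1 for s, t in zip(student_grades, staff_grades)
               if abs(s - t) <= tolerance)
    return good >= required_good

# A grader whose marks track the staff marks closely passes the gate:
print(is_calibrated([3, 2, 5], [3, 3, 5]))
```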
By grading other people’s work before you went back to grade your own, you were encouraged to take a humbler, and hopefully more objective, view. But Scott did note that “there was a real variation in the effort, standards and interpretation of the rubric.” I wonder if the quality of a user’s evaluations could be monitored by comparing them to the mean grade given to the same piece of work by the other evaluators. In reflecting on the assessment process, Scott observed that we need to figure out how to give people richer qualitative feedback. I think that these issues could be addressed hand-in-hand with measures to improve the pacing of the course (and reduce attrition and dropouts) and the discussion element of the course.
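To sketch that monitoring idea – entirely hypothetical, with data shapes and threshold of my own invention – one could measure how far each grader’s scores sit from the mean score the other evaluators gave the same submission:

```python
# Hypothetical sketch: flag graders whose scores consistently deviate
# from the mean grade the other evaluators gave the same submission.
from statistics import mean

def grader_deviation(grader_scores, peer_scores):
    """Mean absolute deviation of one grader's scores from the mean of
    the other evaluators' scores, taken submission by submission."""
    return mean(abs(g - mean(others))
                for g, others in zip(grader_scores, peer_scores))

# One grader's marks on three submissions, alongside the marks the
# other evaluators gave each of those submissions:
dev = grader_deviation([5, 1, 4], [[5, 4, 5], [1, 2, 1], [4, 4, 3]])
# A high value would flag an outlier grader for review.
```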
I do think that the weekly workload could be improved. For those doing the studio track, the workload was quite punishing – particularly for those with work and/or family commitments. I spent around 20-25 hours a week on the course.
In any given week in the HCI course, learners had to carry out peer assessment and work on their own assignment (in addition to viewing the lectures and doing the week’s quiz). This meant that assessment and assignment competed with each other for attention. And there was no real formal incentive to lavish attention on the peer assessment. In his concluding remarks, Scott Klemmer stated that in future there would be more time between assignments.
I would also want to see a way of rewarding good evaluators. Good feedback could be incentivised. Perhaps each participant could state which feedback they found most helpful after each round of assessment, and the person who provided it could receive a little extra credit.
I think the course experience could be improved if there were alternating weeks of creation and assessment. This could encourage deeper reflection – and could be used to drive reflective peer discussion. This would also address the criticism that Coursera’s Massive Open Online Courses (MOOCs) have not been sufficiently discursive, and that they are still more like one-way lectures. The creation, sharing and discussion of project work in this course has, of course, already undermined this criticism.
Scott was impressed by the way that the community drove much of the learning, with people sharing interesting interfaces, articles and other resources. (For example, some participants collated a reading list, drawing upon all the resources mentioned in the lectures; many set up study groups; others performed extra peer assessments voluntarily, and many more articulated or answered people’s questions and concerns in the forums.)
Doing more to foster the communal learning aspect of the course in a more targeted and deliberate fashion would further enhance the experience.
The clarity of assignments is ripe for improvement. Assignment requirements were not always initially clear, and the overall development of the course and destination of the design project was not clear at the start. (Maybe this helped avoid putting some people off by concealing the workload!) Over the course, these details have been hammered out, ensuring that next time through there will be more clarity in the assignment wording, with explanatory examples, and a clear roadmap of project work and deadlines.
Whilst sometimes the confusion was frustrating, it never dampened my excitement at being in the first cohort to try out this course. It felt like everyone involved in the process was learning, including the teachers, so I didn’t mind the rough edges (particularly as the course was so good, and completely free). That was pretty cool.
This has been a fantastic course, and I’m still in awe of the fact that it was available for free. I’ll finish by mirroring Scott’s observation: “seeing the online education space really blossom gives me a lot of hope for the future.” I’m excited by the online educational space that has been emerging in force over the last year, and am already plotting my next courses. I’m half-way through Power Searching with Google, and have signed up for Udacity’s CS101.
I’d like to extend my sincerest thanks to Scott Klemmer and the team at Stanford and Coursera who made this course happen. I’ll do my best to use what I’ve learned, to continue improving, and to help others do the same.