Data Analysis MOOC Week 2: Muddling Through Frustration

I have watched the online videos and successfully completed the quiz for week 2 of the data analysis MOOC in which I am enrolled. I struggled quite a bit with some of the R syntax, and that made the quiz a very frustrating experience. I have two observations to share about the format of the course based on what I learned this week.

First, I am disappointed that so far the only opportunity for students to practice what is being taught and receive feedback is the weekly quiz.  I was able to muddle through things enough to get answers that matched the response options for this week’s multiple-choice quiz, but despite answering all of the questions correctly I’m still very unsure of much of the content – I just know that I somehow ended up with answers that matched some of the ones included in the quiz.  Some of this is simply due to my lack of experience with R and its steep learning curve.  But much of it is due to the fact that the multiple-choice quiz was the only opportunity to practice with any semblance of feedback, and that feedback was restricted to an anemic “correct” or “incorrect” for each question.

Yes, I can practice some of the skills taught in this class on my own.  This is certainly the case if I want to focus solely on learning how to use R – syntax, configuration, functionality, etc. – as the language provides immediate feedback in the form of error messages or output.  But if that is the focus, and if that’s sufficient to learn the skills, then why do we need an organized course instead of just a course packet or a list of recommended self-guided topics and exercises?
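To illustrate what I mean by R’s built-in feedback – a trivial sketch of my own, not an example from the course – even a short session immediately tells you that something went wrong, though not why you were wrong:

    mean(c(1, 2, NA))                # returns NA – output, but easy to misread
    mean(c(1, 2, NA), na.rm = TRUE)  # returns 1.5 once you know about na.rm
    maen(c(1, 2, 3))                 # Error: could not find function "maen"

That kind of feedback tells me that I made a mistake, but it does nothing to diagnose the misunderstanding behind it.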

What distinguishes an organized, well-taught class from a self-taught topic is that a class has an expert who not only makes their thinking explicit but also offers targeted feedback for students as they practice the skills they are learning.  It’s conceivable that some skills could be taught using sophisticated, automated tools if we have a deep enough understanding of how people typically learn those skills that we can programmatically recognize mistakes and misunderstandings and provide appropriate, specific feedback.  Sometimes this can be done to a (very) limited degree with appropriately designed multiple-choice instruments where the incorrect responses are diagnostic, i.e., wrong answers aren’t merely incorrect but are designed to identify particular kinds of mistakes or misunderstandings.  That seems to be the case for some of the questions and answers in this MOOC, but we’re not provided with any of the related feedback to help us understand what common mistake we may have made, how we might be misunderstanding the issue, and how we can work to correct our thinking.

Second, the size of the course requires innovative ways to provide support for students, and this course seems to rely heavily on the course discussion board.  This is an observation, not a criticism. I’m quite comfortable using that medium, as I’ve been using online discussion boards since the early 1990s, when they were one of the primary draws for dial-up bulletin board systems (with the other major draw being online “door” games).  I don’t know how well this works for other students, however, as I don’t want to make assumptions about their experiences, skills, and cultures.  It’s probably not a big deal; my concern here is very minor and more of a curiosity about how other students experience and use (or don’t use) the discussion board. (In other situations I would be concerned about those who have poor or no Internet access or those who have little comfort and experience with the Internet, but it’s reasonable to expect students who enroll in an online course to have sufficient Internet access and skills. I’m not suggesting that everyone has the access and skills to enroll in an online course, merely that those who are already enrolled in one presumably have them.)