I’ve watched with interest over the last several months as media outlets and individuals have discussed, blogged, and tweeted a study conducted by Junco, Heiberger, and Loken. Their study reported that a group of students who used Twitter as part of a class earned higher grades than classmates in sections of the class that did not use Twitter. It’s a nice study that is clearly described and methodologically sound. Like all studies, it has significant limitations; they are concisely and honestly discussed in the study itself, but they have been ignored by too many people who have made the study into something it’s not.
The study concluded that “Twitter can be used to engage students in ways that are important for their academic and psychosocial development” (p. 10). But is that what has been reported and discussed by others? No, of course not; if it were then I wouldn’t be writing this sanctimonious blog post! Mashable, a very widely-read and influential technology blog, reported on the study using the headline “Twitter Increases Student Engagement [STUDY].” A recently-created infographic proclaims that “Students in classes that use Twitter to increase engagement have been found to average .5 grade points higher than those in normal classes.” Another infographic proclaims that “[Students get] grades up half a gradepoint in classes that use Twitter.”
I get that pithy headlines and concise summaries are necessary to grab attention. But by overlooking or ignoring the details of this study, those headlines and summaries get this all wrong. Let’s return to the original study to understand why.
In the study, the researchers assigned some sections of a class to use Twitter. While the entire class used Ning, these sections also used Twitter to complete some additional assignments. They also received guidance and encouragement to use Twitter to communicate not only with one another but also with instructors. At the end of the semester, these students had earned higher grades than their non-Twittering classmates.
If I understand the study’s methodology (Rey, please correct me if I got anything wrong!), it seems that this study does not show that “Twitter improves grades.” It shows us that students who do more work and spend more time concentrating on class materials can earn higher grades. It shows us that students who have additional opportunities to communicate and collaborate with one another can earn higher grades. It also shows us that students who have greater access to instructors can earn higher grades. It shows us that Twitter can be a viable medium for students to communicate and coordinate with one another and instructors. And, yes, it shows that Twitter can be an effective educational tool when skillfully incorporated into a class with appropriate support and structure. In a critique of one of the infographics, Junco specifically mentions this: “Yes, that’s our study about Twitter and grades. Unfortunately, what’s missing is that we used Twitter in specific, educationally-relevant ways—in other words, examining what students are doing on the platform is more important than a binary user/nonuser variable.”
This illustrates the challenge with testing the efficacy of educational tools and techniques: It’s really, really hard to isolate just the impact of the tool or technique. To test a tool or technique, you almost always have to make other changes, and it’s usually impossible to tell whether those changes affected the results of your study more than the tool or technique you intended to study did. It’s a limitation of nearly every study focusing on the effect of particular media on education, and it may be an inherent limitation for this kind of work. (Richard Clark has been pointing this out for decades; look into his writings for more detailed discussions. He’s also been wonderful in creating dialog with his detractors, so there are well-documented and substantive discussions between many different scholars with different opinions.)
Hence my frustration with how this study has been summarized and passed around: By ignoring the limitations and nuance of this study, these summaries miss the boat and draw a grandiose conclusion that the authors of the study never attempt to draw themselves. That’s a shame because this is a nice study that is interesting and informative. But like most research, it’s a small step forward and not a giant, earthshaking leap. Summarizing this study by proclaiming that Twitter is a magic ingredient that can be added to classes to increase grades is irresponsible and misleading.
Update 1: Thanks for the clarification about Ning, Liz!
Update 2: Another example of how headlines can distort or misrepresent research has just popped up. Before correcting the headline, Colorlines reported that the majority of college students are part-time students (full headline before being corrected: “Study: Majority of College Students are Part-Timers, Less Likely to Graduate”). But the actual report doesn’t say that. Instead, it says that “4 of every 10 public college students are able to attend only part-time” (p. 2). It’s a shame that the research was initially being reported incorrectly because the changing demographics of college students are incredibly important and very misunderstood and overlooked. I know there is a lot of nuance in discussions of demographics – race, ethnicity, socioeconomic status, privilege, etc. – but if we cover up or ignore the details then we haven’t made any progress.
To their credit, Colorlines corrected their headline once I pointed this out to them. They made a mistake in their initial headline and it’s great that they’re willing to correct their public mistake!