Limitations and Lost Nuance: Twitter Does Not Improve Grades

I’ve watched with interest over the last several months as media outlets and individuals have discussed, blogged, and tweeted a study conducted by Junco, Heiberger, and Loken. Their study reported that a group of students who used Twitter as part of a class earned higher grades than classmates in sections of the class that did not use Twitter. It’s a nice study that is clearly described and methodologically sound. Like all studies, it has significant limitations, and although they are concisely and honestly discussed in the study, those limitations have been ignored by too many people who have made the study into something it’s not.

The study concluded that “Twitter can be used to engage students in ways that are important for their academic and psychosocial development” (p. 10). But is that what has been reported and discussed by others? No, of course not; if it were then I wouldn’t be writing this sanctimonious blog post! Mashable, a very widely-read and influential technology blog, reported on the study using the headline “Twitter Increases Student Engagement [STUDY].” A recently-created infographic proclaims that “Students in classes that use Twitter to increase engagement have been found to average .5 grade points higher than those in normal classes.” Another infographic proclaims that “[Students get] grades up half a gradepoint in classes that use Twitter.”

I get that pithy headlines and concise summaries are necessary to grab attention. But by overlooking or ignoring the details of this study, those headlines and summaries get this all wrong. Let’s return to the original study to understand why.

In the study, the researchers assigned some sections of a class to use Twitter. While the entire class used Ning, these sections also used Twitter to complete some assignments. They also received guidance and encouragement to use Twitter to communicate not only with one another but also with instructors. At the end of the semester, these students had earned higher grades than their non-Twittering classmates.

If I understand the study’s methodology (Rey, please correct me if I got anything wrong!), it seems that this study does not show that “Twitter improves grades.” It shows us that students who do more work and spend more time concentrating on class materials can earn higher grades. It shows us that students who have additional opportunities to communicate and collaborate with one another can earn higher grades. It also shows us that students who have greater access to instructors can earn higher grades. It shows us that Twitter can be a viable medium for students to communicate and coordinate with one another and instructors. And, yes, it shows that Twitter can be an effective educational tool when skillfully incorporated into a class with appropriate support and structure. In a critique of one of the infographics, Junco specifically mentions this: “Yes, that’s our study about Twitter and grades. Unfortunately, what’s missing is that we used Twitter in specific, educationally-relevant ways—in other words, examining what students are doing on the platform is more important than a binary user/nonuser variable.”

This illustrates the challenge with testing the efficacy of educational tools and techniques: It’s really, really hard to isolate just the impact of the tool or technique. To test the tool or technique, you almost always have to make other changes, and it’s usually impossible to tell whether those changes affected the results of your study more than the tool or technique you intended to study. It’s a limitation of nearly every study focusing on the effect of particular media on education and it may be an inherent limitation for this kind of work. (Richard Clark has been pointing this out for decades; look into his writings for more detailed discussions. He’s also been wonderful in creating dialogue with his detractors, so there are well-documented and substantive discussions between many different scholars with different opinions.)

Hence my frustration with how this study has been summarized and passed around: By ignoring the limitations and nuance of this study, these summaries miss the boat and draw a grandiose conclusion that the authors of the study never attempt to draw themselves. That’s a shame because this is a nice study that is interesting and informative. But like most research, it’s a small step forward and not a giant, earthshaking leap. Summarizing this study by proclaiming that Twitter is a magic ingredient that can be added to classes to increase grades is irresponsible and misleading.

Update 1: Thanks for the clarification about Ning, Liz!

Update 2: Another example of how headlines can distort or misrepresent research has just popped up. Before correcting the headline, Colorlines reported that the majority of college students are part-time students (full headline before being corrected: “Study: Majority of College Students are Part-Timers, Less Likely to Graduate”). But the actual report doesn’t say that. Instead, it says that “4 of every 10 public college students are able to attend only part-time” (p. 2). It’s a shame that the research was initially being reported incorrectly because the changing demographics of college students are incredibly important and very misunderstood and overlooked. I know there is a lot of nuance in discussions of demographics – race, ethnicity, SES status, privilege, etc. – but if we cover up or ignore the details then we haven’t made any progress.

To their credit, Colorlines corrected their headline once I pointed this out to them. They made a mistake in their initial headline and it’s great that they’re willing to correct their public mistake!


Comments

6 responses to “Limitations and Lost Nuance: Twitter Does Not Improve Grades”

  1. Liz Gross

    The Twitter group wasn’t given “additional assignments.” Those assignments were given to the control group through a Ning network. What was tweeted by instructors was replicated on Ning. The Twitter group had a higher score on the engagement scale, and higher grades.

    That being said, your observation that “It’s really, really hard to isolate just the impact of the tool or technique” is spot-on. It likely can’t be done in a single, one-time study and would take a significant amount of time and money to design and research. Still, here’s hoping it can be done at some point.

  2. Kevin R. Guidry

    Thank you very much for the correction! I’m happy to hear it as it makes the study stronger. I’ll correct/update my post accordingly.

    That said, I’m very skeptical that we’ll ever be able to do perfect controlled experiments establishing that particular media or tools are effective across large domains of content, populations, etc. But I don’t think that should stop us from trying! It’s also important to know that tools or media are effective in particular ways and under particular conditions.

    I also have a niggling suspicion that Clark’s argument that “media doesn’t matter” may fall apart when we consider tools and media that enable us to do truly new and innovative things instead of merely replicating old things. I don’t think we’ve found many of those truly new things yet, and we haven’t done well using them, but I remain hopeful that we’ll find tools or media that are not only effective but so different that comparing them to old ones is impossible.

    For those reasons and more, we have to continue this work even if it may never provide the magic-bullet solutions that everyone seems to expect.

  3. Rey Junco

    Good post, Kevin. Liz is right: we made sure that both groups had the same information and same assignments. I am familiar with Clark’s “media doesn’t matter” argument; however, and as you’ve suggested, there is clearly some proportion of the variance in outcomes that has to be attributable to the technology. For instance, some of what we did on Twitter can’t be done via email. As researchers, it’s important to continue to try to parse out what proportion of the variance is attributable to the technology and what proportion is attributable to how it is used. As my latest research suggests, how a technology is used seems to explain much more of the variance than overall time spent on the technology.

  4. Gary Honickel

    I agree with the frustration over how articles are summarized and taken as gold. As a new professional and recent Master’s graduate, I often find that people stick to one theory and that’s all they use. I think your post really helps call out the elephant in the room, at least from my perspective. I often want to take the first positive result of social media reports and show everyone because I feel I need to justify what I am doing. This reminded me to step back, take a deep breath, and take each piece of research as it is and think of it more as a step in the right direction.

    Overall, I appreciated the post, Kevin, and what I like a lot about it is that you update as you go along to ensure you also have the most accurate information.

  5. Kevin R. Guidry

    Thanks Rey. I agree that there are many factors that are probably more important than the specific technology being used, including how it’s being used and how well it’s being integrated into the curriculum. That seems like a really obvious observation but it’s amazing how often people seem to think that technology is like a magic spice that you can just sprinkle onto a course or into a school and – MAGIC! – things become better.

  6. Kevin R. Guidry

    Thanks Gary. I think it’s important to be as transparent and honest as possible and making sure my posts are accurate seems to be a part of that for me.
