Facebook and Grades: A More Critical Perspective

[Image: A real Facebook]

Discussion about the possible relationship between college students’ use of Facebook and lower grades continued this week with the publication of a First Monday article addressing this topic.  The article follows the widespread publicity surrounding an AERA poster session that found a correlation between Facebook usage and lower grades. Unfortunately, I’m not sure that the research and related discussions have shed much more light on the topic.  But it sure has been exciting to watch how quickly it’s all happened!

The discussions have followed two general threads: (a) the AERA research was poorly done and (b) the media got the story wrong. I’ll address the first thread in detail below.  The second thread has been relatively short-lived as there isn’t any real disagreement that many reporters and editors leaped (without looking, thinking, or corroborating) from “there appears to be a link between Facebook usage and low grades in this small sample of this very limited study” to “Facebook causes bad grades!!!”  That’s irresponsible, and everyone agrees on that point.  There is also a third thread that amounts to “I can’t believe this is true but I don’t have any evidence!” but it’s not worth wasting any time on those ill-informed opinions.

In general, most of the current research into Facebook usage seems to lack sophistication (and much of it lacks rigor; how many of the articles based on surveys discuss or even hint at validity or reliability?). The researchers behind the poster session and this First Monday article both rightly acknowledge that they are discussing correlation, but there is a whole lot going on that they don’t acknowledge or can’t account for with their selected (or mandated) methodologies and data.  In trying to understand college students, we go to great lengths at the shop where I work to isolate and separate the influence of different variables, and we struggle with this mightily.  In many instances, we have to employ relatively sophisticated analyses such as multilevel modeling to adequately control for different variables, particularly institution-level and student-level variables.  In fact, I don’t recall seeing any mention of institution-level influences in any of the currently available research even beyond this poster session and article (of course, one can’t do anything about this if one’s sample is drawn from only a handful of institutions, another significant limitation of nearly all Facebook research). I acknowledge that institution-level influences account for only a small proportion of the variance in most of the things we measure, but omitting measurement and discussion of institutional characteristics altogether seems to indicate a lack of theoretical and methodological sophistication. To put it bluntly, this is the kind of thing that many non-higher education researchers often miss as it simply isn’t their area of expertise, and it’s why higher ed scholars desperately need to be actively contributing to this conversation.
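For readers unfamiliar with what I mean by multilevel modeling, here is a minimal sketch of separating institution-level variance from student-level variance. This uses entirely simulated data and hypothetical variable names (gpa, facebook_hours, hs_gpa, institution); it is not the analysis from the poster session or the First Monday article, just an illustration of the technique:

```python
# A minimal, hypothetical sketch of a two-level model (students nested within
# institutions). All data below are simulated and all variable names are
# illustrative; no claim is made about the actual studies discussed above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate 20 institutions with 100 students each, plus a random
# institution-level effect on GPA.
n_inst, n_per = 20, 100
inst_effect = rng.normal(0, 0.15, n_inst)
rows = []
for i in range(n_inst):
    hs_gpa = rng.normal(3.2, 0.4, n_per)
    facebook_hours = rng.gamma(2.0, 1.5, n_per)
    gpa = 0.6 * hs_gpa - 0.02 * facebook_hours + inst_effect[i] + rng.normal(0, 0.3, n_per)
    rows.append(pd.DataFrame({
        "institution": i,
        "hs_gpa": hs_gpa,
        "facebook_hours": facebook_hours,
        "gpa": gpa,
    }))
df = pd.concat(rows, ignore_index=True)

# Random-intercept model: student-level predictors plus a grouping term that
# absorbs institution-level differences.
model = smf.mixedlm("gpa ~ facebook_hours + hs_gpa", data=df, groups=df["institution"])
result = model.fit()
print(result.summary())  # the "Group Var" row is the institution-level variance
```

The point is simply that without something like that grouping term, whatever differences exist between institutions get folded into the student-level coefficients, which is exactly the kind of omission I’m describing.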

What most people want to see is not correlation but causation.  In other words, we want to be able to say that (the use of) Facebook causes lower grades.  That’s a damn hard claim to make.  Even under the best circumstances, establishing causation is fiendishly difficult, and it would require sophisticated measures and analyses. Given the previously mentioned lack of sophistication in most of these studies, I don’t know that these researchers collected the right kinds of data to even begin the work necessary to establish causation.  Frankly, I think it’s so complicated, and the analysis would be so fragile and fraught with assumptions and caveats, that it’s a fool’s errand.
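To show why the leap from correlation to causation is so treacherous, here is a toy simulation. Everything in it is made up: a single unmeasured confounder (here labeled study_time purely for illustration) drives both Facebook use and grades, producing a negative correlation even though Facebook has zero causal effect in the simulation:

```python
# Toy illustration (entirely simulated, not data from any study): a lurking
# confounder can produce a Facebook-grades correlation with no causal link.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical unmeasured confounder, e.g. time spent on coursework.
study_time = rng.normal(0, 1, n)

# Both variables depend on the confounder; Facebook has NO direct effect on GPA.
facebook_hours = -0.5 * study_time + rng.normal(0, 1, n)
gpa = 0.7 * study_time + rng.normal(0, 1, n)

# The raw correlation looks like "Facebook hurts grades"...
print(np.corrcoef(facebook_hours, gpa)[0, 1])   # noticeably negative

# ...but once we condition on the confounder (partial correlation via
# residuals), the association essentially disappears.
resid_fb = facebook_hours - np.polyval(np.polyfit(study_time, facebook_hours, 1), study_time)
resid_gpa = gpa - np.polyval(np.polyfit(study_time, gpa, 1), study_time)
print(np.corrcoef(resid_fb, resid_gpa)[0, 1])   # approximately zero
```

In a real study, of course, you rarely know what the confounders are, let alone have clean measures of them, which is precisely why I call this a fool’s errand with the data these researchers have.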

Let me illustrate this with an example drawn from the work done by folks with whom I work.  We know, from several years of repeated data collection and analysis by different researchers, that more frequent use of technology is strongly associated with higher levels of student engagement.* But even with all of the data we have collected, the rigor of our data collection methods, and the sophistication of our analyses, we haven’t yet figured out what exactly causes these measures to be correlated.  In other words, although we know that students who frequently use technology do better in many different ways we don’t know why that happens.  There are many different possibilities but even after 10 years of poking at this we don’t have any explanations upon which we can hang our hat and say, “That’s it – that’s why!”

It’s interesting and instructive to read not only the First Monday article but also the response from the AERA poster session author and the reply from the FM authors.  I am hopeful that we will see more sophisticated and better-planned research, and I am even more hopeful that this will occur if those who are most knowledgeable about college students and American higher education continue working on and contributing to this discussion.

* In the context of this discussion I must emphasize that although we do ask students about their grades, our focus is almost always much wider than that one measure; in fact, we see broadening discussions of educational quality beyond simple measures such as grades or rankings as one of our primary missions.  I should also add that we typically don’t ask specifically about SNS use in any of our surveys.  We do have a set of experimental questions out right now that asks about this, but if I recall correctly the question is limited to communication about academic issues, as we’re exploring how students and faculty communicate and collaborate.  Our colleagues at UCLA have explored this general issue, however, and it’s worth looking at their work if you haven’t already done so.