Limitations and Lost Nuance: Twitter Does Not Improve Grades

I’ve watched with interest over the last several months as media outlets and individuals have discussed, blogged, and tweeted a study conducted by Junco, Heiberger, and Loken. Their study reported that a group of students who used Twitter as part of a class earned higher grades than classmates in sections of the class that did not use Twitter. It’s a nice study that is clearly described and methodologically sound. Like all studies, it has significant limitations; they are concisely and honestly discussed in the study, but they have been ignored by too many people who have made the study into something it’s not.

The study concluded that “Twitter can be used to engage students in ways that are important for their academic and psychosocial development” (p. 10). But is that what has been reported and discussed by others? No, of course not; if it were then I wouldn’t be writing this sanctimonious blog post! Mashable, a very widely read and influential technology blog, reported on the study using the headline “Twitter Increases Student Engagement [STUDY].” A recently created infographic proclaims that “Students in classes that use Twitter to increase engagement have been found to average .5 grade points higher than those in normal classes.” Another infographic proclaims that “[Students get] grades up half a gradepoint in classes that use Twitter.”

I get that pithy headlines and concise summaries are necessary to grab attention. But by overlooking or ignoring the details of this study, those headlines and summaries get this all wrong. Let’s return to the original study to understand why.

In the study, the researchers assigned some sections of a class to use Twitter. While the entire class used Ning, these sections also used Twitter to complete additional assignments. They also received guidance and encouragement to use Twitter to communicate not only with one another but also with instructors. At the end of the semester, these students had earned higher grades than their non-Twittering classmates.

If I understand the study’s methodology (Rey, please correct me if I got anything wrong!), it seems that this study does not show that “Twitter improves grades.” It shows us that students who do more work and spend more time concentrating on class materials can earn higher grades. It shows us that students who have additional opportunities to communicate and collaborate with one another can earn higher grades. It also shows us that students who have greater access to instructors can earn higher grades. It shows us that Twitter can be a viable medium for students to communicate and coordinate with one another and instructors. And, yes, it shows that Twitter can be an effective educational tool when skillfully incorporated into a class with appropriate support and structure. In a critique of one of the infographics, Junco specifically mentions this: “Yes, that’s our study about Twitter and grades. Unfortunately, what’s missing is that we used Twitter in specific, educationally-relevant ways—in other words, examining what students are doing on the platform is more important than a binary user/nonuser variable.”

This illustrates the challenge with testing the efficacy of educational tools and techniques: It’s really, really hard to isolate the impact of just the tool or technique. To test the tool or technique, you almost always have to make other changes, and it’s usually impossible to tell whether those changes affected the results of your study more than the tool or technique you intended to study. It’s a limitation of nearly every study focusing on the effect of particular media on education and it may be an inherent limitation for this kind of work. (Richard Clark has been pointing this out for decades; look into his writings for more detailed discussions. He’s also been wonderful in creating dialogue with his detractors, so there are well-documented and substantive discussions between many different scholars with different opinions.)

Hence my frustration with how this study has been summarized and passed around: By ignoring the limitations and nuance of this study, these summaries miss the boat and draw a grandiose conclusion that the authors of the study never attempt to draw themselves. That’s a shame because this is a nice study that is interesting and informative. But like most research, it’s a small step forward and not a giant, earthshaking leap. Summarizing this study by proclaiming that Twitter is a magic ingredient that can be added to classes to increase grades is irresponsible and misleading.

Update 1: Thanks for the clarification about Ning, Liz!

Update 2: Another example of how headlines can distort or misrepresent research has just popped up. Before correcting the headline, Colorlines reported that the majority of college students are part-time students (full headline before being corrected: “Study: Majority of College Students are Part-Timers, Less Likely to Graduate”). But the actual report doesn’t say that. Instead, it says that “4 of every 10 public college students are able to attend only part-time” (p. 2). It’s a shame that the research was initially reported incorrectly because the changing demographics of college students are incredibly important and widely misunderstood and overlooked. I know there is a lot of nuance in discussions of demographics – race, ethnicity, socioeconomic status, privilege, etc. – but if we cover up or ignore the details then we haven’t made any progress.

To their credit, Colorlines corrected their headline once I pointed this out to them. They made a mistake in their initial headline and it’s great that they’re willing to correct their public mistake!

Quick Update: NSSE/EDUCAUSE Partnership

(I’m working on a longer post but I keep getting interrupted by life so this short post will have to do for now.)

I’m super excited that I’m going to the 2011 EDUCAUSE Annual Conference next month in Philadelphia to work with EDUCAUSE staff and members to develop potential questions for the next version of NSSE! I’ve always been a huge fan of EDUCAUSE and the work they do, so I’m very hopeful that this collaboration will be fruitful and help us figure out the right kinds of questions to ask about technology. Over the past four years I’ve been involved in several efforts to address technology in NSSE, and it’s very difficult, so I’m really excited that we’ll be able to tap into the experience and expertise of technologists.

I’m also a bit trepidatious about this collaboration. It’s young and in many ways undefined. I am hopeful that it bears fruit but it may fizzle out or even backfire since there is so much ground we have yet to cover and these are two large, complex organizations. Like many such efforts, it also feels like it is very dependent on a small number of people. While we’re all very talented and dedicated, we’re also incredibly busy and it may turn out that our interests are incompatible.

I’m also very thankful that this collaboration has even made it this far. It’s very gratifying that my colleagues are still willing to take risks on public ventures like this even as we continue to experience sharp public criticism. It’s even more incredible to know that my supervisors have been supportive of this effort even though it has largely been championed by one graduate student. Of course, I haven’t done this or anything here by myself; I’ve had wonderful support from many people in nearly everything I’ve done here, especially from my current supervisor Allison BrckaLorenz, who has been an enthusiastic supporter and wonderfully capable advisor from day one. Despite all of her other important responsibilities, Allison is neck deep in this EDUCAUSE/technology-thing with me and I’m so happy that she is involved!

So even though I’m a little fearful that this particular effort could fizzle out or even publicly blow up (which seems extraordinarily unlikely but I’m always a bit paranoid), I go into this knowing I’m not alone and I’m working with and for people as supportive as they are brilliant. I really want this collaboration between two of my favorite organizations to work. If this all works out well – and it will be a couple of years before we really know – it could be very powerful in helping U.S. higher education better understand and use technology to teach and communicate with undergraduates. I know that’s a very lofty aspiration but these two organizations are more than capable of fulfilling it.

More #sachat analysis: One Illuminating Figure

Laura Pasquini and I are working on analyzing #sachat data, a follow-up to work I’ve done previously but did not formally publish. Part of our work involves looking at a few other student affairs-related hashtags to help us understand #sachat in context. This figure shows the number of Twitter messages posted with particular hashtags – #highered, #sachat, #sadoc, #sagrad, #sajobs, and #studentaffairs – during the week of June 27, 2011. The #sachat session really stands out here both in the number of messages posted and in how it interrupts an otherwise regular daily and weekly pattern. This isn’t a profound discovery but it’s an easy way of illustrating that #sachat sessions are distinctive and prominent uses of Twitter among some users.
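
(For the curious, the counting behind a figure like this is simple: bucket each message by hashtag and hour. Here’s a minimal sketch in Python; the file name and column names are made up for illustration, and this isn’t our actual pipeline.)

```python
# Minimal sketch: hourly message counts per hashtag from an archived tweet file.
# "tweets.csv" and its "created_at"/"hashtag" columns are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])

# One row per hour, one column per hashtag, cells = message counts.
hourly = (tweets
          .groupby([pd.Grouper(key="created_at", freq="H"), "hashtag"])
          .size()
          .unstack(fill_value=0))

hourly.plot(figsize=(10, 4))
plt.ylabel("Messages per hour")
plt.show()
```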

Visualizing #sachat Data (First Draft)

It took me much longer than I had hoped but today I finally finished a first draft of a visualization of some of the #sachat data I’ve been working with:

The method of analysis used in this video is dynamic topic analysis (DTA). DTA was developed by Dr. Susan Herring, and more information about the method can be found in one of her papers published in 2003. This method of visualizing DTA data was created by Andrew Kurtz and Dr. Susan Herring. Andrew’s original Java tool isn’t working for me anymore, so I developed the graphs in this video using Excel macros that generate R scripts (which has been an adventure because this is the first time I’ve used R).

The music in this video is Tutto L’Amor Perduto by Giorgio Costantini. It is available under a Creative Commons license at BeatPick.com.

I created this and publicly posted it for several reasons. First, this really is a first draft and I would love feedback so I can improve it. I’ve already noticed a few small mistakes that will be corrected in the next version. I’ve also received some feedback and suggestions for possible improvements. If you have some, please let me know!

Second, I hope that some in the #sachat community find even this very rough first draft interesting, informative, and possibly even useful. Even though I’m comfortable studying a group that is so very public with their actions and membership, I still believe that I should give back to that community in ways that are appropriate and helpful. It just seems like a nice thing to do and it’s a small way of showing my appreciation to them.

Finally, I’m interested in seeing if there is interest in helping me continue this kind of work. One of the reasons why this is only a rough draft is that I’m the only one who has analyzed these data. DTA is a specialized form of content analysis and, like any content analysis, it should be performed by multiple coders to ensure the codes are being applied consistently (which is why good content analysis studies report interrater reliability figures to bolster the credibility of the findings). This analysis – and I expect it will hold up well once other coders are added – shows that this particular use of Twitter is a moderator-led discussion with coherent threads. I need to analyze a few other #sachat sessions to ensure this is consistent across sessions. I also need to analyze some other Twitter data so I have useful points of comparison.
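
(For those unfamiliar with interrater reliability: once a second coder has coded the same messages, agreement can be summarized with a statistic such as Cohen’s kappa. A minimal sketch, with hypothetical code labels:)

```python
# Minimal sketch: Cohen's kappa for two coders applying the same codes to the
# same messages. The labels below are hypothetical, purely for illustration.
from sklearn.metrics import cohen_kappa_score

coder_a = ["on-topic", "drift", "on-topic", "break", "on-topic"]
coder_b = ["on-topic", "drift", "drift",    "break", "on-topic"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"kappa = {kappa:.2f}")  # 1.0 is perfect agreement; 0 is chance-level
```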

I think this use of Twitter is fairly unusual and it would be great to be able to publicly discuss that with confidence. This is a wonderful example of a group of people using a very limited tool to do very good things that transcend (my) expectations and it should be represented in the research literature.

CFP for Articles About Technology and Greek Life

The editors of Oracle: The Research Journal of the Association of Fraternity and Sorority Advisors are putting together an issue dedicated to “empirical research on technology.” Examples of such research may include:

  • Technology’s effect on fraternity/sorority recruitment
  • Studies regarding the ways alumni(ae) connect online
  • Relationship of technology use and fraternity/sorority involvement
  • Impact of e-mail, Twitter, Facebook, and other social networks on Greek organizations

Kim Nehls, Executive Director of ASHE and Visiting Assistant Professor at UNLV, is the guest editor of this issue. Please contact her at kim.nehls@unlv.edu if you’d like to contribute to this issue or have questions.

Coverage and Prominence of U.S. College and University Wikipedia Articles

A colleague and I are presenting a paper at ASHE in a few months discussing the content of Wikipedia college and university articles.  The most common comment the reviewers made about our paper proposal was that we did not quite answer the “So what?” question.  In other words, we didn’t quite convince them that our topic is important and interesting.  Part of the answer lies in convincing you that U.S. college and university Wikipedia articles are (a) very common and (b) very popular.

First, let’s see how common U.S. college and university Wikipedia articles are.  To do this, I need to figure out how many institutions have a Wikipedia article.  I randomly selected 10% (732 units) of the 2008 IPEDS universe, a listing of every Title-IV-participating institution (i.e., virtually every accredited institution in the United States and its territories).  I then checked to see if these units have Wikipedia articles.  Broken down by sector and control and ignoring the handful of system offices and unclassified institutions pulled into the sample, here is what I found:

Table 1: Coverage of Wikipedia Articles

                          Less than 2-year    2-year     4-year       All
Public                              20.69%    87.16%    100.00%    82.04%
Private not-for-profit               9.09%    31.25%     91.28%    81.91%
Private for-profit                  13.75%    40.21%     85.96%    35.03%
All                                 14.50%    62.61%     92.26%    61.47%

Considering that most people in the U.S. think of 4-year institutions when they think of “college” or “university,” Table 1 shows us that it’s fair to say that college and university Wikipedia articles are very common.  Not only are they ubiquitous for public 4-year institutions, they’re very common for private 4-year institutions and community colleges.  The primary types of institutions for which they are uncommon are private 2-year institutions and all types of less than 2-year institutions, institutions typically associated with specialized technical training and usually omitted when talking about colleges and universities.
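
(For anyone who wants to replicate or extend this coverage check, here’s a minimal sketch of how the article lookup could be automated against the MediaWiki API.  The helper function is mine, purely for illustration; it isn’t necessarily how I performed the checks.)

```python
# Minimal sketch: does a given title have an English Wikipedia article?
# Illustrative only; the helper name is hypothetical.
import requests

def has_wikipedia_article(title):
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "query", "titles": title, "format": "json"},
        headers={"User-Agent": "coverage-check-sketch"},
    )
    pages = resp.json()["query"]["pages"]
    # The API files missing titles under the sentinel page id "-1".
    return "-1" not in pages

print(has_wikipedia_article("Indiana University"))  # expect True
```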

Next, we need to figure out the popularity of U.S. college and university Wikipedia articles.  In this context, I am defining “popular” by examining where the top three search engines – Google, Yahoo!, and Bing – place U.S. college and university Wikipedia articles.  To do this, I selected a random sample of these Wikipedia articles; the sample is also stratified, including 12 articles from each major quality classification assigned by WikiProject Universities (Featured, Good/A, B, C, Start, and Stub).
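
(A minimal sketch of that stratified draw, using stand-in article lists since the real ones come from the WikiProject’s assessment categories:)

```python
# Minimal sketch: 12 articles sampled from each quality class.
# "articles_by_class" is a hypothetical stand-in for the real assessment lists.
import random

quality_classes = ["Featured", "Good/A", "B", "C", "Start", "Stub"]
articles_by_class = {cls: [f"{cls} article {i}" for i in range(50)]
                     for cls in quality_classes}

sample = {cls: random.sample(articles_by_class[cls], k=12)
          for cls in quality_classes}
print({cls: len(titles) for cls, titles in sample.items()})
```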

Table 2: Search Engine Placement

                                    Google    Yahoo!    Bing
Average placement                      6.9       2.3     2.3
Percentage first unofficial link       79%       96%     96%

As shown in Table 2, when you search for these institutions in each of the three leading search engines, Wikipedia articles are not only among the very first results but they’re usually the first result that isn’t controlled by the institution.  Google seemed to struggle with providing accurate results for the institutions that do not have unique names (e.g. Southwestern College, Sierra College), listing several other similarly-named institutions above the Wikipedia article.  Yahoo! and Bing did not have this problem, almost always listing the Wikipedia article immediately after the institution’s official website (or after the official website and the official athletics website).  Of course, Yahoo! and Bing provided the same results since they use the same search technology.

Based on a random sample of the accredited colleges and universities in the United States, Wikipedia has articles for the majority of institutions.  This is particularly true when considering 2- and 4-year institutions, especially public ones.  Further, those Wikipedia articles are placed very highly in search results, usually immediately following the institution’s official website.  Not only are U.S. college and university Wikipedia articles very common, they’re extremely popular.

(The data are available here:

A few of the spreadsheets are rather large for Google spreadsheets so they’re a bit sluggish.  Sorry!)

Framework for Understanding Historical View of Housing Technology

(This is largely a note for myself.  I had an epiphany while showering this morning and I don’t want to forget it!)

I haven’t touched it for a while but for a few years I’ve been working on historical research focused on entertainment and communications technologies in American college and university residence halls. As is often the case, I began this research because the topic interested me; I gave only superficial thought to practical applications and implications. In other words, I did it only because I liked it and it interested me. But that won’t convince others to care about this research, to listen to me discuss it, or to let me publish it.

This morning I finally found my hook. This will be the first time I’ve written it down so let’s see how it looks in print:

Understanding the history of entertainment and communications technologies in residence halls provides us with a means for understanding the tapestry of forces that have shaped not only residence halls but academia in the United States. These technologies provide rich examples of innovations motivated by economic competitiveness, cultural expectations, and academic experimentation.

Not only does this provide me with a much-needed organizational framework for this work but it also provides others with a motivation for understanding and supporting this historical research.

Student and Faculty Use of Technology

(This is a very brief summary of a paper a colleague and I presented on Monday, March 31, at the AIR Forum in Chicago, IL.  Both the paper and the presentation are available on the NSSE website; please consult the paper or contact us for more detailed information.  We hope to further develop this paper and submit it for publication very soon so your comments and questions are very much appreciated!)

In the paper Allison BrckaLorenz and I presented earlier this week, we used data collected with the 2009 administrations of the National Survey of Student Engagement (NSSE) and Faculty Survey of Student Engagement (FSSE) to examine how often these two populations – students and faculty – use academic technologies.  We added several questions about technology to the surveys administered to some institutions.  In this paper, we examined the responses to those additional questions from senior undergraduate students and faculty who teach them at 18 institutions that participated in both NSSE and FSSE.  Specifically, we (a) compared student and faculty responses and (b) explored responses across academic disciplines.  However, to keep this blog post a manageable and readable length, I will omit most of the discussion of disciplinary differences; I encourage you to read the full paper if you are interested in those findings.

The survey question on which we focused was multi-part and asked respondents how frequently (Very often, Often, Sometimes, or Never) they used some academic technologies:

  1. Course management systems (WebCT, Blackboard, Desire2Learn, Sakai, etc.)
  2. Student response systems (clickers, wireless learning calculator systems, etc.)
  3. Online portfolios
  4. Blogs
  5. Collaborative editing software (Wikis, Google Docs, etc.)
  6. Online student video projects (using YouTube, Google Video, etc.)
  7. Video games, simulations, or virtual worlds (Ayiti, EleMental, Second Life, Civilization, etc.)
  8. Online survey tools (SurveyMonkey, Zoomerang, etc.)
  9. Videoconferencing or Internet phone chat (Skype, TeamSpeak, etc.)
  10. Plagiarism detection tools (Turnitin, DOC Cop, etc.)

The average responses to this question are shown in the figure below.

A few things are apparent from this figure and the responses that it displays.  First, course management systems (CMSs) are the only technology that students and faculty really use; most respondents never used the other technologies.  Second, except for plagiarism detection tools, students reported more frequent use of these technologies than faculty.  This is particularly noticeable for collaborative editing software, a technology that students probably use outside of class to collaborate much more often than they use it during class or when specifically assigned to use it.

Another way to make sense of these survey responses is to use cluster analysis to group respondents together.  For students, a 4-means cluster analysis made the most sense:

The students in the High and Medium Use clusters used multiple technologies with relatively high or medium frequency (remember that most students never used most of these technologies).  Students in the Low Use cluster only used CMSs.  And students in the No Use category didn’t really use any of these academic technologies.

A 3-means cluster analysis was most appropriate for the faculty respondents:

As with the student clusters, faculty in the High Use cluster used multiple technologies with some frequency.  Faculty in the Low Use cluster only used CMSs and faculty in the No Use category didn’t really use any of these academic technologies.
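
(For readers unfamiliar with the technique: k-means partitions respondents into k groups with similar response profiles.  Here’s a minimal sketch on fabricated data; our actual analysis used the NSSE/FSSE responses, not these made-up numbers.)

```python
# Minimal sketch: k-means clustering of frequency-of-use responses.
# Data are fabricated: responses coded 0 (Never) through 3 (Very often)
# for each of the ten technologies listed above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
responses = rng.integers(0, 4, size=(500, 10))  # 500 respondents x 10 items

# k=4 matched the student responses; k=3 worked best for faculty.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(responses)

print(kmeans.cluster_centers_.round(2))  # typical usage profile per cluster
print(np.bincount(kmeans.labels_))       # cluster sizes
```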

From these figures, it is clear that most students and faculty are making little use of academic technologies except for course management systems like Blackboard and Sakai.  Given the resources campuses have invested in these particular technologies, it is probably good that faculty and students are making frequent use of them.  However, we stop short of making a subjective judgment based on these responses as there are certainly many instances in which technology is neither helpful nor appropriate in classwork and assignments.  A better approach – one that may be impossible using self-administered surveys – would be to understand not just how often students and faculty use technologies but how often they use them appropriately and well.

More interestingly, students reported higher usage of these academic technologies than faculty.  Most likely, these technologies are not required by faculty but are used by students on their own initiative to complete their work and to collaborate and communicate with one another.  The differences between student and faculty responses might be an artifact of the methodology of this study.  But they are probably real and point to genuine differences in how frequently students and faculty use these and other technologies, differences that may create tension between these two groups.

Current and Upcoming Projects

(I started to write an e-mail to some colleagues outlining my current and upcoming projects and the e-mail was getting a bit long.  So I’m writing it all out here as perhaps some of you will be interested in one or more of these projects.)

Here are my current and upcoming projects, listed in no particular order…

  • Continue editing and submit for publication (EDUCAUSE Quarterly?) the paper (A Comparison of Student and Faculty Academic Technology Use Across Disciplines) I just presented with Allison BrckaLorenz at the AIR Forum.
  • Finish preparing for my ResNet 2010 assessment preconference session.
  • Continue working with the ResNet 2010 hosts to schedule and conduct attendee focus groups to supplement the survey data we recently collected regarding the current state and future direction of the ResNet organization.
  • Two potential AERA proposals:
    • Discourse analysis of #sachat.  I wrote a solid paper for the discourse analysis class I took in the spring but Rey Junco will be helping me to redo some of the analysis and edit the paper.
    • Historical analysis of student affairs and technology.  I have a solid draft of this paper already done (another class paper) but it’s very long and needs to be edited down to a more manageable, readable length.  Additionally, I’ve recently discovered that we have in the library stacks at Indiana University proceedings from NASPA and ACPA meetings held during the first half of the twentieth century.  I need to spend time in the library with those proceedings as I haven’t yet incorporated them into my study (I didn’t know where I could find them; I certainly didn’t expect to find them at my home institution!).
  • Begin a new project analyzing the demographics of student affairs professionals.  I wanted to use these data in my Twitter research but no one has done this work in 15 years so I’ll have to do it (I hope that I’m wrong and that I simply haven’t found a current or recent source!).
  • Wait to hear back from ASHE to know if our Wikipedia proposal has been accepted.  If so, then we need to do more work on it to update it and get it into shape for the conference later this year.

Of course, I have other things going on and coming up: quals in 2 months, ongoing projects at work, and beginning data collection for my dissertation.  I thought that summer – especially the summer after you finish coursework – was supposed to be quiet and relaxing?

Wikipedia As A Lens Into Public Perception of American Higher Education

A few weeks ago, a colleague (Chris Medrano) and I submitted a paper to the 2010 ASHE conference. The paper is a content analysis of Wikipedia articles covering American colleges and universities.  Chris and I believe that we – higher education scholars, administrators, and policy makers – can learn a lot about what the general public believes is important and interesting in higher education by analyzing Wikipedia articles about individual colleges and universities.

I hope this paper is accepted (otherwise I wouldn’t have submitted it!) but I know it’s a bit “out there.”  Despite my apprehension, I firmly believe that we must be mindful of how the public perceives higher education and the explosion of information available on the Internet provides an incredibly rich source of information if we can figure out how to harness it (In this vein, I am extraordinarily happy and grateful to have had the opportunity to study web content analysis and computer-mediated discourse analysis, giving me some of the necessary background and tools to study these data!).  And given that (almost?) every significant college and university in the United States has a Wikipedia article that (theoretically) lies largely outside the control of the institutions, these articles are a rich source of public opinion.

I know what some of you are thinking: Wikipedia editors don’t represent the general public!  I’m not entirely convinced that is true – especially without data – but I’ll concede the point anyway.  Even if those editing the articles are not representative of the general public, surely we can agree that the information placed in these articles clearly indicates what the general public is learning about these institutions from Wikipedia.  So it’s still important to know what’s going on in these articles.

Since we submitted our paper, Wikipedia articles have gotten another boost in visibility and importance: Facebook is making heavy use of Wikipedia articles in Community pages.  This has already raised a discussion within Wikipedia (full disclosure: I’m one of the participants in the discussion) about the role (or lack thereof) Wikipedia should play given that articles are being displayed in Facebook.  More specifically, at least one institution has objected to the graphic that is being displayed in Facebook.  The topmost graphic in nearly all of these Wikipedia articles is the official seal or crest of the institution.  But most institutions have graphic identity standards that mandate the use of another set of graphics (their “wordmark”) and limit the use of the official seal or crest.  Of course, Wikipedia is not required to honor those standards and it’s pretty clear that fair use allows Wikipedia to use official seals and crests without the permission of the institutions.  This is the kind of interesting complexity about which higher education administrators and scholars should know and in which they should appropriately participate.

Love it or hate it, Wikipedia is an immense force in today’s information societies.  We don’t yet know exactly what role it plays in the college choice process but we can be certain that many people are learning about our institutions via Wikipedia.  We cannot and should not control the information in Wikipedia but we should be aware of it and the communities that create, edit, and even vandalize that information.  And we should be eager to use that information to develop a better understanding of how the public views higher education and our institutions.

[August update: The proposal has been accepted.  I look forward to sharing the final paper here and at ASHE this fall.]