This post is a rehearsal of part of a presentation in which I’m participating in a few weeks at ELI. The presentation is entitled “Using NSSE and FSSE to link technology to student learning and engagement” and I’ll be giving it with one of my colleagues here at Indiana University’s Center for Postsecondary Research, Amy Garver.
The relationship between student engagement and technology is a hot topic right now. The current issue of EDUCAUSE Quarterly focuses on this relationship. Both the National Survey of Student Engagement (NSSE) and the Community College Survey of Student Engagement (CCSSE) have focused on technology. NSSE most recently published technology-related findings in its 2008 and 2009 Annual Results (CCSSE followed suit in 2009), but we’ve poked at this topic several times in the past ten years.
In general, every time we’ve examined this relationship we find it to be positive. The relationship isn’t always terribly strong but it’s positive and significant*. More importantly, this relationship appears to persist no matter what we throw into the mix. We’ve tried many different things (“controls”) to see if there is something tricky going on, such as a complex relationship with other variables. For example, it’s possible that students from more affluent backgrounds both use technology more often and score higher on our measurements of engagement because they had better schooling. But that doesn’t appear to be the case. At the moment, however, it appears simply that “technology is good.”
That conclusion is neither satisfying nor likely. It’s not satisfying because it is shallow and not at all explanatory (e.g. it doesn’t tell us what it is about “technology” that encourages more engagement and better learning). It’s not likely because several decades of research have told us that it doesn’t matter which medium we use to deliver education (Clark, Yates, Early, & Moulton (2009), available as a pre-print, is an excellent overview of this body of research).
So if we don’t accept the overly simple statement that “technology is good,” what do we do? We did two things. First, we focused on some specific technologies so we could move beyond broad conceptions of technology and look at some tools currently in use. Despite the excellent research telling us that technology itself should not have an impact, we must keep an open mind and explore that possibility, especially as technology advances and becomes more complex and ubiquitous. Second, we asked faculty participating in the Faculty Survey of Student Engagement (FSSE) a set of questions nearly identical to the one we were asking the students participating in NSSE in the spring of 2009. We even convinced 18 institutions to administer both sets of questions! We wanted to draw faculty directly into the mix because the most likely explanation for our repeated finding that “technology is good” is that use of technology is associated with good teaching. (That hypothesis also arose, tentatively, from one of our studies of distance learners, a study that didn’t do much to cut through the clutter despite its sophisticated methodology.)
We presented some of our results at POD’s 2009 conference in Houston. As mentioned above, we’ll be presenting some more at ELI’s 2010 conference in Austin. And we’ll be presenting again at AIR in Chicago in a few months. These are all different presentations focusing on different aspects of our data. And there is still data we haven’t yet analyzed and presented!
I’m sorry that I haven’t given you any answers in this blog post. We’re still working to find them, and so far it’s been devilishly difficult. It’s probably hard for us because our tools – voluntary, self-administered surveys given to very large groups of students and faculty – are blunt instruments with limited capabilities. And every answer we find raises more questions. But it’s clear that there is a positive relationship between student use of technology and student engagement, even if the relationship is more complex than it appears on the surface.
* – Statistical significance is tricky for us. Our data sets are enormous, and since significance is sensitive to sample size, a whole lot of things come out significant. So we often turn to other measures, such as effect size and other contextual indicators, to make sense of our data.
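A small sketch of why sample size matters here — the scale, means, and group sizes below are invented for illustration, not our actual data. The effect size (Cohen’s d, the standardized mean difference) stays fixed no matter how many students you survey, while the test statistic grows with the square root of the sample size, so even a trivial difference eventually clears the significance bar:

```python
import math

def cohens_d(mean1, mean2, pooled_sd):
    """Effect size: standardized mean difference; does not depend on n."""
    return (mean1 - mean2) / pooled_sd

def z_statistic(mean1, mean2, pooled_sd, n_per_group):
    """Two-group z statistic; grows with sqrt(n) for a fixed difference."""
    se = pooled_sd * math.sqrt(2.0 / n_per_group)
    return (mean1 - mean2) / se

# A one-point difference on a hypothetical 0-100 engagement scale:
m1, m2, sd = 51.0, 50.0, 10.0
d = cohens_d(m1, m2, sd)  # d = 0.10, conventionally a trivial effect

for n in (100, 10_000, 100_000):
    print(f"n per group = {n:>7}: d = {d:.2f}, z = {z_statistic(m1, m2, sd, n):.2f}")
```

With 100 students per group the difference is nowhere near significant; with 100,000 per group the z statistic is huge even though the effect itself hasn’t changed — which is exactly why we lean on effect sizes rather than p-values alone.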