Student Engagement Infographic

[Infographic: University of Delaware senior student participation in selected high-impact practices, from NSSE 2011.  Links to the full-size graphic.]

Last week, my colleagues and I presented the final UD First Friday Roundtable on Teaching of this semester.  We focused on “student engagement,” specifically naming the session “What Does an Engaged UD Student Look Like?”  It was a good session with lots of great discussion but right now I want to narrowly and briefly focus on two graphics that we whipped up for our supporting materials.

The first image in which you might be interested is a simple infographic showing University of Delaware student participation in some high-impact practices as extrapolated from NSSE 2011 responses.  The image to the right shows one part of the entire image to give you an idea what it looks like.  This extract from the full-size image links to a larger version of the full infographic; some of the text is too small to read even in that large image so you can also download the full-size PDF.  It worked out quite well as a full-size poster and I also modified it to work as a handout for attendees.  It’s not bad for the amount of time I had to put into it although I would have liked to have done a lot more and a lot better.

For each of the six selected high-impact practices, I included not only the overall percentage of senior students who reported participating in them but also the subgroups for which there were significant (p ≤ .05) differences.  I looked at differences between students of different genders, white and non-white students, students in STEM and non-STEM disciplines, and first-generation and non-first-generation students.  If I had more time, I would have loved to create another set of graphics illustrating the impact of these practices, or of some broader measure of student engagement, on self-reported GPA and gains – especially if these data showed what the national data tend to show: that these activities, and engagement overall, sometimes have more impact on some kinds of students than on others.
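
For the statistically inclined: each of those subgroup comparisons boils down to a simple two-group test of proportions.  Here is a minimal sketch in Python of that kind of test – the counts are invented for illustration, and this is not the exact procedure I ran against the NSSE data:

    # Sketch: do two subgroups differ in participation in a high-impact
    # practice? The counts below are hypothetical, not NSSE data.
    from statsmodels.stats.proportion import proportions_ztest

    participated = [180, 140]  # e.g., first-generation, non-first-generation
    respondents = [600, 400]   # subgroup sizes

    stat, p_value = proportions_ztest(count=participated, nobs=respondents)
    rate_a, rate_b = (k / n for k, n in zip(participated, respondents))
    print(f"{rate_a:.1%} vs. {rate_b:.1%}, z = {stat:.2f}, p = {p_value:.4f}")
    if p_value <= 0.05:
        print("Difference is significant at p <= .05")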

[Image: 2013-05-03 Engagement Roundtable word cloud]  The second image is a simple word cloud we used on some of our materials such as the agenda and signs.  I know that word clouds are passé (or maybe I’m the only one who thinks so) but this was a really simple and quick image for us to create.  Just as important, it was closely tied to the topic of the event: the input text was one of the primary resources we used to develop and think about the event – Kuh’s 2008 AAC&U high-impact practices publication* – and we generated the cloud using Tagxedo.
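
Tagxedo is a point-and-click web tool, but if you’d rather script something similar, a few lines of Python with the wordcloud package will get you most of the way there.  A rough sketch, assuming you’ve saved the source text locally (the file name is hypothetical):

    # Sketch: generate a simple word cloud from a source text, roughly
    # approximating what Tagxedo does. kuh_2008.txt is a hypothetical
    # local copy of the input text.
    from wordcloud import WordCloud

    with open("kuh_2008.txt", encoding="utf-8") as f:
        text = f.read()

    cloud = WordCloud(width=1200, height=800,
                      background_color="white").generate(text)
    cloud.to_file("engagement_cloud.png")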

 

* Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. Washington, DC: AAC&U.

Are High Impact Practices Available Online?

I am still wrestling with my unease with MOOCs and I think I’ve finally figured out why: High impact educational practices, as we understand them today, are unlikely at best and impossible at worst in MOOCs and other similar online environments.

First, it’s helpful to understand that “high impact practice” (HIP) is a term of art.  Although the phrase sounds generic, in the past ten years or so it has taken on special significance in U.S. higher education.  Popularized by George Kuh and emerging partly from research using data from the National Survey of Student Engagement (NSSE), the phrase has come to mean a particular set of activities that many higher education researchers believe are especially effective in promoting important and lasting changes in undergraduate students: First-Year Seminars and Experiences, Common Intellectual Experiences (i.e., core curricula), Learning Communities, Writing-Intensive Courses, Collaborative Assignments and Projects, Undergraduate Research, Diversity/Global Learning, Service Learning and Community-Based Learning, Internships, and Capstone Courses and Projects.

Unfortunately, we sometimes place too much focus on these particular activities without understanding why these activities have a high impact.  As originally described by Kuh in 2007, these practices share six characteristics:

  1. HIPs “demand that students devote considerable amounts of time and effort to purposeful tasks” (p. 7)
  2. HIPs place students in circumstances that require they “interact with faculty and peers about substantive matter” (p. 7)
  3. HIPs greatly increase the likelihood that students will interact with people who are different from themselves
  4. HIPs provide students with very frequent – sometimes continuous – feedback from faculty and peers
  5. HIPs require students to operate in intellectually complex ways by connecting knowledge from different courses and applying it in different contexts, e.g., confronting complex real-world issues or investigating unfamiliar research problems
  6. HIPs occur in the context of a “coherent, academically challenging curriculum” (p. 8)

I am particularly interested in focusing on these characteristics of high impact practices as I will be helping lead a discussion on my campus next month focused on student engagement.  Most of the participants will be faculty and much of our attention will be on activities that faculty are using or can use in their curricula to promote student engagement.  Given that audience, I don’t think it would be helpful to dwell on the specific activities identified as HIPs, as those are often beyond the resources and purview of an individual faculty member.  Instead, we will focus on why those activities have a high impact so we can apply those principles to the activities within the power and resources of individual faculty members.

That is what was at the forefront of my mind when I “attended” an EDUCAUSE Learning Initiative (ELI) online conference last week that focused on MOOCs.  The conference featured some very active discussions, and as I took part in them it occurred to me that one of the primary reasons I am uncomfortable with MOOCs is that it is difficult or impossible to apply much of what we know about good teaching in that environment.

Look back up at those six principles of high impact practices.  How do we apply those principles in a MOOC?  More pointedly, can we apply those principles in a MOOC?  I despair that the answer is mostly “no.”  I pray that it is a simple lack of imagination on my part, a misunderstanding of what we can do in a MOOC, or that this is a fatal flaw of the dominant MOOC model that others will quickly recognize and either fix or use as a reason to abandon that model.  I also confess that I don’t completely understand all of the discussions about “xMOOCs” and “cMOOCs” on anything but a very theoretical and abstract level, and I have a sneaking suspicion that I’m missing something very important in how cMOOCs address some of these principles.

There is another interesting and hopeful way to think about this.  Another ELI conference attendee – I’m sorry that I don’t remember who – suggested that there may be other paradigms of effective educational practices that MOOCs might better fit.  Although I am a little bit skeptical that our understanding of effective education is going to be radically upended, this recommendation not to be too constrained by our current thinking is a very good one.  In fact, that is one important reason why I will be trying to steer our discussion here on my campus next month away from the specific activities and toward the broader principles so we can compare our thinking about student engagement with that of others.  The idea isn’t to impose the model on my campus but to use it as a common starting point that must be adapted to our unique needs and resources.

That, of course, is what we’ll need to do with MOOCs: Use our best understanding of effective teaching and shape it to this unique environment with unique affordances.  I don’t know how to do that and I don’t know if that is what is being done.  I am wary that much of what is being done is not methodical and not built on what we know about how people learn.  I am especially skeptical that we can provide the kind of demanding and socially and intellectually connected experiences that we know provide some of the best learning.  I hope that people smarter than I are figuring this out, though, and working out how MOOCs can provide high impact educational practices.

New NSSE Survey and Technology Questions

I’m super excited that my colleagues have finally made the new version of the National Survey of Student Engagement (NSSE) publicly available!  We’ve spent a lot of time working on this over the past 3-4 years, including focus groups, interviews, two pilot administrations, tons of literature review and data analysis, (seemingly) thousands of meetings, and many other events and undertakings.  I’ve been incredibly lucky to have been part of this process from nearly the beginning as I’ve learned a lot about survey development and project management.  I’m leaving NSSE at the end of the month, so although I won’t be here when the new survey is administered next spring, I’m happy to have been here to see the final version.

I’m particularly excited that the technology module (optional set of questions) has made it through all of our testing and will be part of the new survey.  There are other cool modules but this one has been my baby for over two years.  My colleagues here at NSSE – Allison and Heather – and my colleagues at EDUCAUSE – Eden and Pam – have been wonderful collaborators and I hope that they have had half as much fun and fulfillment working on these questions as I did.  It’s poignant to have spent so much time on this project only to hand it off to others just as it sees the light of day, but I know it’s in good hands.  I am very hopeful that a significant number of institutions will choose to use this module so we can continue to add to what we know about the role and impact of technology in U.S. and Canadian higher education.

Throughout all of this, I’ve remained especially thankful to have been so involved in the development of this new survey as a graduate student.  Although I work half as many hours as the full-time doctorate-possessing research analysts, they have been very open about allowing me to be involved and never shied away from adding me to projects and giving me significant responsibilities.  I was never treated as “just a grad student” or even a junior colleague, only as one who worked fewer hours and had some different responsibilities.  Consequently, I had genuine responsibilities and made significant, meaningful contributions; I can honestly point to the survey and see my own fingerprints on some parts of it!  When I speak about meaningful educational experiences in the future, I’ll certainly think of this one as an excellent example.  And I will work to ensure that my students and colleagues can have similar experiences that allow them to learn, grow, and meaningfully contribute by performing important work with trust and support.

Media Spin and Attention Grabbing Headlines

The Washington Post published a story yesterday describing some research that says that college students today study less than college students in the past.   The story is largely based on a tiny bit of NSSE data that we first published several months ago describing self-reported time spent studying as it differs across majors.  At the moment, I’m less interested in the data and more interested in how it’s being reported and described.

First, I’m a bit amused that this is suddenly a hot topic given that the information was released 6 months ago.  In fact, it was covered very prominently in November by little-known websites like the New York Times, USA Today, and Chronicle of Higher Education.  I don’t know why the Post decided to write a story about this now (I suspect it has to do with an upcoming conference of higher education researchers, a conference heavily attended by my NSSE colleagues and one at which we frequently present new research).  But it’s amusing and informative that one story written by the Washington Post has set off a flood of blog posts and “news stories” about something that is old news.  Yes, I know that it’s still interesting and pertinent information but this seems to reinforce the sad fact that many blogs and “news sites” are very dependent on traditional media for content, even when that content has been available for months.

Second, I’m amused and saddened by the headlines that people are using to describe this research.  I know that many of the websites listed below are second- or third-rate and use headlines like these just to get attention (which drives up traffic and ad revenue – and which makes me a bit ashamed to be adding to their traffic and ad revenue!) but it still makes me sad.  Some examples:

  1. “Is college too easy? As study time falls, debate rises.”  This is the original Washington Post article.  It has a fairly well-balanced headline.  It’s not over-the-top and it even notes that the issue is not settled as people debate it.
  2. “Is College Hard? Students Are Studying Less, Says Survey”  The Huffington Post’s headline isn’t too far from the one used by the Washington Post.  Although I loathe the Huffington Post and how the vast majority of its content is blatantly derivative and unoriginal, this is a decent little summary of the Washington Post article and an alright headline.
  3. “Laid-Back Higher Ed”  This is how The Innovation Files describes the Washington Post article and the research it covers.  Not horrible but not very good either.  At least it’s not as bad as…
  4. “Fun Time Is Replacing Study Time in College”  I don’t know anything about FlaglerLive.com but based on this ridiculous and inaccurate headline and blog post I won’t be spending any time there.  I’m particularly impressed by the figure that they copied directly out of the NSSE 2011 Annual Results that they claim is “© FlaglerLive.”  Classy.

 

Thumbs Down for CBS News NSSE Article

There are many different angles one could take in reporting on the 2011 NSSE Annual Results; it’s a dense 50-page report. I know that every group has its own agenda and every reporter has his or her own personal interests but it’s very disappointing that CBS News chose the snide headline “Business majors: College’s worst slackers?” for their article. In an ordered list, something must be last. In this case, some major must rank last in the number of hours students typically study each week. But to label that group of students “slackers” simply because they fall at the bottom of the list is unnecessarily mean and unprofessional.

Fun Time of Year: NSSE Annual Results Released

The 2011 NSSE Annual Results were released today. I don’t want to focus on the content of the report in this blog post. Instead, I am briefly noting how fun it is to work on a project with a large impact that regularly receives attention from the press (even if some of the attention is sometimes negative, a very interesting experience itself). It’s gotten more fun each year as I’ve become more involved in much of what we do; this year I directly contributed by writing part of the report itself. Yes, it’s ego-boosting to see my work in print but more importantly it helps address a very serious and difficult problem that vexes many researchers and administrators in higher education: It’s hard to explain to others, especially our parents and extended families, what we do. Instead of trying to convince them that I really have graduated (several times!) and am not wasting my whole life in college, I can send them the report and articles from the New York Times and USA Today and say, “Look – this is what I do!”

Now I get to watch media reports and subsequent discussions to see how they play out and what they will emphasize. This process is unpredictable and it has surprised me in previous years when relatively small bits of information have caught on to the exclusion of other interesting and important information. As The Chronicle of Higher Education notes, this year may be a bit different given recent events but who knows how things will play out.

Student Engagement and Technology

This post is a rehearsal of part of a presentation in which I’m participating in a few weeks at ELI.  The presentation is entitled “Using NSSE and FSSE to link technology to student learning and engagement” and I’ll be giving it with one of my colleagues here at Indiana University’s Center for Postsecondary Research, Amy Garver.

The relationship between student engagement and technology is a hot topic right now.  The current issue of EDUCAUSE Quarterly focuses on this relationship.  Both the National Survey of Student Engagement (NSSE) and the Community College Survey of Student Engagement (CCSSE) have focused on technology.  NSSE most recently published technology-related findings in its 2008 and 2009 Annual Results (CCSSE followed suit in 2009) but we’ve poked at this topic several times in the past ten years.

In general, every time we’ve examined this relationship we find it to be positive.  The relationship isn’t always terribly strong but it’s positive and significant*.  More importantly, this relationship appears to persist no matter what we throw into the mix.  We’ve tried many different things (“controls”) to see if there is something tricky going on, such as a complex relationship with other variables.  For example, it’s possible that students from more affluent backgrounds both use technology more often and score higher on our measurements of engagement because they had better schooling.  But that doesn’t appear to be the case.  At the moment, however, it appears simply that “technology is good.”
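
To make the “controls” idea concrete, here is a minimal sketch of the kind of regression that checks whether the technology–engagement relationship survives once background variables enter the model.  The file and column names are hypothetical, and this is not our actual model:

    # Sketch: does technology use still predict engagement after
    # controlling for background characteristics? File and column
    # names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_responses.csv")

    # If the tech_use coefficient stays positive and significant with
    # the controls included, the relationship isn't explained away by
    # affluence or prior schooling.
    model = smf.ols("engagement ~ tech_use + parent_education + hs_grades",
                    data=df).fit()
    print(model.summary())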

That conclusion is neither satisfying nor likely.  It’s not satisfying because it seems very shallow and not at all explanatory (e.g., it doesn’t tell us what it is about “technology” that encourages more engagement and better learning).  It’s not likely because several decades of research have told us that it doesn’t matter which medium we use to deliver education (Clark, Yates, Early, & Moulton (2009), available as a pre-print, is an excellent overview of this body of research).

So if we don’t accept the overly simple statement that “technology is good,” what do we do?  We did two things.  First, we focused on some specific technologies so we could move beyond broad conceptions of technology and look at some tools currently in use.  Despite the excellent research that tells us that technology itself should not have an impact, we must keep an open mind and explore that possibility, especially as technology advances and becomes more complex and ubiquitous.  Second, we asked faculty participating in the Faculty Survey of Student Engagement (FSSE) a set of questions nearly identical to those we asked the students participating in NSSE in the spring of 2009.  We even convinced 18 institutions to administer both sets of questions!  We wanted to draw faculty directly into the mix because the most likely explanation for our repeated finding of “technology is good” is that use of technology is associated with good teaching.  (That hypothesis also seemed to tentatively arise from one of our studies of distance learners, a study that didn’t seem to do much to cut through the clutter despite using sophisticated methodology.)
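
As a concrete (and entirely hypothetical) illustration of what pairing the two surveys lets you do: with student and faculty responses from the same institutions, you can line up the two groups’ answers to the matched items.  The file and column names below are invented:

    # Sketch: compare matched NSSE (student) and FSSE (faculty) items at
    # the institutions that administered both question sets. File and
    # column names are hypothetical.
    import pandas as pd

    students = pd.read_csv("nsse_tech_items.csv")  # one row per student
    faculty = pd.read_csv("fsse_tech_items.csv")   # one row per faculty member

    # Average the matched item within each institution, then pair them up.
    stu_means = students.groupby("institution_id")["tech_use"].mean()
    fac_means = faculty.groupby("institution_id")["tech_use"].mean()
    paired = pd.concat({"students": stu_means, "faculty": fac_means},
                       axis=1).dropna()

    # A positive correlation is consistent with the "technology travels
    # with good teaching" hypothesis, though it hardly proves it.
    print(paired.corr())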

We presented some of our results at POD’s 2009 conference in Houston.  As mentioned above, we’ll be presenting some more at ELI’s 2010 conference in Austin.  And we’ll be presenting again at AIR in Chicago in a few months.  These are all different presentations focusing on different aspects of our data.  And there is still data we haven’t yet analyzed and presented!

I’m sorry that I haven’t given you any answers in this blog post.  We’re still working to find them and so far it’s been devilishly difficult.  It’s probably hard for us because our tools – voluntary, self-administered surveys administered to massively large groups of students and faculty – are blunt instruments with limited capabilities.  And every answer we find raises more questions.  But it’s clear that there is a positive relationship between student use of technology and student engagement, even if the relationship is more complex than it appears on the surface.

* – Statistical significance is tricky for us.  Our data sets are enormously large and, since significance is sensitive to sample size, a whole lot of things turn out to be significant.  So we often turn to other measures such as effect size and other contextual indicators to make sense of our data.
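
To illustrate with simulated numbers: with tens of thousands of respondents, even a trivial difference clears p < .05, which is why an effect size such as Cohen’s d is a better guide.  A quick sketch:

    # Sketch: with very large samples, a tiny difference is "significant"
    # even though its effect size is negligible. Data are simulated.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    a = rng.normal(50.0, 10.0, 50_000)  # group A engagement scores
    b = rng.normal(50.3, 10.0, 50_000)  # group B, barely different

    t, p = ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (b.mean() - a.mean()) / pooled_sd  # Cohen's d

    print(f"p = {p:.2e} (significant), Cohen's d = {d:.3f} (trivial)")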