New NSSE Survey and Technology Questions

I’m super excited that my colleagues have finally made the new version of the National Survey of Student Engagement (NSSE) publicly available!  We’ve spent a lot of time working on this over the past 3-4 years, including focus groups, interviews, two pilot administrations, tons of literature review and data analysis, (seemingly) thousands of meetings, and many other events and undertakings.  I’ve been incredibly lucky to have been part of this process from nearly the beginning, and I’ve learned a lot about survey development and project management.  I’m leaving NSSE at the end of the month, so although I won’t be here when the new survey is administered next spring, I’m still happy to have been here to see the final version.

I’m particularly excited that the technology module (optional set of questions) has made it through all of our testing and will be part of the new survey.  There are other cool modules but this one has been my baby for over two years.  My colleagues here at NSSE – Allison and Heather – and my colleagues at EDUCAUSE – Eden and Pam – have been wonderful collaborators and I hope that they have had half as much fun and fulfillment working on these questions as I did.  It’s poignant to have spent so much time on this project only to hand it off to others just as it sees the light of day, but I know it’s in good hands.  I am very hopeful that a significant number of institutions will choose to use this module and that we will continue to add to what we know about the role and impact of technology in U.S. and Canadian higher education.

Throughout all of this, I’ve remained especially thankful to have been so involved in the development of this new survey as a graduate student.  Although I work half as many hours as the full-time, doctorate-holding research analysts, they have been very open about allowing me to be involved and never shied away from adding me to projects and giving me significant responsibilities.  I was never treated as “just a grad student” or a junior colleague; I was simply a colleague who worked fewer hours and had somewhat different responsibilities.  Consequently, I had genuine responsibilities and made significant, meaningful contributions; I can honestly point to the survey and see my own fingerprints on some parts of it!  When I speak about meaningful educational experiences in the future, I’ll certainly think of this one as an excellent example.  And I will work to ensure that my students and colleagues can have similar experiences that allow them to learn, grow, and meaningfully contribute by performing important work with trust and support.

Media Spin and Attention-Grabbing Headlines

The Washington Post published a story yesterday describing some research that says that college students today study less than college students in the past.   The story is largely based on a tiny bit of NSSE data that we first published several months ago describing self-reported time spent studying as it differs across majors.  At the moment, I’m less interested in the data and more interested in how it’s being reported and described.

First, I’m a bit amused that this is suddenly a hot topic given that the information was released 6 months ago.  In fact, it was covered very prominently in November by little-known websites like the New York Times, USA Today, and Chronicle of Higher Education.  I don’t know why the Post decided to write a story about this now (I suspect it has to do with an upcoming conference of higher education researchers, a conference heavily attended by my NSSE colleagues and one at which we frequently present new research).  But it’s amusing and informative that one story written by the Washington Post has set off a flood of blog posts and “news stories” about something that is old news.  Yes, I know that it’s still interesting and pertinent information but this seems to reinforce the sad fact that many blogs and “news sites” are very dependent on traditional media for content, even when that content has been available for months.

Second, I’m amused and saddened by the headlines that people are using to describe this research.  I know that many of the websites listed below are second- or third-rate and use headlines like these just to get attention (which drives up traffic and ad revenue – and which makes me a bit ashamed to be adding to their traffic and ad revenue!) but it still makes me sad.  Some examples:

  1. “Is college too easy? As study time falls, debate rises.”  This is the original Washington Post article.  It has a fairly well-balanced headline.  It’s not over-the-top and it even notes that the issue is not settled as people debate it.
  2. “Is College Hard? Students Are Studying Less, Says Survey”  The Huffington Post’s headline isn’t too far from the one used by the Washington Post.  Although I loathe the Huffington Post and how the vast majority of its content is blatantly derivative and unoriginal, this is a decent little summary of the Washington Post article and an alright headline.
  3. “Laid-Back Higher Ed”  This is how The Innovation Files describes the Washington Post article and the research it describes.  Not horrible but not very good either.  At least it’s not as bad as…
  4. “Fun Time Is Replacing Study Time in College”  I don’t know anything about FlaglerLive.com but based on this ridiculous and inaccurate headline and blog post I won’t be spending any time there.  I’m particularly impressed by the figure that they copied directly out of the NSSE 2011 Annual Results that they claim is “© FlaglerLive.”  Classy.


Please Step Away From the Infographic!

I’ve tried very hard to be nice but I can’t bite my tongue any longer: Please, stop it with the infographics.  Most of them are bad.  If I were still a bratty 15-year-old, I would dryly say that “I feel dumber for having read that” after seeing most infographics.  But I’ll be more professional and offer some specific criticisms.

Most infographics:

  1. Obliterate nuance and ignore subtleties and differences by carelessly aggregating many different sources of information.  By no means am I opposed to integrating knowledge and synthesizing data from multiple sources!  But it must be done carefully because it’s rare that different studies or sources of data align well.  When it’s done carelessly we can draw false conclusions.  These problems compound as more sources are thoughtlessly tossed together until we’re saying things that we simply don’t know are true.
  2. Don’t tell us where the data come from.  Sure, many infographics have a list of sources at the bottom.  But most of the time that’s all we get: an unordered list that doesn’t tell us which bits of information came from which sources.  I guess that kind of list is better than nothing, but not by much.  This is quite puzzling and frustrating because it seems like such an easy thing to fix.  Infographic designers, please look up “footnotes” and “endnotes” because this is a problem we solved a long time ago.
  3. Don’t need to exist in the first place because the “graphics” add nothing to the “information” being conveyed.  I know that infographics are the hip, new thing (I know they’re neither hip nor new – play along because many people still believe that!) but if your message can be better communicated through a different medium then you’re hurting yourself and impeding your message by forcing it into an unhelpful series of “graphics.”

Of course, I’m not the first one to whine about the infographic plague.  For example, Megan McArdle is spot on when she notes that most infographics are created by hacks who haven’t done any research or produced anything useful but want to convince you that they’re experts so you’ll hire them or buy something from them.  I’m also sure that someone has eviscerated the banal characteristics of the infographic genre (e.g. color palette lifted straight from the early-mid 2000s Web 2.0 explosion, percentage values liberally scattered about in large fonts).

A great (?) example of a terrible infographic is this one recently published by Mashable.  It meets all three of the criteria listed above.  Sadly, most infographics I’ve seen meet at least two if not all three of those criteria.

But not all infographics are terrible.  It’s very simple but this one recently published by Bloomberg is effective and informative.   The infographic that is displayed when you click on the “Cost to students & school” button on the left is ok.  But the bar graphs displayed when you click on the “Conference comparison” button are very informative and useful.

Before you make your next infographic or start passing around a link to an infographic, please consider whether the infographic avoids the three pitfalls listed above.  If it doesn’t, please step away from the infographic!

Item Non-response and Survey Abandonment SPSS Syntax

I don’t often write about what I do in my day-to-day job.  But I’ve recently spent quite a bit of time working on survey item non-response and survey abandonment and I want to save you some time if you’re working on those issues, too.

One of the projects on which I’ve worked over the last couple of years is the development of an updated version of the National Survey of Student Engagement (NSSE) survey instrument. We’ve done a lot – a LOT – of work on this.  As part of this work we’ve pilot tested the draft versions of the new survey.  Some of the many things we’ve analyzed in the pilot data are item non-response and survey abandonment.  I worked on this last year with the first pilot and when I worked on this again with this year’s pilot I got smarter.  Specifically, I wrote an Excel macro that generates the SPSS syntax necessary to analyze item non-response and survey abandonment.

As described in the Excel file, this macro takes a list of survey variable names and creates SPSS syntax that will add several new variables to your SPSS file:

  • An “Abandoned” variable indicating the last question the respondent answered if he or she abandoned the survey. If the respondent didn’t abandon the survey, this variable will be left empty (“SYSMIS”).
  • For every variable, a “SkippedItem__” variable indicating whether the survey item was answered, skipped, or left blank because the survey was abandoned.
  • A “SkippedItems” variable indicating the total number of questions the respondent skipped.
  • A “SkippedPercentage” variable indicating the percentage of questions the respondent skipped.
  • An “AbandonedPercentage” variable indicating the percentage of questions the respondent did not answer because he or she abandoned the survey.

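To give a flavor of the syntax the macro generates, here is a minimal sketch of the two simplest summary variables.  The item names Q1 through Q4 are hypothetical placeholders rather than actual survey variables, and for brevity this ignores the distinction between items that were skipped and items left blank because the survey was abandoned:

  * Hypothetical sketch only; the macro generates a block like this for each instrument version.
  * Q1 TO Q4 stand in for the real item names.
  COUNT SkippedItems = Q1 Q2 Q3 Q4 (SYSMIS).
  COMPUTE SkippedPercentage = 100 * SkippedItems / 4.
  EXECUTE.
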
I created this macro because there were several versions of the pilot instrument.  Because you have to “work backward” through each question to identify respondents who abandoned the survey, each version of the instrument required a different set of SPSS syntax because each version had a different set of survey questions.  So it was much easier for me to write a program that generates the appropriate syntax than to do it by hand multiple times.  Laziness is a virtue.
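
And here is a rough sketch, again with hypothetical items Q1 through Q4, of that “work backward” logic.  The simplifying assumption (mine, for illustration) is that a respondent whose missing responses form an unbroken run at the end of the survey abandoned it, and that “Abandoned” records the position of the last item he or she answered:

  * Hypothetical sketch of the backward check, not the macro’s literal output.
  * Abandoned stays system-missing for respondents who reached the end of the survey.
  NUMERIC Abandoned (F3.0).
  DO IF MISSING(Q1) AND MISSING(Q2) AND MISSING(Q3) AND MISSING(Q4).
    COMPUTE Abandoned = 0.
  ELSE IF MISSING(Q2) AND MISSING(Q3) AND MISSING(Q4).
    COMPUTE Abandoned = 1.
  ELSE IF MISSING(Q3) AND MISSING(Q4).
    COMPUTE Abandoned = 2.
  ELSE IF MISSING(Q4).
    COMPUTE Abandoned = 3.
  END IF.
  EXECUTE.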

Warning: This macro generates a lot of syntax.  The sample input has only four variables but it creates code with 105 lines (including blank lines and comments).  The surveys with which I was working had 130-160 variables and I worked with 11 different versions of the survey instrument.  In the end, I had an SPSS syntax file with tens of thousands of lines of code.  The SPSS syntax editor got very grumpy and slow, probably because of the large number of DO IF conditionals and the syntax highlighting it applies to those blocks of code.  I ended up working mostly in Notepad as I was troubleshooting the syntax and pasting the resulting text into the SPSS syntax editor only when I was ready to run it.  The good news is that the syntax is actually very straightforward and arithmetically simple so it ran fairly quickly.

I know that this fills a very, very small niche.  But maybe someone will find this helpful or useful.  I spent a few days working on this so there’s no reason why someone else should have to redo this work.

Warning 2: I used this macro again a few years later and noticed that it’s set up to only deal with numeric data. If you have any string data then you’ll need to modify it accordingly.

Thoughts on Backward Design

 This post will be less organized than most posts; some of these thoughts and ideas are still a little raw.

Backward design – the method by which one begins with the desired end result(s) of an educational program, determines acceptable evidence showing that the result(s) has been achieved, and then creates a plan to teach the skills and content that will lead students to provide that evidence – has been on my mind lately.  It’s one of the core concepts of a college teaching and learning course I co-teach but that’s not why I’ve been thinking about it.

For me, backward design is a “threshold concept;” it’s an idea that changed how I think about teaching and I can’t go back to how I thought prior to this change.  So although I learned and most often use and teach backward design in the context of designing or redesigning a single college course, I’ve been thinking about the role of backward design in different contexts.  For example:

  • I know that backward design has been and is used to develop curricula and not just individual courses.  Today was the first time I got to see firsthand how that plays out with a group of faculty to develop a full 4-year curriculum for this discipline.  I was most struck by how difficult it was to keep true to the backward design philosophy and not get mired down in content coverage and the limitations imposed by the current curriculum.  It was difficult even for me to remain on course as I tried to help facilitate one of the groups of faculty engaged in this process.  I underestimated the increased complexities involved in scaling up the process from a single course to an entire curriculum; it’s not a linear function.
  • There has been quite a bit of discussion lately among student affairs professionals regarding their conference presentations (e.g. this Inside Higher Ed blog post with 30 comments).  Put bluntly, many people are unsatisfied with the current state of these presentations.  Just as backward design can scale up from a class to a curriculum, it can also scale down to a single class session.  And shouldn’t a good 50-minute conference presentation resemble a good 50-minute class session?  So why not systematically apply backward design to conference presentations?  Many conferences seem to try to push presenters in that direction by requiring them to have learning outcomes for their sessions but that isn’t enough.
  • Unfortunately, pedagogy and good teaching practices are not formally taught and emphasized in most student affairs programs so I expect that most student affairs professionals have not been exposed to backward design as a formal process.  That’s a shame because it seems like such a good fit for what student affairs professionals do!  And it fits in so well with the ongoing assessment movement because it so firmly anchors design in measurable outcomes and evidence-based teaching!

Would any student affairs professionals out there want to learn more about backward design and try to apply it to some of your programs?  Please let me know because I’d love to help!  I’m positive this would work out well and I’d love to test these ideas!

When Did Student Affairs Begin Discussing Technology as a Competency?

At a presentation I attended at this year’s ACPA conference, the presenters discussed technology as a competency for student affairs professionals.  It’s a discussion that’s been going on for many years but I don’t know if many people – particularly younger professionals – know just how long it’s been going on.  The presenters of this particular session asserted that formal discussion of technology as a competency began in 2002.  Maybe they’re right but informally and on different levels this conversation has been ongoing for decades. To provide historical context for this discussion (and to substantiate some glib comments I made to those sitting next to me in the presentation), I skimmed through my historical documents to find the earliest occurrences of this discussion.

Although there is foreshadowing in the middle of the 20th century of calls for technology competency in student affairs professionals, the first explicit calls I found begin in the middle of the 1970s.  In “Dealing with the Computer,” Penn (1976) asserts that “If the modern student personnel administrator expects to provide leadership and to have an impact on his or her campus, it will be necessary to understand computers and to communicate with computer technicians” (p. 56).  He writes that “the functioning of computers is still a mysterious process to many individuals” (p. 56) before going on to define and briefly discuss topics such as “hardware” and “software.”  Similarly, Peterson’s 1975 NASPA Journal article “Implications of the New Management Technology” recommends that student affairs professionals not only “familiarize [themselves] with [their] institution’s data base, its automated technology, the major administrative analytic offices, and the major reports they generate” (p. 169) but also that they “develop [their] own capacity to assess, analyze, and/or use some of the more basic data sources at your disposal” (p. 169).

By the 1980s, technology as a competency was a clear concern for student affairs professionals in the U.S.  In the mid-80s, several student affairs departments were engaged or interested in increasing the computer literacy and comfort of their staff (e.g. Barrow & Karris, 1985; Bogal-Allbritten & Allbritten, 1985).  In a 1983 survey of 350 student affairs departments at 2-year colleges (with 141 respondents), the second most frequently expressed need among chief student affairs officers (CSAOs) was “information about basic computer functions, computer literacy, and how to write microprograms” (Floyd, 1985, p. 258).  In 1987, Whyte described the results of a similar survey of 750 colleges and universities (with 273 respondents):

Many student affairs professionals have expressed mixed emotions regarding computerization in the educational realm. There seems to be a need for direction regarding how to coordinate computerized management, instruction, and evaluation capabilities into a meaningful, comprehensive package to assist students….Coordination of the fragmented computerization efforts of most student affairs offices into a comprehensive plan is the next logical step. (p. 85)

In describing the “Three Rs” of recruitment, referral, and retention, Erwin and Miller (1985) wrote that “to meet the changing times and increased demands for excellence, student service professionals must look for new tools to assist in problem solving. Administrators will find management information systems particularly useful…” (p. 50).  Finally, MacLean (1986) explicitly calls for computer technology (then referred to as “management information systems”) to become “integral parts of all student affairs offices and departments” (p. 5).

Calls for student affairs professionals to develop and increase their knowledge of and comfort with computer technology are decades old.  Even a quick glance through my limited resources shows implicit and explicit calls beginning in the 1970s and blossoming in the 1980s as (micro-)computers became widely available and mainstream.  The discussion has changed tenor and intensity as technology has become more intertwined with our lives but the discussion itself is not new and dates back at least 35-40 years.

References

Barrow, B. R., & Karris, P. M. (1985). A hands-on workshop for reducing computer anxiety. Journal of College Student Personnel, 26(2), 167–168.

Bogal-Allbritten, R., & Allbritten, B. (1985). A computer literacy course for students and professionals in human services. Journal of College Student Personnel, 26(2), 170–171.

Erwin, T. D., & Miller, S. W. (1985). Technology and the three Rs. NASPA Journal, 22(4), 47–51.

Floyd, D. L. (1985). Use of computers by student affairs offices in small 2-year colleges. Journal of College Student Personnel, 26(3), 257–258.

MacLean, L. S. (1986). Developing MIS in student affairs. NASPA Journal, 23(3), 2–7.

Penn, J. R. (1976). Dealing with the computer. NASPA Journal, 14(2), 56–58.

Peterson, M. (1975). Implications of the new management technology. NASPA Journal, 12(3), 158–170.

Whyte, C. B. (1987). Coordination of computer use in student affairs offices: A national update. Journal of College Student Personnel, 28(1), 84–86.

“Best” Practices?

In a recent blog post releasing a (very nice!) “Best Practices in Using Twitter in the Classroom” infographic, Rey Junco writes:

I’d like to point out that I’m a real stickler about using the term “best practices.” It’s a concept we toss around a lot in higher education. To me, a “best practice” is only something that has been supported by research. Alas, most of the time that we talk about “best practices” in higher ed, we’re focusing on what someone thinks is a “good idea.”

I agree and I’m even more of a stickler. There have been several specific situations in which I have been asked or encouraged to write a set of best practices for different things, but I always got stuck asking myself: What makes this particular set of practices the “best”? I share Rey’s dislike of “good things I’ve done” being presented as best practices. But my (relatively minor) frustration extends a bit further because to me the adjective “best” implies a comparison between different practices, i.e., there is a (large) set of practices and this particular subset has been proven to be better than the rest.

I’d be perfectly happy if people were to stop telling us about best practices and just tell us about “good” practices until we have a large enough set of practices and data to judge which ones really are the best. If you’ve done good work, don’t distort or dishonor it by trying to make it bigger than it is. After all, even Chickering and Gamson (1987) presented their (now-classic and heavily-cited) ideas as “Seven Principles for Good Practice in Undergraduate Education” and not “Seven Best Practices in Undergraduate Education.”

Additional (older) #SAchat data: Participation, Geography, and Gender

In a comment on my previous post sharing some of my thoughts about #sachat in advance of their “State of #SAchat” discussion tomorrow, Gary Honickel asked about demographics of #sachat participants.  In our forthcoming chapter (I’m not trying to advertise it – honest! Just trying to explain why I have all of this information. I’m a researcher, not a stalker!), Laura Pasquini and I analyze #sachat and we include some information about the participants.  We didn’t include the specific information Gary asked about: gender and geographic location of participants.  But I did collect those data and although they’re from three sessions that occurred last year, maybe they’re still useful or helpful.  My sense is that these things haven’t changed much in the past year.

Keep in mind that these data come from three 2011 chat sessions:

Date | Topic | Participants | Messages | Average messages/participant | Standard deviation of messages/participant
March 10, 2011 | Beyond the Conference: Networking When You Aren’t Attending a National Conference | 70 | 442 | 6.3 | 6.5
June 2, 2011 | Intentional Recruiting to the Field: Responsibilities and Liabilities | 83 | 442 | 5.3 | 5.3
June 30, 2011 | Creative Orientation Approaches and Ideas | 45 | 323 | 7.2 | 10.2

The things that jump out at me in the table above are the average number of messages per participant and the standard deviation of that number.  There is immense variance in the number of messages posted by each participant and that makes me wonder about the pattern(s) of participation for each session.  The histogram below, showing how many people posted a particular number of messages in each chat, helps us understand these numbers (click on it to view a larger version).

This histogram is a classic “long tail” distribution, showing us that most participants in these three #sachat sessions posted very few messages and only a handful of participants posted many messages; the participant with the most messages is, of course, the moderator.  This is a very typical situation and an unsurprising finding.

This gives us a broad understanding of #sachat participation but let’s look a bit deeper and explore two different ways of classifying participants: gender and geography. First, a few words of caution: these data were inferred from the Twitter profiles and messages posted by these participants.  Geography was the easier datum to capture for each participant as most participants associated themselves with a particular college or university, either in their profile or in their introduction during one or more #sachat sessions.  Gender was much more difficult and I present these data with trepidation because there was a significant amount of guesswork involved in classifying participants as male or female.  If this were anything more than a one-off blog post or if gender were a central concern for this or any other analysis, I wouldn’t even share or use these data because inferring gender from name and photo obviously lacks rigor.

This chart shows the geographic locations of the participants in these three #sachat sessions (I used the U.S. Census geographic regions to aggregate the data).  Nothing surprising here.  #SAchat is indeed U.S.-dominated but even that isn’t a surprise.  Nothing particularly interesting is discovered if you look at the number of messages posted by participants from each region; the numbers get very small very quickly when slicing the data this many ways so it’s not worth trying to display.


What about gender?  For at least these three sessions, the gender breakdown seems to be about even.  Like geographic region, nothing terribly interesting happens if you slice these numbers in different ways.

So what do we make of all of this?  I think it shows that – for these three sessions – there was considerable diversity among #SAchat participants, at least in two ways we can measure. Of course, these are coarse (and in the case of gender, potentially problematic) measures and there are many other ways in which we might examine the makeup and diversity of this population.  Functional area and role (student, entry-level professional, faculty, etc.) are two measures that jump to mind as interesting and useful.  (Incidentally, I tried to classify participants using those two measures in a previous study; it was difficult, time-consuming, and very incomplete since those data are not spontaneously volunteered by all participants.)

Are #sachat participants diverse enough?  I don’t know.  How do we define “diverse enough?”  Should we be concerned about how well the #sachat population matches the larger student affairs population?  A quick glance shows some alignment between these populations but I have not done any definitive work in this area, partially because it’s very hard to obtain data about the larger student affairs population.

Of course, all of this does not and can not include anything about lurkers.  I agree that there is value in #sachat even for those who do not directly or visibly participate but we’d have to make a concerted effort to identify those people if we want to know anything about them.

I hope this is helpful or interesting!  I wish I had more up-to-date data but I don’t.  I’m job searching, working, and trying to finish a dissertation so I don’t have time or plans to gather additional data right now.  This is data that I had at hand and I am happy to share it in the hopes that it’s useful for someone.

Reflections on #sachat

Tomorrow, the members of the #sachat community will be engaging in introspection and discussing “The State of #SAchat” instead of their usual weekly discussion of a current student affairs topic.  I have been conducting research on the #sachat community for a couple of years now so I thought it might be helpful for the community if I could organize and share some of my thoughts.

I won’t spend time describing the basics of #sachat; if you are interested in this particular conversation, I assume that you are familiar with the community and its tools.  If I’m wrong and you are not familiar with #sachat, the official overview is here.  An annotated visualization of one chat session – a February 10, 2011 discussion about job searching – is below (my original blog post discussing this visualization has some of its background details).

The chart below shows Twitter message traffic from six hashtags – #highered, #sachat, #sadoc, #sagrad, #sajobs, and #studentaffairs – during the week of June 27, 2011.  This illustrates how #sachat differs: it not only has consistent traffic every day (although not as much as #highered) but also spikes during the scheduled chat session on Thursday afternoon.

In a book chapter Laura Pasquini and I have in press, we examine #sachat as a case study of informal learning using technology.  One of our conclusions is that #sachat is doing several things right to overcome the significant limitations of Twitter by:

  • Allowing participants to direct the discussions as much as practical.  For example, potential participants vote on each week’s topic and do not have to register to participate (in the voting or the actual discussion).
  • Using other tools to supplement the core use of Twitter.  Most of these tools reside on the SA Collaborative website.  One of the most important may be the chat archives that give the chats a sense of continuity and history beyond the typically ephemeral nature of Twitter.
  • Employing a well-prepared and clearly identifiable moderator in each discussion.  This account helps impose order on the Twitter chat, allowing conversation to run for a bit before drawing it back to the core topic by using clearly marked, pre-prepared questions.

We also identify several specific concerns and challenges:

  • Can the participants continue to overcome the inherent limitations of Twitter, especially its (a) short message length, (b) lack of threading, and (c) ephemerality?  Although some participants attempt to overcome the first limitation using multipart messages, this is not very successful; the 140-character limit of Twitter is one of its core features and unlikely to be overcome.  The second limitation has been addressed with some success through the use of MOD messages and Q# replies.  The third limitation has been partially overcome by regularly making transcripts of chats publicly available.
  • Is the small community of volunteers that run the chats – those who use the moderator account and the SA Collaborative website – sustainable?  These volunteers and the tools they provide and maintain are essential to the success of the community.  For how long will these volunteers sustain their energy and will there be a smooth transition as members come and go?
  • How representative of the larger student affairs community is the #sachat community?  Is that important?
  • How diverse are the members of the #sachat community?  In what ways are they diverse and in what important areas is diversity lacking?

Dissertation Journal: Chapter 2 Draft

It’s taken me about a year to reach this point but I finally completed and submitted to my chair a draft of chapter 2, the literature review.  It’s pretty solid but it’s still a draft.

  1. There are a handful of places that I know I could expand but I’ll wait until I hear back from my chair before doing that.
  2. I need to add a couple of new sources to the digital divide section but nothing significant and nothing worth delaying this draft any longer.
  3. I think that I need to add a brief section – to this chapter or another one – summarizing the assumptions of the study.  Several of them are spread throughout the lit review where I discuss and justify them but it seems that it would be more organized if I summarized them all in one place.

Why did this take so long?  I don’t know.  It’s certainly not because I struggle with this kind of writing.  It’s definitely not because I don’t know what I want or need to write.  I imagine that it has to be some kind of emotional block, some sort of fear of failure perhaps.  That is completely uncharacteristic of me but it’s the only thing that makes sense.

Why did I finally buckle down and get this draft completed?  The shame of not having done this yet became overwhelming and I simply had to finish this so I could look my chair in the eye.  I am also running out of time; I’m already behind where I wanted to be and am hitting the job market ABD.  I’m not very pleased to be on the market as an ABD but now my energy needs to be focused on making as much progress as possible before I land a job, because the more progress I make the better the odds that I’ll finish.