New NSSE Survey and Technology Questions

I’m super excited that my colleagues have finally made the new version of the National Survey of Student Engagement (NSSE) publicly available!  We’ve spent a lot of time working on this over the past 3-4 years, including focus groups, interviews, two pilot administrations, tons of literature review and data analysis, (seemingly) thousands of meetings, and many other events and undertakings.  I’ve been incredibly lucky to have been part of this process from nearly the beginning, and I’ve learned a lot about survey development and project management.  I’m leaving NSSE at the end of the month, so although I won’t be here when the new survey is administered next spring, I’m glad to have been here to see the final version.

I’m particularly excited that the technology module (optional set of questions) has made it through all of our testing and will be part of the new survey.  There are other cool modules but this one has been my baby for over two years.  My colleagues here at NSSE – Allison and Heather – and my colleagues at EDUCAUSE – Eden and Pam – have been wonderful collaborators and I hope that they have had half as much fun and fulfillment working on these questions as I did.  It’s poignant to have spent so much time on this project only to hand it off to others just as it sees the light of day, but I know it’s in good hands.  I am very hopeful that a significant number of institutions will choose to use this module and that we will continue to add to what we know about the role and impact of technology in U.S. and Canadian higher education.

Throughout all of this, I’ve remained especially thankful to have been so involved in the development of this new survey as a graduate student.  Although I work half as many hours as the full-time, doctorate-holding research analysts, they have been very open about allowing me to be involved and never shied away from adding me to projects and giving me significant responsibilities.  I was never treated as “just a grad student” or a junior colleague; I was simply a colleague who worked fewer hours and had a somewhat different set of duties.  Consequently, I had genuine responsibilities and made significant, meaningful contributions; I can honestly point to the survey and see my own fingerprints on some parts of it!  When I speak about meaningful educational experiences in the future, I’ll certainly think of this one as an excellent example.  And I will work to ensure that my students and colleagues can have similar experiences that allow them to learn, grow, and meaningfully contribute by performing important work with trust and support.

Media Spin and Attention Grabbing Headlines

The Washington Post published a story yesterday describing research showing that college students today study less than students in the past.  The story is largely based on a small bit of NSSE data we first published several months ago describing how self-reported time spent studying differs across majors.  At the moment, I’m less interested in the data and more interested in how it’s being reported and described.

First, I’m a bit amused that this is suddenly a hot topic given that the information was released 6 months ago.  In fact, it was covered very prominently in November by little-known websites like the New York Times, USA Today, and Chronicle of Higher Education.  I don’t know why the Post decided to write a story about this now (I suspect it has to do with an upcoming conference of higher education researchers, a conference heavily attended by my NSSE colleagues and one at which we frequently present new research).  But it’s amusing and informative that one story written by the Washington Post has set off a flood of blog posts and “news stories” about something that is old news.  Yes, I know that it’s still interesting and pertinent information but this seems to reinforce the sad fact that many blogs and “news sites” are very dependent on traditional media for content, even when that content has been available for months.

Second, I’m amused and saddened by the headlines that people are using to describe this research.  I know that many of the websites listed below are second- or third-rate and use headlines like these just to get attention (which drives up traffic and ad revenue – and which makes me a bit ashamed to be adding to their traffic and ad revenue!) but it still makes me sad.  Some examples:

  1. “Is college too easy? As study time falls, debate rises.”  This is the original Washington Post article and it has a fairly well-balanced headline.  It’s not over-the-top, and it even notes that the issue is not settled as people continue to debate it.
  2. “Is College Hard? Students Are Studying Less, Says Survey”  The Huffington Post’s headline isn’t too far from the one used by the Washington Post.  Although I loathe the Huffington Post and how the vast majority of its content is blatantly derivative and unoriginal, this is a decent little summary of the Washington Post article and an alright headline.
  3. “Laid-Back Higher Ed”  This is how The Innovation Files describes the Washington Post article and the underlying research.  Not horrible but not very good either.  At least it’s not as bad as…
  4. “Fun Time Is Replacing Study Time in College”  I don’t know anything about FlaglerLive.com but based on this ridiculous and inaccurate headline and blog post I won’t be spending any time there.  I’m particularly impressed by the figure they copied directly out of the NSSE 2011 Annual Results and now claim is “© FlaglerLive.”  Classy.


When Did Student Affairs Begin Discussing Technology as a Competency?

At a presentation I attended at this year’s ACPA conference, the presenters discussed technology as a competency for student affairs professionals.  It’s a discussion that’s been going on for many years but I don’t know if many people – particularly younger professionals – know just how long it’s been going on.  The presenters of this particular session asserted that formal discussion of technology as a competency began in 2002.  Maybe they’re right, but informally and at different levels this conversation has been ongoing for decades.  To provide historical context for this discussion (and to substantiate some glib comments I made to those sitting next to me in the presentation), I skimmed through my historical documents to find the earliest occurrences of this discussion.

Although there is foreshadowing in the middle of the 20th century of calls for technology competency in student affairs professionals, the first explicit calls I found begin in the middle of the 1970s.  In “Dealing with the Computer,” Penn (1976) asserts that “If the modern student personnel administrator expects to provide leadership and to have an impact on his or her campus, it will be necessary to understand computers and to communicate with computer technicians” (p. 56).  He goes on to write that “the functioning of computers is still a mysterious process to many individuals” (p. 56) before defining and briefly discussing topics such as “hardware” and “software.”  Similarly, Peterson’s 1975 NASPA Journal article “Implications of the New Management Technology” recommends that student affairs professionals not only “familiarize [themselves] with [their] institution’s data base, its automated technology, the major administrative analytic offices, and the major reports they generate” (p. 169) but also “develop [their] own capacity to assess, analyze, and/or use some of the more basic data sources at [their] disposal” (p. 169).

By the 1980s, technology as a competency was a clear concern for student affairs professionals in the U.S.  In the mid-1980s, several student affairs departments were engaged in or interested in increasing the computer literacy and comfort of their staff (e.g., Barrow & Karris, 1985; Bogal-Allbritten & Allbritten, 1985).  In a 1983 survey of 350 student affairs departments at 2-year colleges (with 141 respondents), the second most frequently expressed need of chief student affairs officers (CSAOs) was “information about basic computer functions, computer literacy, and how to write microprograms” (Floyd, 1985, p. 258).  In 1987, Whyte described the results of a similar survey of 750 colleges and universities (with 273 respondents):

Many student affairs professionals have expressed mixed emotions regarding computerization in the educational realm. There seems to be a need for direction regarding how to coordinate computerized management, instruction, and evaluation capabilities into a meaningful, comprehensive package to assist students….Coordination of the fragmented computerization efforts of most student affairs offices into a comprehensive plan is the next logical step. (p. 85)

In describing the “Three Rs” of recruitment, referral, and retention, Erwin and Miller (1985) wrote that “to meet the changing times and increased demands for excellence, student service professionals must look for new tools to assist in problem solving. Administrators will find management information systems particularly useful…” (p. 50).  Finally, MacLean (1986) explicitly called for computer technology (then referred to as “management information systems”) to become “integral parts of all student affairs offices and departments” (p. 5).

Calls for student affairs professionals to develop and increase their knowledge of and comfort with computer technology are decades old.  Even a quick glance through my limited resources shows implicit and explicit calls beginning in the 1970s and blossoming in the 1980s as (micro-)computers became widely available and mainstream.  The discussion has changed tenor and intensity as technology has become more intertwined with our lives but the discussion itself is not new and dates back at least 35-40 years.

References

Barrow, B. R., & Karris, P. M. (1985). A hands-on workshop for reducing computer anxiety. Journal of College Student Personnel, 26(2), 167–168.

Bogal-Allbritten, R., & Allbritten, B. (1985). A computer literacy course for students and professionals in human services. Journal of College Student Personnel, 26(2), 170–171.

Erwin, T. D., & Miller, S. W. (1985). Technology and the three Rs. NASPA Journal, 22(4), 47–51.

Floyd, D. L. (1985). Use of computers by student affairs offices in small 2-year colleges. Journal of College Student Personnel, 26(3), 257–258.

MacLean, L. S. (1986). Developing MIS in student affairs. NASPA Journal, 23(3), 2–7.

Penn, J. R. (1976). Dealing with the computer. NASPA Journal, 14(2), 56–58.

Peterson, M. (1975). Implications of the new management technology. NASPA Journal, 12(3), 158–170.

Whyte, C. B. (1987). Coordination of computer use in student affairs offices: A national update. Journal of College Student Personnel, 28(1), 84–86.

“Best” Practices?

In a recent blog post releasing a (very nice!) “Best Practices in Using Twitter in the Classroom” infographic, Rey Junco writes:

I’d like to point out that I’m a real stickler about using the term “best practices.” It’s a concept we toss around a lot in higher education. To me, a “best practice” is only something that has been supported by research. Alas, most of the time that we talk about “best practices” in higher ed, we’re focusing on what someone thinks is a “good idea.”

I agree and I’m even more of a stickler. There have been several specific situations in which I have been asked or encouraged to write a set of best practices for different things, but I always got stuck asking myself: What makes this particular set of practices the “best”? I share Rey’s dislike of “good things I’ve done” being presented as best practices. But my (relatively minor) frustration extends a bit further because, to me, the adjective “best” implies comparison between different practices, i.e., there is a (large) set of practices and this particular subset has been proven to be better than the rest.

I’d be perfectly happy if people were to stop telling us about best practices and just tell us about “good” practices until we have a large enough set of practices and data to judge which ones really are the best. If you’ve done good work, don’t distort or dishonor it by trying to make it bigger than it is. After all, even Chickering and Gamson (1987) presented their (now-classic and heavily-cited) ideas as “Seven Principles for Good Practice in Undergraduate Education” and not “Seven Best Practices in Undergraduate Education.”

Additional (older) #SAchat data: Participation, Geography, and Gender

In a comment on my previous post sharing some of my thoughts about #sachat in advance of their “State of #SAchat” discussion tomorrow, Gary Honickel asked about the demographics of #sachat participants.  In our forthcoming chapter (I’m not trying to advertise it – honest! Just trying to explain why I have all of this information. I’m a researcher, not a stalker!), Laura Pasquini and I analyze #sachat and we include some information about the participants.  We didn’t include the specific information Gary asked about: gender and geographic location of participants.  But I did collect those data, and although they come from three sessions that occurred last year, maybe they are still useful or helpful.  My sense is that these things haven’t changed much in the past year.

Keep in mind that these data come from three 2011 chat sessions:

| Date | Topic | Participants | Messages | Average messages/participant | Standard deviation of messages/participant |
|---|---|---|---|---|---|
| March 10, 2011 | Beyond the Conference: Networking When You Aren’t Attending a National Conference | 70 | 442 | 6.3 | 6.5 |
| June 2, 2011 | Intentional Recruiting to the Field: Responsibilities and Liabilities | 83 | 442 | 5.3 | 5.3 |
| June 30, 2011 | Creative Orientation Approaches and Ideas | 45 | 323 | 7.2 | 10.2 |

The things that jump out at me in the table above are the average number of messages per participant and the standard deviation of that number.  There is immense variance in the number of messages posted by each participant, and that makes me wonder about the pattern(s) of participation for each session.  The histogram below, showing how many people posted a particular number of messages in each chat, helps us understand these numbers (click on it to view a larger version).

This histogram is a classic “long tail” distribution, showing us that most participants in these three #sachat sessions posted very few messages and only a handful of participants posted many messages; the participant with the most messages is, of course, the moderator.  This is a very typical situation and an unsurprising finding.
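For the curious, here is a minimal sketch of the kind of tallying behind the table and histogram above, assuming each chat transcript has already been parsed into (author, text) pairs.  The sample messages and names are hypothetical, not actual #sachat data.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical parsed transcript: (author, text) pairs for one chat session.
messages = [
    ("moderator", "Q1: How do you network when you can't attend a conference?"),
    ("alice", "A1: Twitter chats like this one!"),
    ("alice", "Also regional drive-in conferences."),
    ("bob", "A1: Mostly through colleagues on my own campus."),
]

# Tally messages per participant.
per_participant = Counter(author for author, _ in messages)
counts = list(per_participant.values())

print(f"Participants: {len(per_participant)}")
print(f"Messages: {len(messages)}")
print(f"Mean messages/participant: {mean(counts):.1f}")
print(f"SD messages/participant: {stdev(counts):.1f}")

# Histogram data: how many participants posted exactly n messages.
for n, people in sorted(Counter(counts).items()):
    print(f"{n} message(s): {people} participant(s)")
```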

This gives us a broad understanding of #sachat participation but let’s look a bit deeper and explore two different ways of classifying participants: gender and geography. First, a few words of caution: these data were inferred from the Twitter profiles and messages posted by these participants.  Geography was the easier datum to capture for each participant as most participants associated themselves with a particular college or university, either in their profile or in their introduction during one or more #sachat sessions.  Gender was much more difficult and I present these data with trepidation because there was a significant amount of guesswork involved in classifying participants as male or female.  If this were anything more than a one-off blog post or if gender were a central concern for this or any other analysis, I wouldn’t even share or use these data because inferring gender from name and photo obviously lacks rigor.

This chart shows the geographic locations of the participants in these three #sachat sessions (I used the U.S. Census geographic regions to aggregate the data).  Nothing surprising here: #SAchat is indeed U.S.-dominated.  Nothing particularly interesting emerges if you look at the number of messages posted by participants from each region; the numbers get very small very quickly when slicing the data this many ways, so it’s not worth trying to display.
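In case it helps, this is roughly how that regional aggregation can be done once each participant has been tied to a state.  The state-to-region mapping shown here is deliberately partial and the participants are made up for illustration.

```python
from collections import Counter

# Partial, illustrative state-to-Census-region mapping; a real analysis
# would cover all 50 states plus DC.
CENSUS_REGION = {
    "IN": "Midwest", "IL": "Midwest", "OH": "Midwest",
    "NY": "Northeast", "PA": "Northeast", "MA": "Northeast",
    "TX": "South", "FL": "South", "GA": "South",
    "CA": "West", "WA": "West", "CO": "West",
}

# Hypothetical participant locations inferred from profiles and chat introductions.
participant_state = {"alice": "IN", "bob": "NY", "carol": "CA", "dave": "ON"}

def region(state: str) -> str:
    # Locations outside the U.S. mapping (e.g., Canadian provinces) are
    # grouped together rather than dropped.
    return CENSUS_REGION.get(state, "Non-U.S./Unknown")

print(Counter(region(s) for s in participant_state.values()))
```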


What about gender?  For at least these three sessions, the gender breakdown seems to be about even.  Like geographic region, nothing terribly interesting happens if you slice these numbers in different ways.

So what do we make of all of this?  I think it shows that – for these three sessions – there was considerable diversity among #SAchat participants, at least in two ways we can measure. Of course, these are coarse (and in the case of gender, potentially problematic) measures and there are many other ways in which we might examine the makeup and diversity of this population.  Functional area and role (student, entry-level professional, faculty, etc.) are two measures that jump to mind as interesting and useful.  (Incidentally, I tried to classify participants using those two measures in a previous study; it was difficult, time-consuming, and very incomplete since those data are not spontaneously volunteered by all participants.)

Are #sachat participants diverse enough?  I don’t know.  How do we define “diverse enough?”  Should we be concerned about how well the #sachat population matches the larger student affairs population?  A quick glance shows some alignment between these populations but I have not done any definitive work in this area, partially because it’s very hard to obtain data about the larger student affairs population.

Of course, all of this does not and cannot include anything about lurkers.  I agree that there is value in #sachat even for those who do not directly or visibly participate, but we’d have to make a concerted effort to identify those people if we want to know anything about them.

I hope this is helpful or interesting!  I wish I had more up-to-date data but I don’t.  I’m job searching, working, and trying to finish a dissertation, so I don’t have time or plans to gather additional data right now.  These are simply the data I had at hand, and I am happy to share them in the hope that they’re useful to someone.

Reflections on #sachat

Tomorrow, the members of the #sachat community will be engaging in introspection and discussing “The State of #SAchat” instead of their usual weekly discussion of current student affairs topics.  I have been conducting research on the #sachat community for a couple of years now, so I thought it might be helpful for the community if I organized and shared some of my thoughts.

I won’t spend time describing the basics of #sachat; if you are interested in this particular conversation, I assume that you are familiar with the community and its tools.  If I am wrong and you are not familiar with #sachat, the official overview is here.  An annotated visualization of one chat session – a February 10, 2011 discussion about job searching – is below (my original blog post discussing this visualization has some of its background details).

The chart below shows Twitter message traffic from six hashtags – #highered, #sachat, #sadoc, #sagrad, #sajobs, and #studentaffairs – during the week of June 27, 2011.  It illustrates how #sachat differs: it not only has consistent traffic every day (although not as much as #highered) but also spikes during the scheduled chat session on Thursday afternoon.

In a book chapter Laura Pasquini and I have in press, we examine #sachat as a case study of informal learning using technology.  One of our conclusions is that #sachat is doing several things right to overcome the significant limitations of Twitter by:

  • Allowing participants to direct the discussions as much as practical.  For example, potential participants vote on each week’s topic and do not have to register to participate (in the voting or the actual discussion).
  • Using other tools to supplement the core use of Twitter.  Most of these tools reside on the SA Collaborative website.  One of the most important may be the chat archives that give the chats a sense of continuity and history beyond the typically ephemeral nature of Twitter.
  • Employing a well-prepared and clearly identifiable moderator in each discussion.  This account helps impose order on the Twitter chat, allowing conversation to run for a bit before drawing it back to the core topic by using clearly marked, pre-prepared questions.

We also identify several specific concerns and challenges:

  • Can the participants continue to overcome the inherent limitations of Twitter, especially its (a) short message length, (b) lack of threading, and (c) ephemerality?  Although some participants attempt to overcome the first limitation using multipart messages, this is not very successful; the 140-character limit is one of Twitter’s core features and is unlikely to be overcome.  The second limitation has been addressed with some success through the use of MOD messages and Q# replies.  The third limitation has been partially overcome by regularly making transcripts of the chats publicly available.
  • Is the small community of volunteers that run the chats – those who use the moderator account and the SA Collaborative website – sustainable?  These volunteers and the tools they provide and maintain are essential to the success of the community.  For how long will these volunteers sustain their energy and will there be a smooth transition as members come and go?
  • How representative of the larger student affairs community is the #sachat community?  Is that important?
  • How diverse are the members of the #sachat community?  In what ways are they diverse and in what important areas is diversity lacking?

Look Ma – I’m a Thought Leader!

Researchers at Elon University and the Pew Internet & American Life Project have released a report describing some opinions about the future of the Internet and its impact on society, particularly the younger generations.  One of the themes is that educational practices must change to address information literacy.  Although I agree with some of its content I don’t know if this report will have any impact on practice or policy; I am becoming a bit jaded and burnt out by academics discussing how education should be reformed when so little seems to actually happen (which doesn’t stop us from writing books, publishing articles, and giving talks – we must feed our egos and sustain the systems that have grown rich on this fare so they in turn will continue to sustain us!).

I am a little bit amused by The Atlantic’s description of the sampling strategy of the survey that underlies this report: “[The report is] based on surveys with more than 1,000 thought leaders.”  I was asked to participate in this survey.  That means that I must be a thought leader!  I don’t know what that means but it sounds suspiciously like “someone who doesn’t actually do anything but is really good at it!”  Now if you’ll excuse me I need to figure out where “Thought Leader” goes on my CV…

Thumbs Down for CBS News NSSE Article

There are many different angles one could take in reporting on the 2011 NSSE Annual Results; it’s a dense 50-page report. I know that every group has its own agenda and every reporter has his or her own personal interests but it’s very disappointing that CBS News chose the snide headline “Business majors: College’s worst slackers?” for their article. In an ordered list, something must be last. In this case, some major must rank last in the number of hours students typically study each week. But to label that group of students “slackers” simply because they fall at the bottom of the list is unnecessarily mean and unprofessional.

Fun Time of Year: NSSE Annual Results Released

The 2011 NSSE Annual Results were released today. I don’t want to focus on the content of the report in this blog post. Instead, I am briefly noting how fun it is to work on a project with a large impact that regularly receives attention from the press (even if some of the attention is sometimes negative, a very interesting experience itself). It’s gotten more fun each year as I’ve become more involved in much of what we do; this year I directly contributed by writing part of the report itself. Yes, it’s ego-boosting to see my work in print but more importantly it helps address a very serious and difficult problem that vexes many researchers and administrators in higher education: It’s hard to explain to others, especially our parents and extended families, what we do. Instead of trying to convince them that I really have graduated (several times!) and am not wasting my whole life in college, I can send them the report and articles from the New York Times and USA Today and say, “Look – this is what I do!”

Now I get to watch media reports and subsequent discussions to see how they play out and what they will emphasize. This process is unpredictable and it has surprised me in previous years when relatively small bits of information have caught on to the exclusion of other interesting and important information. As The Chronicle of Higher Education notes, this year may be a bit different given recent events but who knows how things will play out.

Part-time Students Are Not (Yet) The Majority

Right after I posted a screed about how some recent research about Twitter’s relationship with students’ grades has been misunderstood, along came another study that is being mischaracterized. I’m not looking for these things; I don’t want to be some kind of education research watchdog or bully. But this is important and I must speak up.

The demographics of U.S. college students are changing and too many of us are not changing our practices to match. Recently, Complete College America released a report focusing on these changing demographics with a specific focus on part-time students and the continued growth of non-traditional students. It focuses on some very important and often overlooked topics and it should elicit discussion and promote action.

Frustratingly, several of the media reports are misreporting what is in this study, particularly in their headlines and summaries. The study very explicitly says that “4 of every 10 public college students are able to attend only part-time” on its second page. So why are some reporters and commentators summarizing the report with headlines proclaiming that part-time students are the new majority? I can understand a relatively small shop making this mistake, particularly if it’s in a rush to get the word out about this important study and is happy to make corrections. But why is the Washington Post getting it wrong and letting the error persist for days? And why are higher education professionals passing along this report with incorrect information, blindly repeating headlines and summaries that get it wrong?

(Not everyone is getting this wrong. For example, The Atlantic gets it just right.)

This is so frustrating to me because the topics discussed in this report are so important. Non-traditional students do make up the majority of students. The federal government does a poor job collecting information about these students by often focusing exclusively on first-time, first-year students (which, coincidentally, was an issue I wrote about in my qualifying exam). Too many of us have tunnel vision and focus only on the students on our campus or – more accurately – the students we think are on our campus. In the context of student affairs, I worry particularly about the next generation of professionals and whether these demographic changes are being addressed in their coursework. My impression is that they are not; I hope I am wrong!