Assessment in IT

A few weeks ago, I attended the 2010 ResNet Symposium in Bellingham, Washington, where I was invited to present a preconference session on assessment.  I presented two identical sessions, one in the morning and one in the afternoon.  In this post I’ll reflect on what we discussed in these sessions and my perceptions of assessment in IT in American colleges and universities.

ResNet preconference session

I was invited to present these sessions by one of the conference organizers who has a strong student affairs background.  As a profession, student affairs has tried to embrace outcomes assessment, so this person is familiar with the issues.  We both share a perception that IT professionals and organizations in American higher education have not yet begun to understand and perform outcomes assessment, so an introductory session at the ResNet Symposium would be beneficial for attendees.  I didn’t know how well it would be received, but I was pleased with the turnout: 15-16 attendees across the two sessions, a good representation of the 101 attendees at this small conference.

At the beginning of the session, I asked the attendees to write on the whiteboard the words they associate with “assessment.”  I wanted to gather a bit of information about the attendees and their preconceived notions and I also wanted them to begin thinking about the topic.  The words they wrote most often were analysis/analyze, data, measure(ment), and evaluation.  Not a bad start.

In the first half of the session, we talked about assessment in broad, general terms.  I began by trying to provide some context for the importance of assessment, concentrating particularly on the political context and how academic and student affairs have reacted.  Next, I tried my best to introduce topics that I believe are important to understand, or at least know exist, such as direct vs. indirect assessment and formative vs. summative assessment.  I also tried to get attendees thinking about issues and collaborating with one another by having them brainstorm in small groups to generate a list of sources of data already available on their campuses.

In the second half of the session I focused on surveys and survey development.  Not only are surveys (unfortunately) one of the most common ways of gathering data, they are also a topic in which I have some expertise.  After discussing some survey methodology concepts, primarily sources of error as identified by Dillman in many of his publications, we looked at a survey instrument I recently put into the field.  More specifically, we looked at different iterations of the survey and discussed how and why the survey changed throughout the development process.  I closed with a brief list of survey tips.

I think the session was successful in introducing some of the important concepts in assessment.  It was hard to figure out what to concentrate on during this brief session (the Assessment Framework developed by NASPA’s Assessment, Evaluation, and Research Knowledge Community was very helpful!) and I’m still not sure that I struck the right balance between introducing important ideas, engaging the participants, and meeting their expectations.  It would have been easier, I think, if I had titled the session “Outcomes Assessment” and used that phrase throughout; that would have provided some needed focus and better described the topic I intended to introduce.

Outcomes assessment in IT

As mentioned above, this preconference session was developed because of a shared concern about the lack of outcomes assessment in higher education IT.  We’re doing a very poor job not only of establishing how we contribute to the bottom line of our institutions (and the bottom line, of course, is the production and dissemination of knowledge) but also of determining whether we’re actually succeeding in that contribution.  I believe this is fundamentally important in justifying the resources expended on in-house IT operations.  You should know why you’re doing what you’re doing, and you should know if you’re succeeding.

Student affairs professionals realized this a decade or two ago and began emphasizing assessment both in practice and in their graduate programs.  I think that was a very smart move in that it tries to move student affairs from the periphery of the academic enterprise to a place much closer to the center, making student affairs more visible and important in many ways.  Much of IT is in the same boat student affairs was in a few decades ago: there is an implicit belief that its services are necessary, but it’s hard to explain exactly why they’re necessary and why they should be supplied by the institution itself.  Simply arguing that the services are “important” or even that they’re in demand doesn’t give us a license to incorporate them into our colleges and universities.  Many services are important and desirable, but we’re content to contract them, outsource them, or just rely on the outside world to provide them.

We have to prove that what we do significantly contributes to the mission of our institutions and that we do it better – more effectively, more efficiently, cheaper, etc. – than anyone else.  I know that it’s hard to do that; the rest of the campus has been trying to do it for some time and they’re still struggling!  But IT has to get on board and move beyond mere measures of satisfaction and internal metrics that are uncoupled from the mission of the institution.  It’s not even about self-preservation (although that should be a motive!).  It’s about knowing what you’re doing, why, and whether you’re getting it done.