I’m Not A Programmer But Programming Skills Are Still Extremely Useful

I don't work in IT, software development, or anything even remotely related to those fields, so I'm often surprised at how much programming I do in my daily work life.  At times I write scripts or light programs (e.g., this set of Excel macros), usually to save time and ensure accurate, well-documented, and reproducible results.  More often, I directly use some of the skills of programming, especially flow control and abstraction, to make tasks easier, more elegant, or even possible.

When I first entered college in 1996, I began as a computer science major.  After a few years I changed my major because I was dissatisfied with the amount and kind of programming I was doing.  I've never looked back, and I've never pined for my missed life of code slinging. I have some sympathy for movements that purport to teach programming to anyone who is interested, but I don't believe that programming is an essential skill for every person in the 21st century any more than metal fabrication was an essential skill for every person in the 20th century.

A few concrete examples may be helpful.  First, I spend quite a bit of my time analyzing quantitative data (e.g., student grades and assessment data, student retention data, and survey responses).  I usually do that analysis (and much of the pre-analysis work such as data cleanup, creation of new variables, and aggregation and matching of different data sets) using SPSS, a statistical analysis program commonly used in the social sciences. Although SPSS can be operated almost entirely using point-and-click menus, my real work is done using the program's programming language (called "syntax," although it's really just a scripting language).  This makes my processes (a) self-documenting and (b) replicable.  In other words, by using and saving SPSS syntax in organized ways, I always know exactly what I did and I can easily make changes or corrections.
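To give a flavor of what that looks like, here is a minimal sketch of the kind of syntax file I mean; the file and variable names are made up for illustration rather than taken from my actual work.  It merges a grades file onto a set of survey responses, computes a new variable, and leaves behind a record of every step:

    * Hypothetical example: attach grades to survey responses and flag low GPAs.
    * Assumes both files contain a student_id key and the grades file is sorted by it.
    GET FILE='survey_responses.sav'.
    SORT CASES BY student_id.
    MATCH FILES /FILE=* /TABLE='grades.sav' /BY student_id.
    COMPUTE low_gpa = (gpa < 2.0).
    VARIABLE LABELS low_gpa 'GPA below 2.0'.
    FREQUENCIES VARIABLES=low_gpa.
    SAVE OUTFILE='survey_with_grades.sav'.

Re-running a saved file like that months later reproduces the same result, and the file itself documents every transformation along the way.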

Second, I seem to use programming logic quite often when working with larger surveys. I've become quite good at using some of the more advanced features of Qualtrics, the online survey tool for which we have a site license. I declare variables and pass them around between surveys and reports using Qualtrics's "embedded data" feature.  I also use features of the tool that allow me to divide a given survey into sections and selectively display only those sections that are relevant to a particular respondent.  Combining these features is allowing us to move a key assessment process for one of our academic departments from a cumbersome, entirely manual series of Excel spreadsheets and Word documents to a Web-based process that still requires some manual data entry but has built-in checks for data quality and largely automated reporting tools.

I'm not advocating that everyone must learn how to program.  I am advocating that those who regularly work with quantitative data – assessment folks, researchers, evaluators, analysts – learn some basic programming skills, including flow control, abstraction, and the use of variables.  I spent part of my life actively trying to avoid programming, and I've moved completely out of IT, but programming skills have proven to be extremely valuable and sometimes essential.

New NSSE Survey and Technology Questions

I’m super excited that my colleagues have finally made the new version of the National Survey of Student Engagement (NSSE) publicly available!  We’ve spent a lot of time working on this over the past 3-4 years, including focus groups, interviews, two pilot administrations, tons of literature review and data analysis, (seemingly) thousands of meetings, and many other events and undertakings.  I’ve been incredibly lucky to have been part of this process from nearly the beginning, as I’ve learned a lot about survey development and project management.  I’m leaving NSSE at the end of the month, so although I won’t be here when the new survey is administered next spring, I’m happy that I’m still here to see the final version.

I’m particularly excited that the technology module (an optional set of questions) has made it through all of our testing and will be part of the new survey.  There are other cool modules, but this one has been my baby for over two years.  My colleagues here at NSSE – Allison and Heather – and my colleagues at EDUCAUSE – Eden and Pam – have been wonderful collaborators, and I hope that they have had half as much fun and fulfillment working on these questions as I did.  It’s poignant to have spent so much time on this project only to hand it off to others just as it sees the light of day, but I know it’s in good hands.  I am very hopeful that a significant number of institutions will choose to use this module and that we will continue to expand what we know about the role and impact of technology in U.S. and Canadian higher education.

Throughout all of this, I’ve remained especially thankful to have been so involved in the development of this new survey as a graduate student. Although I work half as many hours as the full-time, doctorate-holding research analysts, they have been very open about involving me and have never shied away from adding me to projects and giving me significant responsibilities.  I was never treated as “just a grad student” or a junior colleague; I was simply a colleague who worked fewer hours and had some different responsibilities.  Consequently, I had genuine responsibilities and made significant, meaningful contributions; I can honestly point to the survey and see my own fingerprints on some parts of it!  When I speak about meaningful educational experiences in the future, I’ll certainly think of this one as an excellent example.  And I will work to ensure that my students and colleagues can have similar experiences that allow them to learn, grow, and meaningfully contribute by performing important work with trust and support.

Thoughts on Backward Design

This post will be less organized than most; some of these thoughts and ideas are still a little raw.

Backward design – the method by which one begins with the desired end result(s) of an educational program, determines acceptable evidence showing that the result(s) has been achieved, and then creates a plan to teach the skills and content that will lead students to provide that evidence – has been on my mind lately.  It’s one of the core concepts of a college teaching and learning course I co-teach but that’s not why I’ve been thinking about it.

For me, backward design is a “threshold concept”: an idea that changed how I think about teaching, and I can’t go back to how I thought prior to this change.  So although I learned backward design – and most often use and teach it – in the context of designing or redesigning a single college course, I’ve been thinking about its role in different contexts.  For example:

  • I know that backward design has been and is used to develop curricula, not just individual courses.  Today was the first time I got to see firsthand how that plays out, working with a group of faculty to develop a full 4-year curriculum for their discipline.  I was most struck by how difficult it was to stay true to the backward design philosophy and not get mired in content coverage and the limitations imposed by the current curriculum.  It was difficult even for me to remain on course as I tried to help facilitate one of the groups of faculty engaged in this process.  I underestimated the added complexity involved in scaling up the process from a single course to an entire curriculum; it’s not a linear function.
  • There has been quite a bit of discussion lately among student affairs professionals regarding their conference presentations (e.g., this Inside Higher Ed blog post with 30 comments).  Put bluntly, many people are unsatisfied with the current state of these presentations.  Just as backward design can scale up from a class to a curriculum, it can also scale down to a single class session.  And shouldn’t a good 50-minute conference presentation resemble a good 50-minute class session?  So why not systematically apply backward design to conference presentations?  Many conferences seem to try to push presenters in that direction by requiring them to specify learning outcomes for their sessions, but that isn’t enough.
  • Unfortunately, pedagogy and good teaching practices are not formally taught or emphasized in most student affairs programs, so I expect that most student affairs professionals have not been exposed to backward design as a formal process.  That’s a shame, because it seems like such a good fit for what student affairs professionals do!  And it fits so well with the ongoing assessment movement, because it so firmly anchors design in measurable outcomes and evidence-based teaching!

Would any student affairs professionals out there want to learn more about backward design and try to apply it to some of your programs?  Please let me know because I’d love to help!  I’m positive this would work out well and I’d love to test these ideas!

Quick Thought: Appropriate Metrics of Success With Social Media

(I don’t have time right now to fully develop these thoughts but I want to get them out there while they’re on my mind. I’m sure others have thought about this much more extensively and I would appreciate pointers to them!)

I don’t know if many of us yet know what measures of success to employ in our use of social media. It seems that many people still believe that coarse numbers – fans, followers, etc. – are good indicators of success. But it seems to me that those are preliminary or perhaps necessary-but-not-sufficient indicators. Of course, that assumes that people are actually trying to do things with social media, things other than simply collect an audience of questionable value (you don’t really believe that everyone who indicates an interest in your page, account, etc. is really interested and interested to the same degree as everyone else, do you?).

Although I don’t know what measures we should employ, I strongly suspect they have as much to do with social media as our objectives do (i.e., very little). Most people aren’t using social media for its own sake but to accomplish some goals or effect change. So we know we’re successful if we’ve accomplished those goals or created change, right? I don’t think we can determine that by counting fans, followers, or visitors.

Assessment in IT

A few weeks ago, I attended the 2010 ResNet Symposium in Bellingham, Washington, where I was invited to present a preconference session on assessment.  I presented two identical sessions, one in the morning and one in the afternoon.  In this post I’ll reflect on what we discussed in these sessions and my perceptions of assessment in IT in American colleges and universities.

ResNet preconference session

I was invited to present these sessions by one of the conference organizers, who has a strong student affairs background.  As a profession, student affairs has tried to embrace outcomes assessment, so this person is familiar with the issues.  We both share a perception that IT professionals and organizations in American higher education have not yet begun to understand and perform outcomes assessment, so an introductory session at the ResNet Symposium would be beneficial for attendees.  I didn’t know how well the sessions would be received, but I was pleased with the turnout: 15-16 attendees came to the two sessions, a good share of the 101 attendees at this small conference.

At the beginning of the session, I asked the attendees to write on the whiteboard the words they associate with “assessment.”  I wanted to gather a bit of information about the attendees and their preconceived notions and I also wanted them to begin thinking about the topic.  The words they wrote most often were analysis/analyze, data, measure(ment), and evaluation.  Not a bad start.

In the first half of the session, we talked about assessment in broad, general terms.  I began by trying to provide some context for the importance of assessment, concentrating particularly on the political context and how academic and student affairs have reacted.  Next, I tried my best to introduce topics that I believe are important to understand, or at least know exist, such as direct vs. indirect assessment and formative vs. summative assessment.  I also tried to get attendees thinking about issues and collaborating with one another by having them brainstorm in small groups to generate a list of sources of data already available on their campuses.

In the second half of the session I focused on surveys and survey development.  Not only are surveys (unfortunately) one of the most common ways of gathering data, they are also a topic in which I have some expertise.  After discussing some survey methodology concepts, primarily sources of error as identified by Dillman in many of his publications, we looked at a survey instrument I recently put into the field.  More specifically, we looked at different iterations of the survey and discussed how and why the survey changed throughout the development process.  I closed with a brief list of survey tips.

I think the session was successful in introducing some of the important concepts in assessment.  It was hard to figure out what to concentrate on during this brief session (the Assessment Framework developed by NASPA’s Assessment, Evaluation, and Research Knowledge Community was very helpful!), and I’m still not sure that I struck the right balance between introducing important ideas, engaging the participants, and meeting their expectations.  It would have been easier, I think, if I had titled the session “Outcomes Assessment” and used that phrase throughout; that would have provided some needed focus and better described the topic I intended to introduce.

Outcomes assessment in IT

As mentioned above, this preconference session was developed because of a shared concern about the lack of outcomes assessment in higher education IT.  We’re doing a very poor job not only of establishing how we contribute to the bottom line of our institutions (and the bottom line, of course, is the production and dissemination of knowledge) but also of determining whether we’re actually succeeding in meeting those objectives.  I believe this is fundamentally important in justifying the resources expended on in-house IT operations.  You should know why you’re doing what you’re doing, and you should know if you’re succeeding.

Student affairs professionals realized this a decade or two ago and began emphasizing assessment both in practice and in their graduate programs.  I think that was a very smart move in that it tries to move student affairs from the periphery of the academic enterprise to a place much closer to the center, making student affairs more visible and important in many ways.  Much of IT is in the same boat that student affairs was in a few decades ago: there is an implicit belief that its services are necessary, but it’s hard to explain exactly why they’re necessary and why they should be supplied by the institution itself.  Simply arguing that the services are "important" or even that they’re in demand doesn’t give us a license to incorporate them into our colleges and universities.  Many services are important and desirable, but we’re content to contract them out, outsource them, or just rely on the outside world to provide them.

We have to prove that what we do significantly contributes to the mission of our institutions and that we do it better – more effectively, more efficiently, more cheaply, etc. – than anyone else.  I know that it’s hard to do that; the rest of the campus has been trying to do it for some time and they’re still struggling!  But IT has to get on board and move beyond mere measures of satisfaction and internal metrics that are uncoupled from the mission of the institution.  It’s not even about self-preservation (although that should be a motive!).  It’s about knowing what you’re doing, why you’re doing it, and whether you’re getting it done.