The Faculty Development Metagame

One of the things I most enjoy about faculty development – consulting with faculty about their teaching and professional development – is what I have begun to think of as the “faculty development metagame.”  It’s not really a game but it’s so enjoyable that I’m reluctant to give up the word for one that is, strictly speaking, more accurate.

The metagame goes like this: In my interactions with faculty, particularly more formal and planned ones such as workshops, I usually have to find ways to explicitly model and use whatever I am discussing or presenting to form the basis of the interaction itself.  In the broadest sense, that means that I can’t continue to champion “active learning,” “student engagement,” and other ideas that are built on the idea that people learn best when they practice and receive feedback if I don’t actively model and practice those ideas myself.  In other words, I can’t tell faculty that lectures are usually a terrible way to teach students by lecturing those faculty.

In a more specific sense, this means finding ways to model and use the specific techniques or tools under discussion as the framework for the interaction itself.  The topic of discussion is used to create the discussion, e.g., a discussion about collaborative tools takes place using tools such as Google Docs and Twitter, and a workshop about flipping the classroom requires participants to have watched videos and done other preliminary work beforehand (and of course some will not have done so, so they – like students in their own classrooms – will have to figure out what to do about group members who aren’t prepared!).  Of course, that means the interaction will contain very little lecturing on my part and quite a bit of activity, both mine and that of the faculty with whom I am working.

I believe this metagame is necessary for at least three reasons:

  • Authenticity: My credibility is injured when I put forth ideas about teaching and learning that I do not myself practice or believe.
  • Efficiency: I save time when I can introduce and demonstrate something simultaneously.  If I can get others involved at the same time, that’s even more time saved!
  • Effectiveness: I genuinely believe in the ideas, techniques, and tools that I try to pass along to my faculty.  Just as I believe that they will be more effective teaching their students if they use them, I believe that I am more effective teaching faculty if I use them, too.

Just as importantly, this metagame also makes faculty development very challenging and very fun!  It’s often difficult to figure out ways to employ the techniques and tools being presented in a particular consultation as the consultation itself, especially in ways that are genuine and not facile. Figuring out how to meet that challenge makes my job much more interesting and fun.

Often it’s obvious when I’m modeling something and using it as the basis of a consultation or workshop.  But every once in a while I get to have a fun little moment at the end of the interaction where I tell my colleagues: “And that cool thing we’ve been talking about for the past hour? We’ve been doing it!” That makes it even more fun.

A brief example may be helpful: This fall I’ll be teaching a pedagogy class for graduate students and one specific technique they’ll learn is the “concept lesson.”  Two of the key elements of a concept lesson are a solid metaphor – not an example – for the idea on which you’re focusing and an actual example.  I will, of course, use a concept lesson to teach about concept lessons, so I will need an appropriate metaphor and example.  I will teach a concept lesson about concept lessons; that is the metagame.

It’s not always possible to play the metagame.  There are some tools or techniques that simply take too much time to play out or are so situation- or discipline-specific that they can’t be realistically employed in an artificial setting.  That can make them a tough sell, and that’s when it’s incredibly helpful to have (a) others who have experience with it to provide examples and testimony, (b) video, or (c) other artifacts that make the idea real and concrete instead of just an abstract discussion.

 

Self-regulated Learning and Age in a Hybrid Course

Earlier this spring, I worked with a wonderful faculty member to conduct research into a new hybrid version of an introductory Spanish course at our university.  He changed some sections of a 4-credit course that typically meets four days each week so that they met only two days each week, with a substantial increase in online activity.  I presented a paper on this research at the recent AIR conference with two basic questions: (a) Did students learn more or less in these hybrid sections? and (b) Did students who were more motivated or exhibited better study skills – measured using the Motivated Strategies for Learning Questionnaire (MSLQ) – learn more?

The full details are in the paper but it appears that the answers to our questions are:

  1. Students didn’t learn any more or less in the hybrid sections.  This is consistent with the larger body of research that has found “no significant difference” between courses taught using different media.  In fact, this is good news in some ways since we can implement more hybrid sections and courses with some confidence that student learning won’t be negatively impacted.  This is particularly beneficial for us as these small four-credit courses require a lot of classroom space.
  2. The impact of self-regulated learning is unclear.  MSLQ scores were only partially related to two of the three outcome measures included in this study.  This is contrary to our expectations as it seems reasonable that students who are motivated and use better study skills would learn more.

To me, the most interesting part of this study is the role of age in predicting student learning. We created several multiple regression models and age was a negative predictor of student grades but a positive predictor of improved proficiency in reading Spanish. In other words, after we accounted for things such as race/ethnicity and gender, older students tended to earn lower grades but they also seemed to learn more about reading Spanish (but not about listening to Spanish).  So older students have learned how to study more effectively and are more motivated to learn, right?  No, at least not according to the MSLQ: age was not significantly correlated with MSLQ scores.
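
For readers who think in code, the models were along these lines; the sketch below is purely illustrative (simulated data and made-up variable names, not our actual dataset).

```r
# Illustrative sketch only: simulated data standing in for the real course records.
set.seed(1)
n <- 200
students <- data.frame(
  age            = sample(18:45, n, replace = TRUE),
  gender         = factor(sample(c("woman", "man"), n, replace = TRUE)),
  race_ethnicity = factor(sample(c("white", "non-white"), n, replace = TRUE)),
  final_grade    = rnorm(n, mean = 85, sd = 8),
  reading_gain   = rnorm(n, mean = 5, sd = 3)
)

# Course grade as the outcome: is age a predictor after accounting for
# gender and race/ethnicity? (In our results the age coefficient was negative.)
summary(lm(final_grade ~ age + gender + race_ethnicity, data = students))

# Reading proficiency gain as the outcome, same predictors.
# (In our results the age coefficient was positive.)
summary(lm(reading_gain ~ age + gender + race_ethnicity, data = students))
```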

In addition to the quantitative measures used in this study, we also interviewed several students.  At the same time, we also repeatedly interviewed students in some math courses that were being modified – “flipped” – during the same semester.  We were consistently impressed with the older students in our interview sessions and very much enjoyed their maturity and self-reflection.  That suggests an interesting hypothesis: Were the older students in this study simply less concerned with grades and more concerned about learning?

Student Engagement Infographic

[Infographic: University of Delaware senior student participation in selected high-impact practices from NSSE 2011. The image links to the full-size graphic.]

Last week, my colleagues and I presented the final UD First Friday Roundtable on Teaching of this semester.  We focused on “student engagement,” specifically naming the session “What Does an Engaged UD Student Look Like?”  It was a good session with lots of great discussion but right now I want to narrowly and briefly focus on two graphics that we whipped up for our supporting materials.

The first image in which you might be interested is a simple infographic showing University of Delaware student participation in some high-impact practices as extrapolated from NSSE 2011 responses.  The image to the right shows one part of the entire image to give you an idea of what it looks like. This extract from the full-size image links to a larger version of the full infographic; some of the text is too small to read even in that large image so you can also download the full-size PDF. It worked out quite well as a full-size poster and I also modified it to work as a handout for attendees.  It’s not bad for the amount of time I had to put into it although I would have liked to have done a lot more and a lot better.

For each of the six selected high-impact practices, I included not only the overall percentage of senior students who reported participating in them but also the subgroups for which there were significant (p ≤ .05) differences.  I looked at differences between students of different genders, white and non-white students, students in STEM and non-STEM disciplines, and first-generation and non-first-generation students.  If I had had more time, I would have loved to create another set of graphics illustrating the impact of these practices – or of some broader measure of student engagement – on self-reported GPA and gains, especially if our data showed what the national data tend to show: that these activities, and engagement overall, sometimes have more impact on some kinds of students than on others.
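
If you’re curious what those subgroup comparisons look like under the hood, they amount to something like the two-sample test of proportions sketched below; the counts are invented for illustration and are not the actual NSSE numbers.

```r
# Invented counts, illustrative only: did first-generation and non-first-generation
# seniors participate in one high-impact practice at significantly different rates?
participated <- c(first_gen = 45, not_first_gen = 160)   # reported participating
respondents  <- c(first_gen = 120, not_first_gen = 300)  # total respondents

prop.test(participated, respondents)  # include the subgroup in the graphic if p <= .05
```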

[Image: Engagement Roundtable word cloud]  The second image is a simple word cloud we used on some of our materials such as the agenda and signs.  I know that word clouds are passé (or maybe I’m the only one who thinks so) but this was a really simple and quick image for us to create.  Just as important, it was closely tied to the topic of the event: the input for the word cloud, generated using Tagxedo, was the text of one of the primary resources we used to develop and think about the event – Kuh’s 2008 AAC&U high-impact practices publication.*

 

* Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. Washington, DC: AAC&U.

Are High Impact Practices Available Online?

I am still wrestling with my unease with MOOCs and I think I’ve finally figured out why: High impact educational practices, as we understand them today, are unlikely at best and impossible at worst in MOOCs and other similar online environments.

First, it’s helpful to understand that “high impact practice” (HIP) is a term of art.  Although the phrase itself sounds ordinary, in the past ten years or so the term has taken on special significance in U.S. higher education.  Popularized by George Kuh and emerging partly from research using data from the National Survey of Student Engagement (NSSE), the phrase has come to mean a particular set of activities that many higher education researchers believe are especially effective in promoting important and lasting changes in undergraduate students: First-Year Seminars and Experiences, Common Intellectual Experiences (i.e., core curricula), Learning Communities, Writing-Intensive Courses, Collaborative Assignments and Projects, Undergraduate Research, Diversity/Global Learning, Service Learning, Community-Based Learning, Internships, and Capstone Courses and Projects.

Unfortunately, we sometimes place too much focus on these particular activities without understanding why these activities have a high impact.  As originally described by Kuh in 2007, these practices share six characteristics:

  1. HIPs “demand that students devote considerable amounts of time and effort to purposeful tasks” (p. 7)
  2. HIPs place students in circumstances that require they “interact with faculty and peers about substantive matter” (p. 7)
  3. HIPs greatly increase the likelihood that students will interact with people who are different from themselves
  4. HIPs provide students with very frequent – sometimes continuous – feedback from faculty and peers
  5. HIPs require students to operate in intellectually complex ways by connecting knowledge from different courses and applying it in different contexts, e.g., confronting complex real-world issues or investigating unfamiliar research problems
  6. HIPs occur in the context of a “coherent, academically challenging curriculum” (p. 8)

I am particularly interested in focusing on these characteristics of high impact practices as I will be helping lead a discussion on my campus next month focused on student engagement.  Most of the participants will be faculty and much of our focus will be on activities that faculty are using or can use in their curricula to promote student engagement.  Given that focus, I don’t think it would be helpful to focus on the specific activities identified as HIPs as those are often beyond the resources and purview of an individual faculty member.  Instead, we will focus on why those activities have a high impact so we can apply those principles to the activities within the power and resources of individual faculty.

That is what was at the forefront of my mind when I “attended” an EDUCAUSE Learning Initiative (ELI) online conference last week that focused on MOOCs.  The conference had some very active discussions among participants, and as I participated in those discussions it occurred to me that one of the primary reasons I am uncomfortable with MOOCs is that it is difficult or impossible to apply much of what we know about good teaching in that environment.

Look back up at those six principles of high impact practices.  How do we apply those principles in a MOOC?  More pointedly, can we apply those principles in a MOOC?  I despair that the answer is mostly “no.”  I pray that it is a simple lack of imagination on my part, a misunderstanding of what we can do in a MOOC, or that this is a fatal flaw of the dominant MOOC model that others will quickly recognize and fix or use as a reason to abandon that model.  I also confess that I don’t completely understand all of the discussions about “xMOOCs” and “cMOOCs” on anything but a very theoretical and abstract level, and I have a sneaking suspicion that I’m missing something very important in how cMOOCs address some of these principles.

There is another interesting and hopeful way to think about this.  Another ELI conference attendee – I’m sorry that I don’t remember who – suggested that there may be other paradigms of effective educational practices that MOOCs might better fit.  Although I am a little bit skeptical that our understanding of effective education is going to be radically upended, this recommendation not to be too constrained by our current thinking is a very good one.  In fact, that is one important reason why I will be trying to steer our discussion here on my campus next month away from the specific activities and toward the broader principles so we can compare our thinking about student engagement with that of others.  The idea isn’t to impose the model on my campus but to use it as a common starting point that must be adapted to our unique needs and resources.

That, of course, is what we’ll need to do with MOOCs: Use our best understanding of effective teaching and shape it to this unique environment with unique affordances.  I don’t know how to do that and I don’t know if that is what is being done.  I am wary that much of what is being done is not methodical and not built on what we know about how people learn.  I am especially skeptical that we can provide the kind of demanding and socially and intellectually connected experiences that we know provide some of the best learning.  I hope that people smarter than I are figuring this out, though, and working out how MOOCs can provide high impact educational practices.

Advantages and Challenges of Tech Skills in a Non-Tech Job

[Image: Posterboard showing a draft sitemap for part of a new website – the initial planning stages of our new website.]

When I was on the job market last year, I was not interested in a job in IT.  Although I’m a little bit rusty in some areas after spending five years in a PhD program, I have the skills and background to work in IT as I’ve done in previous jobs.  But I was interested primarily in faculty development and assessment jobs.  For many of those positions, I deliberately downplayed my technology background because I wanted to be interviewed and hired for my research and assessment skills.  I seem to have been successful because I’ve found a wonderful research analyst position in a teaching and assessment shop where none of my job responsibilities involve technology maintenance, development, management, or support.

That has made the last couple of weeks a little bit strange as I’ve taken on significant responsibility in planning and supporting my center’s website as we finish merging our old websites and move them to a new content management system. This is the kind of project I was trying to avoid because it’s completely outside my current job description and I fear being typecast in this role. But you know what? It’s working out just fine. I am able to greatly help my colleagues and they are supportive and grateful but I don’t fear being relegated to being the unit’s tech support.

This is possible because of two things. First, I have wonderful coworkers who are very supportive. They are supportive of my professional growth as a higher education scholar, researcher, and faculty developer. They are also very respectful about how my time is used and the kinds of tasks they ask me to take on. Second, I have been very open about setting boundaries. I am always happy to help my colleagues with technical issues but they are also okay with going through regular channels for larger, more complex issues even though I could spend time solving many of those, too. Because I have been very open with my colleagues about how I would like to use my experiences and knowledge and because my colleagues are wonderful, they have even shielded me a bit to ensure that others don’t try to take advantage of me. For example, despite our official tech support warming up to me very quickly (e.g., it only took a few minutes of me asking the right questions to be granted local admin access to my computer, something that apparently is a rarity here), my colleagues have intentionally made someone else our unit’s official point of contact with tech support to ensure they would not rely on me to solve problems or do extra work.

This seems to be working out well. My background gives me the knowledge to ask many of the right questions when we’re dealing with technology, e.g., I can “geek out” with a colleague in IT to more effectively and efficiently probe for information as we figure out which content management system to use for our new website. My skills let me solve little problems in our office very quickly or recognize when problems are out of our control or even unsolvable. My experience guides me to help my colleagues make wise decisions that will be maintainable into the future even when I no longer work here, e.g., we’re going to move to Qualtrics as our event registration system instead of using custom-built Google Forms that require significant technical skill to use and maintain effectively.

You can take the computer geek out of IT but apparently you can’t take the IT out of the computer geek. And I’m becoming okay with that.

Data Analysis MOOC Week 3: I’m a Dropout

Despite my best intentions, I have become another MOOC dropout.  Why am I not continuing to participate in this free course?

  1. The format isn’t compelling.  The course is primarily built around four components: video lectures and notes, weekly quizzes, a discussion board, and two peer-graded assignments. The lectures are alright and although there are many other online R resources it’s nice to have concise descriptions of R procedures specifically linked to data analysis. The discussion board is also helpful but there are many other places to find help with R. As discussed in my previous post, the weekly quizzes are very disappointing as they are the primary means by which students in this course practice what they learn but they offer very, very little feedback.  My biggest regret is that I won’t experience the peer-graded assignments. While the idea of requiring students to grade one another’s work is likely driven largely by the logistics of a MOOC, peer-graded assignments can be very powerful and worthwhile even in small classes.  That these are the only non-quiz activities in the course is disappointing, especially since there are only two of them.  Although it will be helpful that each student should receive feedback from several classmates (if it’s possible, I might provide feedback on the reports for some of my classmates even though I won’t be writing my own), it often takes more than two attempts for students to learn and begin to master new skills.
  2. Except for the peer-graded reports, there seems to be little reason for this course to be on a lockstep 8-week schedule. I might be able to stay with it if the timing were more flexible.  Even in the first three weeks of the course I’m having some trouble consistently making time to view all of the videos. I had planned to do this all at work as my supervisor supports this as important and valuable professional development but I’m having trouble doing that because it’s sometimes difficult to carve out the time and I feel guilty watching online videos at work for a non-credit course when I feel like I should be doing something more (visibly and authentically) productive.
  3. I can’t convince myself to participate in the two peer-graded reports, the only meaningful assignments in this course.  This is linked directly to the material of this specific course and is not a criticism of the course itself. I simply can’t muster the will to conduct additional data analysis and write additional reports for this course when those are two of my primary job duties.  It’s not that I don’t think that I could learn from the activities, develop new skills, and become a better data analyst and writer.  I just can’t bring myself to spend so much time analyzing data and writing reports unrelated to either my job or my research.  I am disappointed as I was looking forward to these substantive activities, especially being able to receive feedback from others and seeing how others approached the same activities.

Although I’m disappointed to have decided to not continue with the activities of this MOOC, I am happy to have enrolled and tried it out.  I will continue to download the course materials so I can reference them when I am ready to put them into practice in meaningful ways.

I have very mixed feelings about the broader concept of MOOCs.  It would take an extraordinary effort for an online course, especially a MOOC, to match the quality of the best face-to-face courses.  But the reality is that few face-to-face courses are “the best.”  Although the dominant MOOC model seems to mimic much of the worst lecture courses in traditional universities, even the worst course is sometimes good enough especially when the alternative to a crappy, frustrating, and largely self-driven education is no education at all.

Data Analysis MOOC Week 2: Muddling Through Frustration

I have watched the online videos and successfully completed the quiz for week 2 of the data analysis MOOC in which I am enrolled. I struggled quite a bit with some of the R syntax and that made the quiz a very frustrating experience. I have two observations to share about what I learned this week about the format of the course.

First, I am disappointed that so far the only opportunity for students to practice what is being taught and receive feedback is the weekly quiz.  I was able to muddle through things enough to get answers that matched the response options for this week’s multiple-choice quiz but despite answering all questions correctly I’m still very unsure of much of the content – I just know that I happened to somehow end up with answers that matched some of the ones included in the quiz.  Some of this is simply due to my lack of experience with R and its steep learning curve.  But much of it is due to the fact that the multiple-choice quiz was the only opportunity to practice with any semblance of feedback, and that feedback was restricted to an anemic “correct” or “incorrect” for each question with no further explanation.

Yes, I can practice some of the skills taught in this class on my own.  This is certainly the case if I want to focus solely on learning how to use R – syntax, configuration, functionality, etc. – as the language provides immediate feedback with error messages or output.  But if that is the focus and if that’s sufficient to learn the skills then why do we need an organized course instead of just a course packet or a list of recommended self-guided topics and exercises?

What distinguishes an organized, well-taught class from a self-taught topic is that a class has an expert who not only makes their thinking explicit but also offers targeted feedback for students as they practice the skills they are learning.  It’s conceivable that some skills could be taught using sophisticated, automated tools if we have a deep enough understanding of how people typically learn those skills that we can programmatically recognize mistakes and misunderstandings and provide appropriate, specific feedback.  Sometimes this can be done to a (very) limited degree with appropriately designed multiple-choice instruments where the incorrect responses are diagnostic, i.e., wrong answers aren’t merely incorrect but are written to reveal particular kinds of mistakes or misunderstandings.  That seems to be the case for some of the questions and answers in this MOOC but we’re not provided with any of the related feedback to help us understand what common mistake we may have made, how we might be misunderstanding the issue, and how we can work to correct our thinking.
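
To make that concrete, here is a tiny sketch – with an entirely made-up question and feedback messages – of what diagnostic multiple-choice feedback could look like: each distractor maps to the misunderstanding it most likely signals.

```r
# Entirely hypothetical question and feedback, just to illustrate diagnostic distractors:
# each wrong answer points at a specific, common misunderstanding.
feedback <- c(
  a = "Correct: subsetting with a logical vector keeps only the rows where the test is TRUE.",
  b = "Not quite: this result suggests you subsetted columns rather than rows.",
  c = "Not quite: this result suggests NA values were counted before being handled.",
  d = "Not quite: this is the total row count, so the filter may not have been applied at all."
)

grade_response <- function(choice) feedback[[choice]]

grade_response("b")  # the student sees targeted feedback, not just "incorrect"
```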

Second, the size of the course requires innovative ways to provide support for students and this course seems to rely heavily on the course discussion board.  This is an observation, not a criticism. I’m quite comfortable using that medium as I’ve been using online discussion boards since the early 1990s when they were one of the primary draws for dial-up bulletin board systems (with the other major draw being online “door” games).  I don’t know how well this works for other students, however, as I don’t want to make assumptions about their experiences, skills, and cultures.  It’s probably not a big deal; my concern here is very minor and more of a curiosity about how other students experience and use (or don’t use) the discussion board. (In other situations I would be concerned about those who have poor or no Internet access or those who have little comfort and experience with the Internet but it’s reasonable to expect students who enroll in an online course to have sufficient Internet access and skills. I’m not suggesting that everyone has the access and skills to enroll in an online course, merely that those who are already enrolled in one presumably have the required access and skills.)

Students Believe Their Computer Skills Are Below Average

Our colleagues at the Higher Education Research Institute (HERI) at UCLA have publicly released some information from their annual survey of first-year students.  There are already several media reports on the topic and we can expect many more to come out over the next few days.  What caught my eye is that they shared some of their data with The Chronicle of Higher Education, which created an interactive graphic showing how student responses have (or have not) changed over time.

Several of the questions on the survey ask students to compare themselves “with the average person your age” and “rate yourself above average or better in terms of __.”  For nearly all of the questions of this form, students have consistently rated themselves as above average: ability to see the world from someone else’s perspective, tolerance of others with different beliefs, openness to having their own views challenged, ability to discuss and negotiate controversial issues, ability to work cooperatively with diverse people, academic ability, emotional health, and physical health.

So for what topics do respondents believe they are below average?  Computer skills, spirituality, and writing ability.  I don’t care to comment on spirituality (a commenter on the Chronicle’s website asks a good question: “What on earth does [that question] mean?”).  I’m puzzled that first-year college students believe they are below average in writing ability but I’m not an expert on writing so I’ll leave that puzzle to others.

What does it mean that 35% of the respondents to the survey rate themselves below average in computer skills?  And what does it mean that students have consistently responded like this since the question was first asked in 1999?  Well, to know for sure we’d have to ask them.  I would want to know how they interpret “computer skills.”  What do they consider to be computer skills?  How are they measuring their computer skills?  And to whom are they comparing themselves?  Heck, given the proliferation of smart phones and tablets it would be a good idea to ask students (and ourselves!) just what they think of as a “computer.”

One possible factor in all of this may be related to the gender imbalance in undergraduate education in the U.S.  More women than men are enrolled in U.S. colleges and universities.  According to the most recent data published by the National Student Clearinghouse Research Center, 56% of the students enrolled in the fall of 2012 were women.  Why is this important?  We know that women typically underestimate their computer skills whereas men typically overestimate theirs.  If the data reported by the Chronicle are unweighted then this may have an even larger impact on the results because women typically respond to surveys in higher proportions.
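
A toy calculation shows why weighting matters here; all of the numbers below are invented purely to illustrate the mechanism and are not taken from HERI or the Chronicle.

```r
# Invented, illustrative numbers: suppose women make up 56% of enrollment but 65% of
# survey respondents, and the two groups rate their computer skills differently.
enrollment_share <- c(men = 0.44, women = 0.56)
respondent_share <- c(men = 0.35, women = 0.65)
rate_above_avg   <- c(men = 0.40, women = 0.30)  # share rating themselves above average

unweighted <- sum(respondent_share * rate_above_avg)  # what a raw tally of responses shows
weighted   <- sum(enrollment_share * rate_above_avg)  # estimate weighted back to enrollment

round(c(unweighted = unweighted, weighted = weighted), 3)
# The unweighted figure is lower simply because the group that rates itself
# lower is overrepresented among respondents.
```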

(Aside: The National Student Clearinghouse Research Center is doing some incredibly cool and vital research these days. They have a huge warehouse of data about college enrollment and it’s great to see them putting it all to use! Check out what they’re doing – it’s good stuff.)

In any case, it’s interesting that so many first-year students at 4-year institutions believe their computer skills are below average.  I doubt that it’s actually true but I would certainly agree that they are nowhere near as proficient as some of the common assumptions (e.g., “digital natives”) make them out to be.  Is this a problem?  Should we be worried or looking for a solution?  That’s a different and more complex discussion but I think it’s safe to say that first-year college students are precisely as proficient as they have needed to be given how they use computers in their daily lives – just like everyone else.  They don’t typically use their computers to perform complicated or deeply technical tasks so why would we expect them to be profoundly tech savvy?

Data Analysis MOOC Week 1: I’m Going to Hate This

This semester, I have signed up for a data analysis class being taught on Coursera. This is a massive open online course (MOOC).  I’m tech savvy and well educated but it seems like the most responsible way for me to really learn about MOOCs is to gain some firsthand experience.  I also hope to learn some new data analysis techniques and ideas in this course.  The course will use R to analyze data so it will also be good to expand my (very limited) skills and knowledge with that powerful tool.

Going into this, I am very skeptical about what I understand the typical MOOC model to be, with instruction occurring primarily through pre-recorded videos and quizzes and a discussion board as the main means of communication between students and faculty.  I hope I’m wrong either about the model of instruction or about its effectiveness.  As an educator, I believe (and am supported by significant evidence) that the best learning occurs when experts make their thinking explicit through demonstration and give learners multiple opportunities for focused practice and feedback.  So my skepticism about the effectiveness of videos and quizzes as learning and teaching tools can best be summed up as: “Telling is not teaching.”  (Note that this applies just as forcefully to passive lecturing in physical classrooms!)

I’ve just started to get into the material for this course and so far it looks like my low expectations are going to be met: the course is built heavily around pre-recorded videos as the way for the faculty to teach students, with weekly online quizzes and two peer-graded assignments as the only opportunities for us to “practice” what we are “learning.”  I hope I’m wrong and this proves to be much more enjoyable and rewarding than I think it will be!

Venues for Publishing Student Affairs Technology Research

One of my colleagues recently made an offhand remark about the timeliness of an article in the current issue of The Journal of College Student Development.  Rather than focus on the comment or the specific article, however, it seems more productive to explore venues where similar work could be published in a more timely manner.

The problem?  Much of the research that we conduct about technology must be shared and disseminated quickly to keep up with the rapid pace with which technologies and their uses change.  Many of the traditional venues for publication and dissemination of research have huge lag times, sometimes a few years long; this is particularly problematic for technology-related research, which grows out-of-date much more quickly than many other bodies of information.  I have conducted research that grew out-of-date before I could get it published in peer-reviewed journals, e.g., work with my colleague Chris Medrano examining content in Wikipedia articles about U.S. colleges and universities.  I have had data – really good data about interesting stuff! – grow stale over the course of a very busy year-and-a-half such that I no longer felt I could use it (I could have, and it was such cool stuff that I’m sure it would have been published somewhere, but I would have felt horrible and a little bit ashamed about it!).

Although I have moved out of student affairs, I continue to do work about student and faculty use of technology so this is still an issue that is important to me.  I’d like your help in thinking about how we get our work out there.  Here are some of my thoughts:

  • Does the publication or release need to be through a traditional, peer-reviewed venue?  Even for those of us who believe ourselves to be locked into the traditional academic world where peer-reviewed publications remain the gold standard, I think the answer is “no.”  It might be acceptable to blog about your findings or present them at non-traditional conferences, especially if those venues allow you to better reach your intended audience (e.g., how many full-time student affairs professionals regularly pore over peer-reviewed journals?).
  • For those who do believe in the necessity or value of publishing or presenting in traditional venues, which ones allow us to disseminate our findings in a timely manner?  My initial reaction to the comment that began this entire line of questioning is that JCSD is a fine venue but it moves too slowly to publish much of the technology-related research I have conducted.  In fact, most of the peer-reviewed journals in higher education move too slowly for me to consider them viable venues for publication of timely technology-related research.

Maybe it would be helpful if we can compile a list of good venues for student affairs technology research.  (Although I’m mostly out of that field now, I still do some work in it and my experiences are significant enough that I think I can help.)  My suggestions, in no particular order:

  • First Monday: Online, peer-reviewed journal that focuses on Internet studies.  They have published higher education-specific work in the past so they seem open to the topic.  It’s also a respected venue for scholarly work.  Very importantly, I understand that they review submitted articles very quickly.
  • Journal of Computer-Mediated Communication (JCMC): Peer-reviewed journal with an obvious focus.  Like First Monday, they have published work in our field.  It’s also the most respected venue that is usually on my radar screen for timely publication of relevant work.
  • The Journal of Technology in Student Affairs: Another peer-reviewed journal with an obvious focus.  Although this is a viable venue, it’s probably not one that I would submit to as my first choice.  It’s a fine publication but it simply doesn’t have a strong, high-profile reputation.  That may sound very crass but the reality of scholarly publishing is that it’s important to publish in the most highly regarded journals possible.
  • EDUCAUSE Review Online (ERO): Although ERO publishes some peer-reviewed work, it largely exists outside the traditional world of scholarly research because the publication is aimed at higher education IT practitioners.  With that said, it has historically been a very good venue for work that is intended for that audience although I haven’t published in it since they changed their format (EDUCAUSE used to have a monthly magazine and a quarterly peer-reviewed journal; they’ve been merged into one publication, ERO).

Outside of formal publications, several conferences are good venues to present and discuss this kind of work. I personally like EDUCAUSE events quite a bit but the audience that is interested in student affairs-specific work is pretty small.  The EDUCAUSE Learning Initiative (ELI), the arm of EDUCAUSE that focuses on teaching and learning, also puts on really nice conferences with wonderful participants if your work is more oriented towards teaching and learning.  I have also presented at other higher education conferences such as the annual conferences for ASHE, AERA, and AIR.  They are large conferences and quite frankly I don’t care for them very much because (a) they lack focus and (b) I have difficulty believing that anything that happens at them impacts the world beyond being another line on my CV.  AIR is a bit better, though, because it does have some focus and much of the work discussed there has real-world implications and impact largely because of the strong presence of institutional research professionals.

The student affairs conferences are certainly viable venues, particularly the recent ones that have begun cropping up that focus specifically on technology, e.g., #NASPATech, #satechBOS.  I have drifted away from student affairs conferences over the past several years, though, so I will let others with more recent experience offer their opinions and evaluations.

If you find this kind of brainstorming helpful or interesting, feel free to add your thoughts below.  If enough people are interested, this would make for a good shared project to throw into a publicly accessible editing environment like a Google doc.