Data Analysis MOOC Week 1: I’m Going to Hate This

This semester, I have signed up for a data analysis class taught on Coursera. This is a massive open online course (MOOC).  I’m tech-savvy and well educated, but it seems like the most responsible way for me to really learn about MOOCs is to gain some firsthand experience.  I also hope to learn some new data analysis techniques and ideas in this course.  The course will use R to analyze data so it will also be good to expand my (very limited) skills and knowledge with that powerful tool.

Going into this, I am very skeptical about what I understand the typical MOOC model to be, with instruction occurring primarily through pre-recorded videos and quizzes, and with a discussion board as the primary means of communication between students and faculty.  I hope I’m wrong either about the model of instruction or about its effectiveness.  As an educator, I believe (and am supported by significant evidence) that the best learning occurs when experts make their thinking explicit through demonstration and give learners multiple opportunities for focused practice and feedback.  So my skepticism about the effectiveness of videos and quizzes as learning and teaching tools can best be summed up as: “Telling is not teaching.”  (Note that this applies just as forcefully to passive lecturing in physical classrooms!)

I’ve just started to get into the material for this course and so far it looks like my low expectations are going to be met: the course is built heavily around pre-recorded videos as the way for the faculty to teach students, with weekly online quizzes and two peer-graded assignments as the only opportunities for us to “practice” what we are “learning.”  I hope I’m wrong and this proves to be much more enjoyable and rewarding than I think it will be!

Venues for Publishing Student Affairs Technology Research

One of my colleagues recently made an offhand remark about the timeliness of an article in the current issue of The Journal of College Student Development.  Rather than focus on the comment or the specific article, however, it seems more productive to explore appropriate venues for publishing similar work in a more timely manner.

The problem?  Much of the research that we conduct about technology must be shared and disseminated quickly to keep up with the rapid pace with which technologies and their uses change.  Many of the traditional venues for publication and dissemination of research have huge lag times, sometimes a few years long; this is particularly problematic for some technology-related research that grows out-of-date much more quickly than many other bodies of information.  I have conducted research that grew out-of-date before I could get it published in peer-reviewed journals, e.g., work conducted with my colleague Chris Medrano examining content in Wikipedia articles about U.S. colleges and universities.  I have had data – really good data about interesting stuff! – grow stale over the course of a very busy year-and-a-half such that I could no longer work with it in good conscience (it was such cool stuff that I’m sure it would have been published somewhere, but I would have felt horrible and a little bit ashamed about it!).

Although I have moved out of student affairs, I continue to do work about student and faculty use of technology so this is still an issue that is important to me.  I’d like your help in thinking about how we get our work out there.  Here are some of my thoughts:

  • Does the publication or release need to be through a traditional, peer-reviewed venue?  Even for those of us who believe ourselves to be locked into the traditional academic world where peer-reviewed publications remain the gold standard, I think the answer is “no.”  It might be acceptable to blog about your findings or present them in non-traditional conferences, especially if those venues allow you to better reach your intended audience (e.g. how many full-time student affairs professionals regularly pore over peer-reviewed journals?).
  • For those who do believe in the necessity or value of publishing or presenting in traditional venues, which ones allow us to disseminate our findings in a timely manner?  My initial reaction to the comment that began this entire line of questioning is that JCSD is a fine venue but it moves too slowly to publish much of the technology-related research I have conducted.  In fact, most of the peer-reviewed journals in higher education move too slowly for me to consider them viable venues for publication of timely technology-related research.

Maybe it would be helpful to compile a list of good venues for student affairs technology research.  (Although I’m mostly out of that field now, I still do some work in it and my experiences are significant enough that I think I can help.)  My suggestions, in no particular order:

  • First Monday: Online, peer-reviewed journal that focuses on Internet studies.  They have published higher education-specific work in the past so they seem open to the topic.  It’s also a respected venue for scholarly work.  Very importantly, I understand that they review submitted articles very quickly.
  • Journal of Computer-Mediated Communication (JCMC): Peer-reviewed journal with an obvious focus.  Like First Monday, they have published work in our field.  It’s also the most respected venue that is usually on my radar screen for timely publication of relevant work.
  • The Journal of Technology in Student Affairs: Another peer-reviewed journal with an obvious focus.  Although this is a viable venue, it’s probably not one that I would submit to as my first choice.  It’s a fine publication but it simply doesn’t have a strong, high-profile reputation.  That may sound very crass but the reality of scholarly publishing is that it’s important to publish in the most highly regarded journals possible.
  • EDUCAUSE Review Online (ERO): Although ERO publishes some peer-reviewed work, it largely exists outside the traditional world of scholarly research because the publication is aimed at higher education IT practitioners.  With that said, it has historically been a very good venue for work that is intended for that audience although I haven’t published in it since they changed their format (EDUCAUSE used to have a monthly magazine and a quarterly peer-reviewed journal; they’ve been merged into one publication, ERO).

Outside of formal publications, several conferences are good venues to present and discuss this kind of work. I personally like EDUCAUSE events quite a bit but the audience that is interested in student affairs-specific work is pretty small.  The EDUCAUSE Learning Initiative (ELI), the arm of EDUCAUSE that focuses on teaching and learning, also puts on really nice conferences with wonderful participants if your work is more oriented towards teaching and learning.  I have also presented at other higher education conferences such as the annual conferences for ASHE, AERA, and AIR.  They are large conferences and quite frankly I don’t care for them very much because (a) they lack focus and (b) I have difficulty believing that anything that happens at them impacts the world beyond being another line on my CV.  AIR is a bit better, though, because it does have some focus and much of the work discussed there has real-world implications and impact largely because of the strong presence of institutional research professionals.

The student affairs conferences are certainly viable venues, particularly the recent ones that have begun cropping up that focus specifically on technology, e.g., #NASPATech and #satechBOS.  I have drifted away from student affairs conferences over the past several years, though, so I will let others with more recent experience offer their opinions and evaluations.

If you find this kind of brainstorming helpful or interesting, feel free to add your thoughts below.  If enough people are interested, this would make for a good shared project to throw into a publicly-accessible editing environment like a Google doc.

Inserting Unique Survey IDs into Multipage Paper Surveys

I still believe in paper surveys.  I believe that their immediacy and accessibility make them very well-suited for some situations.  Although I value technology-based surveys (e.g. Web-based, tablet-based) I definitely believe that there are times when paper surveys are superior.

You can imagine that I was very happy when my new employer approved the purchase of (a) a printer with an automatic duplex scanner and (b) an installation of Remark Office OMR 8.  These two tools together will allow us to conduct paper surveys with some level of ease, automation, and accuracy.  I’m particularly happy that this will allow us to break free from the tyranny of Scantron by allowing us to create customized survey instruments that don’t rely on generic Scantron answer forms.

Now that I am learning how to use Remark Office OMR 8, I am figuring out all of those little things that I was previously able to count on other people to do, often without even knowing that it was being done.  Most recently, I had to figure out how to add unique survey IDs to a multipage survey.  Let me break it down for you:

I have a survey that is six pages long.  On each page, I have the page number and I can tell Remark Office where that page number is so I don’t have to worry about keeping pages in order.  But I also need some way to link all of those pages together when I am scanning multiple surveys so the correct six pages are grouped together in the resulting data file.  Hence I need to add a unique survey ID to each page of each survey.  Adding page numbers is easy but how do I add survey IDs?

I had to do this for my dissertation instrument but that was a one-page instrument so it was a simpler process.  The multipage process took me a few hours to figure out and here is what I have settled on for now:

  1. Create the survey instrument.  I did this in Microsoft Publisher because it was the desktop publishing tool I had at hand.  I suppose you could use Word or something similar but it won’t give you nearly as much control over the layout.
  2. Print or save the survey as a pdf.
  3. Use that pdf to create another pdf with multiple copies of the survey instrument.  Right now, this is the clunkiest part of this process as I haven’t yet figured out how to directly print multiple copies of the instrument as a pdf.  Instead, I have to save multiple copies and merge them together.  It’s not entirely horrible: each merge doubles the number of copies, so it quickly becomes easy to make a single pdf file with many, many copies of the survey instrument.
  4. Create a simple Excel spreadsheet with the sequence of survey IDs.  My survey instrument has six pages so I end up with one column of numbers where each number is repeated six times before being incremented to the next one.  This spreadsheet is used in a mail merge so I suppose this could easily be done as a comma-separated file or in some other program that produces similar output.  It’s important that the number of survey IDs matches the number of surveys in your pdf.
  5. Create a simple Word document whose only text is a merge field that will insert the survey IDs into the document.
  6. Merge the Word document and save or print the resulting file as another pdf.  You now have two pdf files with the same number of pages; one has survey instruments and the other has survey IDs.
  7. Use pdftk to add the survey ID pdf as a background to the survey instrument pdf.  pdftk is a simple command line tool that lets you manipulate pdfs.  It’s freely available for many platforms, including Windows.  I used the “multibackground” parameter to essentially merge these two pdfs into one, adding the survey IDs to the survey instruments.  I got lucky in that my survey IDs were well-aligned with my survey instrument but you might have to modify one or both of your documents to get the survey ID to end up where you want it.  (For one way to script this step, along with steps 3 and 4, see the sketch after this list.)
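If you want to script the clunkier parts of this process, here is a minimal Python sketch of how steps 3, 4, and 7 might be automated.  The file names (survey.pdf, surveys.pdf, survey_ids.csv, ids.pdf, merged.pdf), the merge-field name, and the survey count are all hypothetical, the Word mail merge in steps 5 and 6 still happens outside the script, and for step 3 it swaps in pdftk’s “cat” operation in place of the repeated save-and-merge I described above.

    # Minimal sketch: duplicate the survey, generate the ID sequence for the
    # mail merge, and stamp the merged IDs behind the survey pages with pdftk.
    # File names and NUM_SURVEYS are hypothetical; adjust them to your setup.
    import csv
    import subprocess

    PAGES_PER_SURVEY = 6   # my instrument is six pages long
    NUM_SURVEYS = 100      # how many copies you intend to print and scan

    # Step 3: build a single pdf containing NUM_SURVEYS copies of the instrument.
    subprocess.run(
        ["pdftk", "A=survey.pdf", "cat"] + ["A"] * NUM_SURVEYS
        + ["output", "surveys.pdf"],
        check=True,
    )

    # Step 4: one column of survey IDs, each repeated once per page of the instrument.
    with open("survey_ids.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["SurveyID"])   # merge-field name used in the Word document
        for survey_id in range(1, NUM_SURVEYS + 1):
            for _ in range(PAGES_PER_SURVEY):
                writer.writerow([survey_id])

    # Step 7: after the Word mail merge has produced ids.pdf (one ID per page),
    # lay each ID page behind the corresponding page of the survey pdf.
    subprocess.run(
        ["pdftk", "surveys.pdf", "multibackground", "ids.pdf",
         "output", "merged.pdf"],
        check=True,
    )

The multibackground operation lays each page of ids.pdf behind the corresponding page of surveys.pdf, which is exactly the pairing this process needs.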

Now that I have unique survey IDs for each survey and page numbers on each page, I can feed the surveys into the scanner in any order I want and everything will work!  I just have to ensure that they’re all right-side up because I don’t know how well Remark Office OMR 8 detects and corrects for upside-down instruments.  (It’s a feature of the software but I’ll have to test it; if this were a real concern I’d look into possible solutions such as cutting off or rounding one of the corners, but I’ll be working with small enough batches that it will be easier just to flip through the completed instruments.)

Dissertation Journal: Less Time and More Pressure Makes Kevin a Productive Boy

Although I have not finished my dissertation, I began a full-time job a little over a month ago.  I know that this is a dangerous move and that many people who leave school before completing their dissertation never complete it.  I also know that even in the best circumstances this will delay my progress.  This is a move motivated by the reality of five years of graduate student pay and loans, however, not by academic concerns.

So far this is working out well.  For over a year, I was stalled and made no progress at all.  I was paralyzed by indecision and fear and always eager to find other interesting and worthwhile projects.  I was also very good at dodging or redirecting questions from friends and colleagues.  But I knew that I wouldn’t be able to dodge questions from potential employers so I had to buckle down and get back on task – I didn’t have a choice.  Backing myself into a corner seems to have been the right choice as it forced me into action.

As I entered the job market, I began writing again so I could honestly tell interviewers that I was making substantive progress.  Even then I wasn’t writing as much or as often as I should have been.  Once I had a job offer, however, I knew that my days as a full-time student with lots of discretionary time were quickly coming to an end.  I finally got off of my ass and wrote with the effort and work ethic that I should have employed a year ago so I could finish my first three chapters and submit them to my chair.  I knew that for about two months I wouldn’t have any time to devote to my dissertation so I did as much as I could before moving and starting a new job.  I finished new drafts of my chapters and submitted them to my chair the day before I began packing up and moving to Delaware.  It was a huge relief to have made substantial progress so I could move with a clear conscience and start a new job without this looming over me.

As I have settled into my new job, I have learned that I have been extraordinarily lucky by landing a job where my supervisor, director, and colleagues are extremely supportive of me completing this terminal degree.  When I was offered this job, I wanted to negotiate a pay raise dependent on completion of my dissertation to incentivize it.  That wasn’t possible as my supervisor negotiated the highest pay she could get for me regardless of my doctorate or lack thereof.  But my supervisor wants me to finish my doctorate for my own benefit; when we discussed my goals for the year, she asked me to place this at the top of the list.  Today, she asked if I would like to carve some time out of my work schedule to work on my dissertation on a regular basis as a form of professional development.  I couldn’t ask for more and I now feel a responsibility to justify the support I have been given.

I was also very fortunate in that one of my faculty members reached out to me to offer advice about completing the dissertation while working full-time but I will post that advice in a separate post because it may be more interesting to a larger audience than news about my personal journey.

Plagiarism of ResNet Research

This does not represent the views or opinions of anyone other than myself.   Specifically but not exclusively, this does not represent the views or opinions of anyone with whom I have worked in the past, my employer, or anyone associated with ResNet, Inc.

I am very, very sad to have to write and publish this entry.  I have always thought very highly of ACUTA, the U.S. higher education professional organization that focuses on networking and telephony. They have produced high quality reports and conferences, including conferences and webinars at which colleagues and I have presented.  They were also very gracious in allowing me to visit their headquarters in Lexington, Kentucky, a few years ago to comb through some of their historical archives as I performed historical research.

Six months ago, on April 6, I contacted ACUTA to draw attention to the material in the then-recently released ACUTA ResNet Survey that is identical to material in previous research conducted by me and other colleagues loosely associated with the ResNet Symposium (now the ResNet Student Technology Conference). Although ACUTA initially claimed that any similarities were “inadvertent,” they later admitted that at least 15 of the 45 questions – one-third – on their survey are virtually identical to older questions copied without attribution.  Despite this admission, ACUTA has only partially and reluctantly acknowledged publicly the previous work from which a substantial portion of their current survey was copied. In particular, (a) the summary report and infographic associated with ACUTA’s survey make no mention whatsoever of the previous work upon which those documents are substantially built and (b) the ACUTA website was edited only in the past few days, presumably in response to an e-mail I sent on September 28 allowing them one more week to make edits before making this issue public.

This is not a legal issue.  Although I am one of the copyright holders of the original 2005 and 2008 survey instruments and reports and I could pursue legal action against ACUTA and their contractor Forward Analytics, it is highly unlikely that I will do so.  I have no interest in making money from my original work or the work performed by ACUTA and Forward Analytics.  I’m not very interested in stopping ACUTA from conducting their surveys and publishing results; in fact, I’m quite pleased that the work is being continued and I am flattered that they believe that the survey instrument I helped create is of sufficient quality that they are reusing and building on it.

This is an ethical issue.  In academia, we respect the work that others have done by clearly drawing attention to it when we build on their work.  It is right to give people credit for what they have done, especially when we are benefiting from that work.  Moreover, it is essential that we give readers a clear idea of the provenance of our ideas so they can perform due diligence to assure themselves of the quality and rigor of our work.

It is not necessary to ask permission to build on the ideas of another; as far as I am concerned, ACUTA is welcome to use, modify, and adapt questions from the survey instruments I helped to develop. But it is necessary to give us credit, both to acknowledge the work that my colleagues and I did and to allow others to know where some of the content in the ACUTA survey originated.  I don’t think it’s asking very much when I ask ACUTA to play by the same rules as everyone else in academia.  I am perplexed and saddened that, in the half year since I first contacted ACUTA, they have not taken a few minutes to add a sentence or a footnote to their documents acknowledging the work on which theirs is built.


Page from draft of 2005 ResNet survey

Page one of draft 7 of the 2005 ResNet Survey. Note (a) the date in the bottom-right corner, January 10, 2005, and (b) the note at the very top citing the previous research most influential on this instrument, an internal note that was later expanded when we solicited responses and published the results of the survey.

Plagiarism is a very serious charge.  ACUTA has acknowledged in private e-mail messages that many questions were copied from the 2005 and 2008 survey instruments.  I am not quite comfortable publicly publishing the contents of private e-mail messages but here are some examples of the evidence that originally led me to be concerned about this:

1. Based on ACUTA’s report, their survey instrument asked “Is your institution’s residential network separate from the rest of the campus network(s)?” with the response options of (a) Yes, only physically, (b) Yes, only logically, (c) Yes, both physically and logically, and (d) No.  In 2005, my colleagues and I asked “Is your residential computer network separate from the rest of the campus network(s)?” with the response options of (a) Yes, our residential computer network is physically separate, (b) Yes, our residential computer network is logically separate, (c) Yes, our residential computer network is both physically and logically separate, and (d) No.

2. Based on ACUTA’s report, their survey instrument asked “How many staff members (FTE) provide direct support to your campus residential computer network and its users?”  In 2008, my colleagues and I asked “How many full-time equivalent (FTE) staff provide direct support to your campus residential computer network and its users?”

3. ACUTA’s report states that “50% of IT Departments pay for bandwidth supplied to the residential networks but do not recover the cost.”  In 2005, my colleagues and I asked “Who pays for the bandwidth available to the residential computer network and are the costs recovered? (Check all that apply)” with the response options of (a) An outside vendor supplies the bandwidth and recovers some or all of the cost through a charge to the university, (b) An outside vendor supplies the bandwidth and recovers some or all of the cost through resident fees, (c) Central IT pays for it and recovers some or all of the cost through fees to residents or interdepartmental charges to Housing, (d) Central IT pays for it and does not recover the cost, (e) The Housing department pays a non-university ISP and recovers some or all of the cost through rent or other fees, and (f) Other (please specify) [emphasis added].

4. ACUTA’s report states that respondents were asked “What organization on your campus is primarily responsible for maintaining the infrastructure of your residential computer network?” with two pie charts displaying the responses, one pie chart for the Logical Infrastructure and the other pie chart for the Physical Infrastructure.  In 2005, my colleagues and I asked “What organization on your campus is primarily responsible for maintaining the physical infrastructure of the computer network for your on-campus housing facilities? Examples of this responsibility may include physical installation and maintenance of wiring, network switches, and installing and repairing data ports. (Check all that apply)” and “What organization on your campus is primarily responsible for managing the logical infrastructure of the computer network for your on-campus housing facilities? Examples of this responsibility may include configuring switches and routers, monitoring network traffic, administering servers (DHCP, DNS, etc.), and shaping/filtering network traffic. (Check all that apply)”

5. ACUTA’s report states that “About 9 % of higher education institutions report thet [sic] they are currently outsourcing all or significant portions of their residential network. Another 4% of survey respondants [sic] indicate they are currently considering oursourcing [sic], while 15% of institutions have considered outsourcing their residential network but have yet to pursue such an option.”  In 2005, my colleagues and I asked “Has your institution considered outsourcing any significant portion of the residential computer network, including its support or maintenance, to an outside entity not affiliated with your institution?” with the response options of (a) Yes, we have outsourced significant portions to a non-university organization, (b) Yes, we have considered outsourcing to a non-university organization but not pursued it, (c) We are considering outsourcing to a non-university organization right now, (d) No, we have not seriously considered outsourcing to a non-university organization, and (e) Other (please specify).

New Job: Hello Assessment, Goodbye Student Affairs

Three weeks ago, I started a new job: Senior Research Analyst in the Center for Teaching and Assessment of Learning at the University of Delaware.  I have not updated this blog, responded to blog comments, or even looked at Twitter and some e-mail messages for the past month-and-a-half as I’ve been busy and focused on moving halfway across the country and starting a new job.  That should change as I settle into things and regain my focus.

My new job focuses on assessment of student learning, particularly general education goals.  Some of that will involve analyzing existing assessment data and helping faculty and administrators understand the results, including providing them with concrete recommendations.  Some of that will involve working with others to create or modify plans to assess student learning.  I already know that I will work some with our ePortfolio program as our FIPSE ePortfolio grant pays for some of my salary. Similarly, I am already working with our Howard Hughes Medical Institute Undergraduate Science Education Program grant as that grant also funds a small part of my salary.  I am also very pleased to already be involved in consulting with faculty on research design and assisting my colleagues with teaching and learning workshops.

My new job, however, does not focus on or often interact with student affairs programs and staff. I have already applied the skills and knowledge I gained working with student affairs programs and earning a student affairs graduate degree so this is not a complete disconnect.  But I will not be working in the culture that has been most familiar to me throughout the first decade of my professional life and that is a little bit daunting and sad.  On the whole, however, I am ready to move on as I am very ready for some new challenges and I am very happy to work in assessment and faculty development.  I confess that a tiny bit of that is related to my experiences in my job search, especially the dearth of appropriate jobs in student affairs for someone with my deep and broad knowledge of technology.  I am also sad that I am leaving many of the professional communities that have been so important to me, particularly the less formal but more spontaneous ones like #satech and #sachat.  But on the whole I am very happy to have found a new home that will allow me to stretch my wings and apply many of my skills in analysis in a job whose fundamental function is to ask and answer very important questions.

As I move on, I will be working to tie up loose ends and bring some projects to closure.  In particular, I still plan to complete my research into student affairs professionals’ historical views and uses of technology; I am not sure what form that will take (journal article(s)? series of lengthy blog posts? interactive timeline?) but already know that I will not be presenting at #NASPAtech next month as originally planned.  Of course, I also have to complete my dissertation but that deserves a separate post entirely as that’s more complicated and not tied to student affairs.

I don’t think that this new job will dramatically or instantaneously change many of my broad interests or the topics of this blog aside from the obvious shift away from student affairs, a shift that has been underway for quite some time now anyway.  The impact and use of technology in higher education is still one of my primary interests and I hope that my new job will provide me with new insights and spark new questions.  For example, I am sure that my work with ePortfolios will inform my thinking.  Additionally, I still have connections with other researchers who actively work on interesting questions and I plan to continue working with some of them.  And I’m already beginning to work with faculty here who share these interests, including one who is beginning to explore some possible predictors of student success in hybrid courses (compared to face-to-face courses), so the future is bright!

Dorm vs. Residence Hall: A Silly Debate Nearly 100 Years Old

In most professions, there are certain words or phrases that are used to mark oneself as a member, someone who is “in.”  Many student affairs professionals doggedly avoid referring to on-campus housing units as “dorms,” even going so far as to take offense at the term and trying to correct those who use the hated word.  The preferred term is “residence hall,” a phrase that is used because dorm is perceived by some as being too cold and distant to describe someone’s home.  This is an issue on which a significant amount of energy is spent – just google “dorm vs residence hall” and you’ll immediately be thrown into the battlefield.

Personally, I think the debate – one which sometimes becomes inexplicably heated and emotional – is very silly and is usually a waste of time and energy better spent on substantive issues.  But my point here isn’t to convince you that I’m right.  I only want to share a surprising finding from the historical documents I’m currently reviewing: This debate has been raging for nearly 100 years!

The conference proceedings for the 1941 meeting of the National Association of Advisers and Deans of Men (NADAM), the organization that later changed its name to NASPA, includes a talk given by Mr. R. B. Stewart, Controller of Purdue University (no, I don’t know what that title means, either), on the topic of “Institutional Housing Policies.”  In describing the student housing at Purdue, he noted:

Our approach to the student housing program began in 1927, when we received funds and borrowed money for the erection of our first Residence Hall for men.  At that time, our Trustees eliminated from Purdue terminology the use of the word “dormitory”, and since that date we refer to our housing units as “residence halls,” intending to convey the fact that our units are something more than places to sleep and for one’s being.

Whoa!  I knew that this battle against the word dorm had begun before my time in higher education but I had no idea that it was this old!

Webinar Lessons Learned and Recommendations

At the research center I recently left, I was fortunate to be heavily involved in our webinars for a few years when we first started to conduct them.  After helping develop some of the routines and standard practices, including a checklist and a standardized welcome slide, I remained somewhat involved for the rest of my time there, but only on the periphery as we all became comfortable with the software, Adobe Connect.  Although some parts of those routines are common sense (e.g. “Register for an Adobe Connect account if not already registered several weeks in advance”), some were lessons we learned only through experience and practice.  I don’t think any of them are unique to the research center at which I worked; these are lessons I have applied in my own work outside the center and ones I’ll carry with me to my next job.  The more interesting lessons and recommendations from the checklist we developed:

  1. Have a co-presenter that doesn’t actively present content but monitors text chat. This person can do many useful and important things that the presenter is typically too busy or focused to do, including (a) answer simple questions, (b) perform basic troubleshooting (without hesitating to tell someone, “Sorry, I don’t know what to do but here is a link with some common questions and answers about the presentation software. And we’re recording this webinar so if we can’t fix this then you can always come back later to view the recording.”), (c) pass along particularly interesting questions or questions asked by several attendees to the presenter so he or she can address them, (d) pass along URLs or other pertinent information as the presenter discusses particular topics, and (e) take notes about issues that arise in the webinar, particularly those that require follow-up.  This person can also play an important role in the text chat by modeling behavior: greeting attendees, reminding attendees that they can chat with one another, prompting attendees to ask the presenter questions, etc.  It’s often helpful for this person to physically be in the same room as the presenter as that makes it easier to get his or her attention to pass along important reminders and questions from attendees.
  2. Create a slide that will be displayed 15-20 minutes before the webinar until the webinar begins.  This slide should welcome attendees and give them important information that they may need for the webinar.  We created a standard slide that most of our presenters used in their webinars.  It not only included the title of the webinar and the time at which it would start but it also included technical recommendations and information such as (a) close other programs and applications, (b) visit the webinar vendor’s website to test your connection and web browser (e.g. this website if you’re using Adobe Connect), and (c) go to this webpage for further technical troubleshooting information (e.g. this website if you’re using Adobe Connect).  We regularly updated our standard welcome slide as we learned lessons in each webinar and received (sometimes negative) feedback from attendees.  For example, in response to attendee questions we began adding a link to presentation materials on this slide so attendees could download them before the webinar began.
  3. In the invitation e-mails and announcements, include a link to a registration form that allows registrants to submit questions in advance of the webinar.  This lets you understand some of the expectations of your audience so you can react accordingly.  Sometimes, we would add or tailor content to meet those expectations.  Occasionally we would receive questions that indicated that some registrants misunderstood the topic or the scope of the webinar so we would be sure that (a) we were very clear about the topic and scope at the beginning of the webinar and (b) our invitations, advertisements, and title were all accurate (so we could avoid similar problems in the future, if possible).
  4. Create a post-webinar survey allowing attendees to provide feedback and ask further questions. Have the survey live and available before the webinar starts and place a link to the survey at the end of the webinar so attendees can complete it immediately while they’re still at their computer.

It’s very, very helpful to follow the first recommendation – have a co-presenter not focused on content but on attendees and other issues such as text chat – if at all possible.  Like many people, I have tunnel vision when I’m presenting material, especially in this strange context where I can’t see or hear my audience beyond some abstractions that are very easy to miss or ignore.  So having someone who is not tightly focused on the content is incredibly helpful for me.  In practice, it was not uncommon for this co-presenter to quietly “save the day” in simple ways such as reminding the presenter to begin recording the webinar or swapping headsets with the presenter at the last minute. I had a lot of fun filling this role in many webinars, chatting with attendees in the text chat to answer their questions, help them, and encourage them to participate.

I’m sure that these lessons learned and recommendations aren’t unique to this one research center.  In fact, I know that others such as EDUCAUSE do many of the same things. If you’re involved with an online presentation or training, consider if any of these ideas might be helpful for you and your participants.

Training Application Reviewers Using Adobe Connect

For two years, I chaired the awards committee of Indiana University’s Graduate and Professional Student Organization (GPSO), a duty I relinquished a few months ago. During that time, I oversaw awards processes that awarded nearly $50,000 to Indiana University (IU) graduate and professional students to travel and conduct research. Of course, I didn’t review the hundreds of award applications by myself. Each semester, several dozen IU graduate and professional students reviewed the applications. IU provides students and staff access to Adobe Connect and I used that to orient and train these application reviewers. This worked out very, very well and in this post I will share how and why I used Connect so you can adapt parts of this process for your own use.

Adobe Connect is an online tool that allows participants to share documents and chat using video, voice, and text. It is often used to conduct webinars; my former employer uses it regularly and EDUCAUSE uses it very effectively for their (free and awesome) EDUCAUSE Live! sessions. I used Connect because it was available and very familiar to me, not because it has unique features missing from other similar tools. In particular, I used Connect to:

  1. Hold training sessions online hoping that they would be more convenient to schedule and attend than an in-person meeting.
  2. Record training sessions so those who could not attend live sessions could view them later.
  3. Communicate with attendees using a microphone to talk over some simple and necessary PowerPoint slides.
  4. Allow participants to use text chat to ask questions and make comments. Connect can allow others to use microphones and webcams but I wanted to keep things simple and the playing field level for attendees.
  5. Use poll questions during the training as described below.

The central premise in these training sessions is that reviewing award applications is very comparable to content analysis. So I approached these training sessions in much the same way I train content analysis coders:

  1. Provide a broad overview of the philosophy and purpose of the application review processes. This included not only a brief review of the mission of the GPSO Awards Committee but also an overview of the entire process, including those parts that fall outside the application review process. This broad overview helped reviewers not only better place their efforts in context but it also armed them with knowledge that helped them resolve ambiguities and unpredicted situations. If you’re the touchy-feely sort, it helped empower the reviewers. If you’re not, it helped reduce the number of questions they asked me. Either way, it was very helpful for the process.
  2. Introduce the award applications and the criteria used to review them. As I did this, I briefly tried to explain the rationale for the questions on the applications.
  3. Read through one or two completed applications (made-up or from previous semesters) and review them using the criteria previously introduced.  To do this, I showed on the screen the contents of each question and the relevant review criteria as I scored each question. It was important that I not only review the applications but also that I “think aloud” while doing so to help the reviewers understand how to apply the review criteria.
  4. Read through another completed application or two and have reviewers review them.  As I showed each question and relevant review criteria, I opened a poll question asking each reviewer to enter his or her score for that question. After a minute or two, I revealed the results of the poll question and we discussed the results. I also talked through how I would have scored that question to help reinforce the review criteria and clarify them. Some reviewers who scored the question differently compared to the rest of the group – and they knew that because I had shown the poll results – would ask questions and we would enter into a discussion to arrive at a consensus on how to score that question.

If this were content analysis, I would have repeated step 4 until we met a pre-determined interrater reliability index. But it’s not content analysis and it seemed unreasonable to ask the volunteer reviewers to spend that much time in training so we just reviewed one or two applications together.  After the training sessions, I sent to all of the reviewers (a) the link to the recording of the training session and (b) a copy of the PowerPoint slides.
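For what it’s worth, if you did want to treat this more like content analysis, the agreement check is easy to compute from the poll responses.  Here is a minimal sketch, with made-up scores and a hypothetical threshold, of the simplest version: average pairwise percent agreement across reviewers.  Nothing here comes from the actual GPSO process; it’s just one way the check could be done.

    # Minimal sketch with made-up data: average pairwise percent agreement
    # among reviewers for each scored question. The scores and the 0.80
    # threshold are hypothetical, not part of the actual GPSO process.
    from itertools import combinations

    # rows = reviewers, columns = questions on one sample application
    scores = [
        [3, 2, 4, 5],   # reviewer A
        [3, 2, 3, 5],   # reviewer B
        [3, 1, 4, 5],   # reviewer C
    ]

    def pairwise_agreement(score_matrix):
        """Proportion of reviewer pairs giving identical scores, averaged over questions."""
        num_questions = len(score_matrix[0])
        pairs = list(combinations(range(len(score_matrix)), 2))
        agreements = sum(
            score_matrix[a][q] == score_matrix[b][q]
            for q in range(num_questions)
            for a, b in pairs
        )
        return agreements / (len(pairs) * num_questions)

    THRESHOLD = 0.80   # hypothetical cut-off; repeat training until agreement exceeds it
    print(f"Average pairwise agreement: {pairwise_agreement(scores):.2f}")

A stricter approach would use a chance-corrected statistic such as Cohen’s kappa or Krippendorff’s alpha, but simple percent agreement is usually enough to tell you whether another round of practice scoring is needed.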

This process seemed to work very well. It was easier to schedule the training sessions because I only had to worry about finding the best date and time for reviewers.  To schedule the training sessions, I provided several choices on the reviewer application form and used the results to determine the date and time when the most reviewers were available, prioritizing the preferences of new reviewers who had never been through training. It was trivial to record the training sessions and allow everyone, attendees and non-attendees alike, to view the recordings. Most importantly, I could use poll questions during the training to accurately gauge whether my reviewers had arrived at a consensus understanding of the review criteria. Hiding the results of the poll questions as they were being answered ensured that the responses I received were not influenced by reviewers who had already submitted their response and showing the results allowed reviewers to determine if their understanding was accurate or not.

I don’t know if anyone else uses Connect or similar tools to train application reviewers in this manner. I assume that others do something similar. The basic process is amenable to contexts other than award applications. As long as the technology is accessible, review criteria are well-defined, sample documents are available (they can be made up or old documents can be used – with permission, of course!), and a sufficient number of attendees can be convinced to attend (instead of waiting for the recording), this should work reasonably well in other situations where a group of people is being asked to use common criteria to review a set of documents, e.g., conference program reviewers or search committees.  This process seemed to make my training sessions accessible and effective and I think they could work for others’ training sessions, too.

I apologize for not providing examples of these training sessions or training materials. I deleted most of my GPSO documents once I handed over my responsibilities to the new chair; there is so much confidential information in those documents that I don’t want access to them anymore! Even if I had access to the recordings and training documents, there is probably confidential material in them that would prevent me from making them publicly available anyway. Sorry! If you really want or need some examples, please let me know and I would happily mock some up for you.

Ongoing Research Into Student Affairs Technology History

Covers from old ACPA and NASPA conference proceedings. From upper-left, clockwise: NASPA 1930, NASPA 1950, ACPA 1942, ACPA 1932

I’ve written a few times about historical research I’ve done looking into how U.S. student affairs professionals have used and viewed technology throughout the 20th century.  Although I don’t know where my current job search will take me, I feel a responsibility to bring some closure to this research and then ensure it is somehow published or shared.

Much of my previous work was based on documents held at the National Student Affairs Archives at Bowling Green State University, especially the conference proceedings and programs for ACPA and NASPA.  My work is incomplete, however, because those (wonderful!) archives did not have most of the conference proceedings from the first half of the century.  However, another scholar told me that my own institution, Indiana University, has many of these proceedings.  Since I will probably be leaving Bloomington soon, I finally followed up on this tip and requested all of the conference proceedings in the IU library.  The two collections – IU and BGSU – complement each other very nicely, almost as if a single collection of all of the proceedings were divided evenly between the two libraries.  It would probably be a bibliographic faux pas to ask one of these libraries to donate their materials to the other one but it sure would be nice to have a nearly complete collection in one place.  At least the two universities are only a few hours apart so it’s not terribly burdensome for scholars who want to consult these materials.

I’ve only started reading through these documents and I’m already very glad that I requested them!   In just the handful of proceedings that I’ve read so far I’ve found interesting things such as:

  • Discussion of the negative effects of “mechanical devices” on education in 1928
  • A demonstration of IBM equipment for Deans of Men in 1950
  • A new program at the 1950 NASPA conference using audio recorders to collect and then distribute the distilled wisdom of its members.  In the opening session, NADAM President L. K. Neidlinger described this new program to attendees:

    You can also improve your mind and learn how to be a dean by going to the Recording Room, just off the lobby, at any time that suits your convenience, and asking the attendants there to hook you up to one of the tape recordings that we have been busy making last night and this morning. We are conducting there an interesting new experiment in convention technique. On each of several topics we have had a team of five deans record their experience and advice — all on the same tape. Anyone interested in these topics can pull up a chair, light a cigar, and listen at leisure to the advice of five colleagues who could not otherwise be interviewed so conveniently. He can then add his own comments by flipping a switch and talking. Furthermore, six months from now when you may have to educate a faculty committee on the facts of life about one of these topics, you will be able to write Fred Turner for the recording, borrow a machine, and bring these expert witnesses into your committee room.

  • A demonstration of the new Polaroid camera, with specific mention of its possible use in creating photographs for student IDs, in 1951

Even though I’ve just begun reading through these proceedings, I already have examples of (a) worry about the effects of technology on education and students, (b) discussions of the potential benefits of technology in student affairs administration, especially record keeping and processing, (c) demonstrations of new technology by vendors and pioneering institutions, and (d) innovative uses of technology initiated by members of the professional organizations themselves.  A history of regular and continued use of technology, including original innovations and cutting-edge uses, doesn’t seem to be part of the mainstream historical narrative of the student affairs profession but that seems to be the story I’m finding in the historical artifacts.

(Off-topic: Holy crap are these proceedings products of their times!  I knew that the history of these two professional organizations was very gendered given their historical roots but I didn’t expect the volume of casual sexism documented in these proceedings!  I did, however, expect some degree of racism and a large amount of homophobia and – sadly – my expectations have been met.  I’m not even looking for these things but they often come screaming out of the pages. I’m reminded of a moment in this story where a college student asks during a discussion about the Founding Fathers: “If the Founders loved the humanities so much, how come they treated the natives so badly?” It’s mentally and spiritually jarring to read pages and pages of passionate discussion about the importance of each student and their intellectual and moral development followed by a casual dismissal of the competence of deans of women or a reminder of the psychological and moral depravity of homosexuality. The incongruity and dissonance make me wonder what normal, accepted practices and beliefs we hold today will cause these “Holy crap!” moments for future generations when they read our e-mails and watch our videos.)