Category Archives: Reflection

I’m Not A Programmer But Programming Skills Are Still Extremely Useful

I don't work in IT, software development, or anything even closely related to those fields, so I'm often surprised at how much programming I do in my daily work life.  At times I write scripts or light programs (e.g., this set of Excel macros), usually to save time and ensure accurate, well-documented, and reproducible results.  More often, I directly use some of the skills of programming, especially flow control and abstraction, to make tasks easier, more elegant, or even possible.
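
To give a flavor of what I mean by flow control and abstraction, here is a small, entirely hypothetical sketch in R of the kind of light scripting I'm describing: a repetitive chore (summarizing a folder of spreadsheet exports) abstracted into a reusable function and driven by a loop.  The file and column names are invented for illustration.

    # Abstraction: the per-file work lives in one reusable function.
    summarize_export <- function(path) {
      df <- read.csv(path)
      df <- df[!is.na(df$response), ]  # drop blank responses
      data.frame(file = basename(path),
                 n = nrow(df),
                 mean_response = mean(df$response))
    }

    # Flow control: apply that function to every export in the folder.
    files <- list.files("exports", pattern = "\\.csv$", full.names = TRUE)
    results <- do.call(rbind, lapply(files, summarize_export))
    write.csv(results, "exports_summary.csv", row.names = FALSE)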

When I first entered college in 1996, I began as a computer science major.  After a few years I changed my major because I was dissatisfied with the amount and kind of programming I was doing.  I've never looked back and I've never pined for my missed life of code slinging.  I have some sympathy for movements that purport to teach programming to anyone who is interested, but I don't believe that programming is an essential skill for every person in the 21st century any more than metal fabrication was an essential skill for every person in the 20th century.

A few concrete examples may be helpful.  First, I spend quite a bit of my time analyzing quantitative data, e.g., student grades and assessment data, student retention data, and survey responses.  I usually do that analysis (and much of the pre-analysis work such as data cleanup, creation of new variables, and aggregating and matching of different data sets) using SPSS, a statistical analysis program commonly used in the social sciences.  Although SPSS can be operated almost entirely using point-and-click menus, my real work is done using the program's programming language (called "syntax," although it's really just a scripting language).  This makes my processes (a) self-documenting and (b) replicable.  In other words, by using and saving SPSS syntax in organized ways I always know exactly what I did and I can easily make changes or corrections.
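
Here is the same principle sketched in R rather than SPSS syntax (again, the file and variable names are made up): every step of the cleanup, matching, and analysis is recorded in a commented script, so rerunning the file reproduces the entire process from the raw data — exactly what point-and-click work doesn't give you.

    # A minimal, hypothetical self-documenting analysis script.
    grades <- read.csv("grades_fall.csv")
    survey <- read.csv("survey_fall.csv")

    # Match the two data sets on a shared student identifier.
    merged <- merge(grades, survey, by = "student_id")

    # Create a new variable from an existing one.
    merged$passed <- merged$final_grade >= 60

    # Aggregate: pass rate by major, written out for the report.
    pass_rates <- aggregate(passed ~ major, data = merged, FUN = mean)
    write.csv(pass_rates, "pass_rates_by_major.csv", row.names = FALSE)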

Second, I seem to use programming logic quite often when working with larger surveys.  I've become quite good at using some of the more advanced features of Qualtrics, the online survey tool for which we have a site license.  I declare variables and pass them around between surveys and reports using Qualtrics's "embedded data" feature.  I also use some of the features of the tool that allow me to divide a given survey into different sections and selectively display only those sections that are relevant to a particular respondent.  Combining these features is allowing us to move a key assessment process for one of our academic departments from a cumbersome, entirely manual series of Excel spreadsheets and Word documents to a Web-based process that still requires some manual data entry but has built-in checks for data quality and largely automated reporting tools.
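
For readers who haven't used these features, here is a rough sketch of the underlying logic in R.  This is not how Qualtrics is actually configured (that happens through its web interface); the section names and respondent attributes are invented purely to illustrate the flow-control idea.

    # "Embedded data" behaves like variables attached to each respondent.
    respondent <- list(department = "Biology", role = "faculty")

    # Display logic: each survey section has a rule deciding who sees it.
    sections <- list(
      intro      = function(r) TRUE,                       # shown to everyone
      lab_report = function(r) r$department == "Biology",  # department-specific
      advising   = function(r) r$role == "faculty"         # role-specific
    )

    # Select only the sections relevant to this respondent.
    shown <- names(Filter(function(rule) rule(respondent), sections))
    print(shown)  # "intro" "lab_report" "advising"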

I'm not advocating that everyone must learn how to program.  I am advocating that those who regularly work with quantitative data – assessment folks, researchers, evaluators, analysts – learn some basic programming skills including flow control, abstraction, and uses of variables.  I spent part of my life trying to actively avoid programming and I've moved completely out of IT but programming skills have proven to be extremely valuable, useful, and sometimes essential.

The Pseudo-curriculum

I know this will be provocative for some of you, but lately when I've heard people use the phrase "co-curriculum" I've silently translated it in my head to "pseudo-curriculum."  I'll explain more below, but understand that I am not devaluing out-of-class activities but rather expressing frustration that we don't really value them.

My frustration here has been long simmering but two strands of experience and thought are mingling and bringing things into focus for me.

First, I'm teaching another graduate course in pedagogy this semester. Last semester we focused on smaller details of teaching and learning largely by examining teaching methods (e.g., problem-based teaching, service learning, team-based learning) and lesson plans using the Decoding the Disciplines approach. This semester, we're focusing on larger details of teaching and learning using a problem-based learning approach to build a course using backward design and the principles in How Learning Works: Seven Research-Based Principles for Smart Teaching. Specifically, my students are building a first-year experience course. (I chose that as the central problem because it's one of the few courses that cuts across all disciplines so everyone could work on the same thing.  I've taken and taught similar classes in the past where students each created their own course specific to their discipline and I want to see if this pedagogy course turns out any better if I have everyone creating and working on the same kind of course.)

Although our pedagogy classes have traditionally been aimed at graduate students, my colleagues and I have made a concerted effort to open them up to post-docs, university administrators, and others who have the interest and drive to fully participate throughout the entire semester.  This semester, I reached out to my colleagues in residence life and two of their staff are in this class.  The course is still primarily geared toward graduate students who will pursue tenure-track positions but the ideas and principles are widely applicable to human learning and teaching which of course is the aim of the co-curriculum, too.

Of course, my residence life students have brought unique views and ideas to the course.  Among them are two recurring concerns: (a) they don't have enough time with students for them to master – be introduced to, practice many times, and receive feedback about – skills and knowledge (as compared to courses that meet several times a week for several hours during a semester, or degree programs that span many courses over many semesters), and (b) students don't value or understand the skills and knowledge they should be acquiring and practicing in the residence hall co-curriculum.  Those are legitimate points and I understand and share their frustration.

Second, general education reform is in the air at my university.  There are plans and rumors, some of which have a very firm basis in reality, that we're about to make a serious run at updating, changing, or otherwise tackling general education.  Some of this is probably motivated by issues that we'll have to address next year when we write our Periodic Review Report, the document we write midway between each of our regional accreditation reviews that occur at ten-year intervals.  Some of it is probably motivated by our provost, who is relatively new but has been here long enough to begin to build and carry out his agenda.  In any case, it's got me thinking a lot about our general education requirements and the other things that we require students to successfully complete before we award them a degree.

Here is where these two strands of thought coalesce: If the so-called co-curriculum were really as highly valued as the curriculum, students would (a) have to successfully complete – with measurable goals and evidence that they've attained them – co-curricular requirements and (b) be able to meet graduation requirements such as general education requirements not only through coursework but also through rigorous co-curricular activities.  In other words, if we valued the co-curriculum then it would genuinely stand alongside the curriculum and be part of the credentialing process that is modern higher education.

Yes, that does happen to some degree even at my university.  Most first-year students are required to live on campus and all first-year students are required to complete a First-Year Experience Seminar, a one-credit pass/fail course.  But I imagine that, like at many colleges and universities that require students to live on campus, the requirement is driven as much by financial reasons (we have huge bills to pay on those large buildings!) as by educational ones.  And I can't really argue that our FYS course is part of the co-curriculum since the vast majority of those courses are taught by faculty; in fact, about 60% of students take specialized FYS courses offered within their major department and taught by their major faculty, often for 2-3 credits instead of the 1 credit of the "default" FYS course.

There may be other ways that the co-curriculum is genuinely valued at my university and I'm simply unaware of them.  I know that some other institutions have parts of the co-curriculum strongly integrated into their graduation requirements.  For example, a few universities such as Drexel and Northwestern have integrated cooperative education into their undergraduate experience in ways that make me very envious.  Some universities like Stanford have wonderfully advanced systems that allow and encourage students to add co-curricular activities (and artifacts!) to their official transcripts.

Until we meaningfully integrate the co-curriculum into the undergraduate experience by (a) requiring students to measurably master some skills or knowledge through out-of-class activities, or allowing students to meet existing requirements (e.g., general education requirements) through successful completion of rigorous out-of-class activities, and (b) including those activities on transcripts and in degree audits, I will continue to mentally translate "co-curriculum" to "pseudo-curriculum."  Unless we meaningfully substantiate those activities by holding those who participate in them accountable for meeting genuine, realistic educational goals, those activities will remain a false curriculum subordinate to the real one that we value with recognized metrics and credentials.

Are High Impact Practices Available Online?

I am still wrestling with my unease with MOOCs and I think I’ve finally figured out why: High impact educational practices, as we understand them today, are unlikely at best and impossible at worst in MOOCs and other similar online environments.

First, it’s helpful to understand that “high impact practice” (HIP) is a term of art.  Although the phrase itself is common enough, in the past ten years or so it has taken on special significance in U.S. higher education.  Popularized by George Kuh and emerging partly from research using data from the National Survey of Student Engagement (NSSE), the term has come to mean a particular set of activities that many higher education researchers believe are especially effective in promoting important and lasting changes in undergraduate students: First-Year Seminars and Experiences, Common Intellectual Experiences (i.e., core curricula), Learning Communities, Writing-Intensive Courses, Collaborative Assignments and Projects, Undergraduate Research, Diversity/Global Learning, Service Learning, Community-Based Learning, Internships, and Capstone Courses and Projects.

Unfortunately, we sometimes place too much focus on these particular activities without understanding why these activities have a high impact.  As originally described by Kuh in 2007, these practices share six characteristics:

  1. HIPs “demand that students devote considerable amounts of time and effort to purposeful tasks” (p. 7)
  2. HIPs place students in circumstances that require they “interact with faculty and peers about substantive matter” (p. 7)
  3. HIPs greatly increase the likelihood that students will interact with people who are different from themselves
  4. HIPs provide students with very frequent – sometimes continuous – feedback from faculty and peers
  5. HIPs require students to operate in intellectually complex ways by connecting knowledge from different courses and applying it in different contexts, e.g., confronting complex real-world issues or investigating unfamiliar research problems
  6. HIPs occur in the context of a “coherent, academically challenging curriculum” (p. 8)

I am particularly interested in these characteristics of high impact practices as I will be helping lead a discussion on my campus next month focused on student engagement.  Most of the participants will be faculty, and much of our attention will be on activities that faculty are using or can use in their curricula to promote student engagement.  Given that, I don’t think it would be helpful to dwell on the specific activities identified as HIPs, as those are often beyond the resources and purview of an individual faculty member.  Instead, we will focus on why those activities have a high impact so we can apply those principles to the activities within the power and resources of individual faculty.

That is what was at the forefront of my mind when I “attended” an EDUCAUSE Learning Initiative (ELI) online conference last week that focused on MOOCs.  The conference had some very active discussions among participants, and as I participated in those discussions it occurred to me that one of the primary reasons I am uncomfortable with MOOCs is that it is difficult or impossible to apply much of what we know about good teaching in that environment.

Look back up at those six principles of high impact practices.  How do we apply those principles in a MOOC?  More pointedly, can we apply those principles in a MOOC?  I despair that the answer is mostly “no.”  I pray that it is a simple lack of imagination on my part, a misunderstanding of what we can do in a MOOC, or a fatal flaw of the dominant MOOC model that others will quickly recognize and either fix or abandon.  I also confess that I don’t completely understand all of the discussions about “xMOOCs” and “cMOOCs” on anything but a very theoretical and abstract level, and I have a sneaking suspicion that I’m missing something very important in how cMOOCs address some of these principles.

There is another interesting and hopeful way to think about this.  Another ELI conference attendee – I’m sorry that I don’t remember who – suggested that there may be other paradigms of effective educational practices that MOOCs might better fit.  Although I am a little bit skeptical that our understanding of effective education is going to be radically upended, this recommendation not to be too constrained by our current thinking is a very good one.  In fact, that is one important reason why I will be trying to steer our discussion here on my campus next month away from the specific activities and toward the broader principles, so we can compare our thinking about student engagement with that of others.  The idea isn’t to impose the model on my campus but to use it as a common starting point that must be adapted to our unique needs and resources.

That, of course, is what we’ll need to do with MOOCs: use our best understanding of effective teaching and shape it to this unique environment with its unique affordances.  I don’t know how to do that and I don’t know if that is what is being done.  I worry that much of what is being done is not methodical and not built on what we know about how people learn.  I am especially skeptical that we can provide the kind of demanding and socially and intellectually connected experiences that we know provide some of the best learning.  I hope that people smarter than I am are figuring this out, though, and working out how MOOCs can provide high impact educational practices.

Advantages and Challenges of Tech Skills in a Non-Tech Job

[Image: posterboard showing a draft sitemap for part of a new website – the initial planning stages of our new website.]

When I was on the job market last year, I was not interested in a job in IT.  Although I’m a little bit rusty in some areas after spending five years in a PhD program, I have the skills and background to work in IT, as I’ve done in previous jobs.  But I was interested primarily in faculty development and assessment jobs.  For many of those positions, I deliberately downplayed my technology background because I wanted to be interviewed and hired for my research and assessment skills.  I seem to have been successful because I’ve found a wonderful research analyst position in a teaching and assessment shop where none of my job responsibilities involve technology maintenance, development, management, or support.

That has made the last couple of weeks a little bit strange as I’ve taken on significant responsibility in planning and supporting my center’s website as we finish merging our old websites and move them to a new content management system. This is the kind of project I was trying to avoid because it’s completely outside my current job description and I fear being typecast in this role. But you know what? It’s working out just fine. I am able to greatly help my colleagues and they are supportive and grateful but I don’t fear being relegated to being the unit’s tech support.

This is possible because of two things.  First, I have wonderful coworkers who are very supportive.  They are supportive of my growth as a higher education scholar, researcher, and faculty developer.  They are also very respectful about how my time is used and the kinds of tasks they ask me to take on.  Second, I have been very open about setting boundaries.  I am always happy to help my colleagues with technical issues, but they are also okay with going through regular channels for larger, more complex issues even though I could spend time solving many of those, too.  Because I have been very open with my colleagues about how I would like to use my experiences and knowledge, and because my colleagues are wonderful, they have even shielded me a bit to ensure that others don’t try to take advantage of me.  For example, our official tech support warmed up to me very quickly (e.g., it only took a few minutes of me asking the right questions to be granted local admin access to my computer, something that apparently is a rarity here).  Despite that, my colleagues have intentionally made someone else our unit’s official point of contact with tech support to ensure they would not rely on me to solve problems or do extra work.

This seems to be working out well.  My background gives me the knowledge to ask many of the right questions when we’re dealing with technology; e.g., I can “geek out” with a colleague in IT to more effectively and efficiently probe for information as we figure out which content management system to use for our new website.  My skills let me solve little problems in our office very quickly or recognize when problems are out of our control or even unsolvable.  My experience guides me to help my colleagues make wise decisions that will be maintainable into the future even when I no longer work here; e.g., we’re going to move to Qualtrics as our event registration system instead of using custom-built Google Forms that require significant technical skill to use and maintain effectively.

You can take the computer geek out of IT but apparently you can’t take the IT out of the computer geek. And I’m becoming okay with that.

Data Analysis MOOC Week 3: I’m a Dropout

Despite my best intentions, I have become another MOOC dropout.  Why am I not continuing to participate in this free course?

  1. The format isn’t compelling.  The course is primarily built around four components: video lectures and notes, weekly quizzes, a discussion board, and two peer-graded assignments.  The lectures are alright, and although there are many other online R resources it’s nice to have concise descriptions of R procedures specifically linked to data analysis.  The discussion board is also helpful, but there are many other places to find help with R.  As discussed in my previous post, the weekly quizzes are very disappointing: they are the primary means by which students in this course practice what they learn, yet they offer very, very little feedback.  My biggest regret is that I won’t experience the peer-graded assignments.  While the idea of requiring students to grade one another’s work is likely driven largely by the logistics of a MOOC, peer-graded assignments can be very powerful and worthwhile even in small classes.  That these assignments are the only non-quiz activities in the course is disappointing, especially since there are only two of them.  Although it will be helpful that each student should receive feedback from several classmates (if it’s possible, I might provide feedback on the reports for some of my classmates even though I won’t be writing my own), it often takes more than two attempts for students to learn and begin to master new skills.
  2. Except for the peer-graded reports, there seems to be little reason for this course to be on a lockstep eight-week schedule.  I might be able to stay with it if the timing were more flexible.  Even in the first three weeks of the course I’m having some trouble consistently making time to view all of the videos.  I had planned to do this all at work, as my supervisor supports this as important and valuable professional development, but I’m having trouble doing that because it’s sometimes difficult to carve out the time and I feel guilty watching online videos at work for a non-credit course when I feel like I should be doing something more (visibly and authentically) productive.
  3. I can’t convince myself to participate in the two peer-graded reports, the only meaningful assignments in this course.  This is linked directly to the material of this specific course and is not a criticism of the course itself. I simply can’t muster the will to conduct additional data analysis and write additional reports for this course when those are two of my primary job duties.  It’s not that I don’t think that I could learn from the activities, develop new skills, and become a better data analyst and writer.  I just can’t bring myself to spend so much time analyzing data and writing reports unrelated to either my job or my research.  I am disappointed as I was looking forward to these substantive activities, especially being able to receive feedback from others and seeing how others approached the same activities.

Although I’m disappointed to have decided to not continue with the activities of this MOOC, I am happy to have enrolled and tried it out.  I will continue to download the course materials so I can reference them when I am ready to put them into practice in meaningful ways.

I have very mixed feelings about the broader concept of MOOCs.  It would take an extraordinary effort for an online course, especially a MOOC, to match the quality of the best face-to-face courses.  But the reality is that few face-to-face courses are “the best.”  Although the dominant MOOC model seems to mimic the worst lecture courses in traditional universities, even the worst course is sometimes good enough, especially when the alternative to a crappy, frustrating, and largely self-driven education is no education at all.

Data Analysis MOOC Week 2: Muddling Through Frustration

I have watched the online videos and successfully completed the quiz for week 2 of the data analysis MOOC in which I am enrolled.  I struggled quite a bit with some of the R syntax, and that made the quiz a very frustrating experience.  I have two observations to share about what I learned this week regarding the format of the course.

First, I am disappointed that so far the only opportunity for students to practice what is being taught and receive feedback is the weekly quiz.  I was able to muddle through things enough to get answers that matched the response options for this week’s multiple-choice quiz, but despite answering all questions correctly I’m still very unsure of much of the content – I just know that I happened to somehow end up with answers that matched some of the ones included in the quiz.  Some of this is simply due to my lack of experience with R and its steep learning curve.  But much of it is due to the fact that the multiple-choice quiz was the only opportunity to practice with any semblance of feedback, and that feedback was restricted to an anemic “correct” or “incorrect” for each question and nothing more.

Yes, I can practice on my own some of the skills taught in this class.  This is certainly the case if I want to focus solely on learning how to use R – syntax, configuration, functionality, etc. – as the language provides immediate feedback in the form of error messages or output.  But if that is the focus, and if that’s sufficient to learn the skills, then why do we need an organized course instead of just a course packet or a list of recommended self-guided topics and exercises?
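
To illustrate what that self-directed feedback looks like, here are a couple of hypothetical moments of practicing R alone at the console – the language tells you immediately that something went wrong, but not what you misunderstood or how to fix your thinking:

    # R's built-in feedback: immediate, but terse and undirected.
    "1" + 1
    # Error in "1" + 1 : non-numeric argument to binary operator

    mean(c(1, 2, NA))
    # [1] NA  -- a silent surprise unless you already know about na.rm
    mean(c(1, 2, NA), na.rm = TRUE)
    # [1] 1.5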

What distinguishes an organized, well-taught class from a self-taught topic is that a class has an expert who not only makes their thinking explicit but also offers targeted feedback for students as they practice the skills they are learning.  It’s conceivable that some skills could be taught using sophisticated, automated tools if we have a deep enough understanding of how people typically learn those skills that we can programmatically recognize mistakes and misunderstandings and provide appropriate, specific feedback.  Sometimes this can be done to a (very) limited degree with appropriately designed multiple-choice instruments where the incorrect responses are designed to be diagnostic, i.e., wrong answers aren’t merely incorrect but are designed to identify particular kinds of mistakes or misunderstandings.  That seems to be the case for some of the questions and answers in this MOOC, but we’re not provided with any of the related feedback to help us understand what common mistake we may have made, how we might be misunderstanding the issue, and how we can work to correct our thinking.
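
Here is a sketch of what diagnostic distractors could look like, using an invented R question.  None of this reflects the MOOC’s actual quiz engine; it’s just the principle expressed in code.

    # Hypothetical question: "Which call returns the mean of each column
    # of a numeric matrix m?"  Each wrong answer maps to the specific
    # misunderstanding it usually signals.
    feedback <- list(
      a = "colMeans(m) averages each column of a matrix.",
      b = "rowMeans(m) averages rows; you may have confused rows with columns.",
      c = "mean(m) collapses the whole matrix to a single grand mean.",
      d = "colSums(m) adds rather than averages; check for a missing division."
    )

    grade_response <- function(choice, key = "a") {
      status <- if (choice == key) "Correct." else "Incorrect."
      message(status, " ", feedback[[choice]])
    }

    grade_response("c")
    # Incorrect. mean(m) collapses the whole matrix to a single grand mean.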

Second, the size of the course requires innovative ways to provide support for students and this course seems to rely heavily on the course discussion board.  This is an observation, not a criticism. I’m quite comfortable using that medium as I’ve been using online discussion boards since the early 1990s when they were one of the primary draws for dial-up bulletin board systems (with the other major draw being online “door” games).  I don’t know how well this works for other students, however, as I don’t want to make assumptions about their experiences, skills, and cultures.  It’s probably not a big deal; my concern here is very minor and more of a curiosity about how other students experience and use (or don’t use) the discussion board. (In other situations I would be concerned about those who have poor or no Internet access or those who have little comfort and experience with the Internet but it’s reasonable to expect students who enroll in an online course to have sufficient Internet access and skills. I’m not suggesting that everyone has the access and skills to enroll in an online course, merely that those who are already enrolled in one presumably have the required access and skills.)

Data Analysis MOOC Week 1: I’m Going to Hate This

This semester, I have signed up for a data analysis class taught on Coursera, a massive open online course (MOOC).  I’m tech savvy and well educated, but it seems like the most responsible way for me to really learn about MOOCs is to gain some firsthand experience.  I also hope to learn some new data analysis techniques and ideas in this course.  The course will use R to analyze data, so it will also be good to expand my (very limited) skills and knowledge with that powerful tool.

Going into this, I am very skeptical of what I understand the typical MOOC model to be: instruction occurring primarily through pre-recorded videos and quizzes, with a discussion board as the primary means of communication between students and faculty.  I hope I’m wrong either about the model of instruction or about its effectiveness.  As an educator, I believe (and am supported by significant evidence) that the best learning occurs when experts make their thinking explicit through demonstration and give learners multiple opportunities for focused practice and feedback.  So my skepticism about the effectiveness of videos and quizzes as learning and teaching tools can best be summed up as: “Telling is not teaching.”  (Note that this applies just as forcefully to passive lecturing in physical classrooms!)

I’ve just started to get into the material for this course and so far it looks like my low expectations are going to be met: the course is built heavily around pre-recorded videos as the faculty’s way of teaching, with weekly online quizzes and two peer-graded assignments as the only opportunities for us to “practice” what we are “learning.”  I hope I’m wrong and this proves to be much more enjoyable and rewarding than I think it will be!

New NSSE Survey and Technology Questions

I’m super excited that my colleagues have finally made the new version of the National Survey of Student Engagement (NSSE) publicly available!  We’ve spent a lot of time working on this over the past 3-4 years, including focus groups, interviews, two pilot administrations, tons of literature review and data analysis, (seemingly) thousands of meetings, and many other events and undertakings.  I’ve been incredibly lucky to have been part of this process from nearly the beginning, and I’ve learned a lot about survey development and project management.  I’m leaving NSSE at the end of the month, so although I won’t be here when the new survey is administered next spring, I’m happy to still be here to see the final version.

I’m particularly excited that the technology module (an optional set of questions) has made it through all of our testing and will be part of the new survey.  There are other cool modules, but this one has been my baby for over two years.  My colleagues here at NSSE – Allison and Heather – and my colleagues at EDUCAUSE – Eden and Pam – have been wonderful collaborators, and I hope that they have had half as much fun and fulfillment working on these questions as I did.  It’s poignant to have spent so much time on this project only to hand it off to others just as it sees the light of day, but I know it’s in good hands.  I am very hopeful that a significant number of institutions will choose to use this module and that we will continue to add to what we know about the role and impact of technology in U.S. and Canadian higher education.

Throughout all of this, I’ve remained especially thankful to have been so involved in the development of this new survey as a graduate student.  Although I work half as many hours as the full-time doctorate-possessing research analysts, they have been very open about allowing me to be involved and never shied away from adding me to projects and giving me significant responsibilities.  I was never treated as “just a grad student” but as a colleague, just one who worked fewer hours and had some different responsibilities.  Consequently, I had genuine responsibilities and made significant, meaningful contributions; I can honestly point to the survey and see my own fingerprints on some parts of it!  When I speak about meaningful educational experiences in the future, I’ll certainly think of this one as an excellent example.  And I will work to ensure that my students and colleagues can have similar experiences that allow them to learn, grow, and meaningfully contribute by performing important work with trust and support.

Media Spin and Attention Grabbing Headlines

The Washington Post published a story yesterday describing some research that says that college students today study less than college students in the past.   The story is largely based on a tiny bit of NSSE data that we first published several months ago describing self-reported time spent studying as it differs across majors.  At the moment, I’m less interested in the data and more interested in how it’s being reported and described.

First, I’m a bit amused that this is suddenly a hot topic given that the information was released 6 months ago.  In fact, it was covered very prominently in November by little-known websites like the New York Times, USA Today, and Chronicle of Higher Education.  I don’t know why the Post decided to write a story about this now (I suspect it has to do with an upcoming conference of higher education researchers, a conference heavily attended by my NSSE colleagues and one at which we frequently present new research).  But it’s amusing and informative that one story written by the Washington Post has set off a flood of blog posts and “news stories” about something that is old news.  Yes, I know that it’s still interesting and pertinent information but this seems to reinforce the sad fact that many blogs and “news sites” are very dependent on traditional media for content, even when that content has been available for months.

Second, I’m amused and saddened by the headlines that people are using to describe this research.  I know that many of the websites listed below are second- or third-rate and use headlines like these just to get attention (which drives up traffic and ad revenue – and which makes me a bit ashamed to be adding to their traffic and ad revenue!), but it still makes me sad.  Some examples:

  1. “Is college too easy? As study time falls, debate rises.”  This is the original Washington Post article.  It has a fairly well balanced headline.  It’s not over-the-top and it even notes that the issue is not settled as people debate it.
  2. “Is College Hard? Students Are Studying Less, Says Survey”  The Huffington Post’s headline isn’t too far from the one used by the Washington Post.  Although I loathe the Huffington Post and how the vast majority of its content is blatantly derivative and unoriginal, this is a decent little summary of the Washington Post article and an alright headline.
  3. “Laid-Back Higher Ed”  This is how The Innovation Files describes the Washington Post article and the research it covers.  Not horrible but not very good either.  At least it’s not as bad as…
  4. “Fun Time Is Replacing Study Time in College”  I don’t know anything about FlaglerLive.com, but based on this ridiculous and inaccurate headline and blog post I won’t be spending any time there.  I’m particularly impressed by the figure that they copied directly out of the NSSE 2011 Annual Results that they claim is “© FlaglerLive.”  Classy.


Please Step Away From the Infographic!

I’ve tried very hard to be nice, but I can’t bite my tongue any longer: Please, stop it with the infographics.  Most of them are bad.  If I were still a bratty 15-year-old, I would dryly say that “I feel dumber for having read that” after seeing most infographics.  But I’ll be more professional and offer some specific criticisms.

Most infographics:

  1. Obliterate nuance and ignore subtleties and differences by carelessly aggregating many different sources of information.  By no means am I opposed to integrating knowledge and synthesizing data from multiple sources!  But it must be done carefully because it’s rare that different studies or sources of data align well.  When it’s done carelessly we can draw false conclusions.  These problems compound as more sources are thoughtlessly tossed together until we’re saying things that we simply don’t know are true.
  2. Don’t tell us where the data come from.  Sure, many infographics have a list of sources at the bottom.  But most of the time that’s all we get: an unordered list that doesn’t tell us which bits of information came from which sources.  I guess that kind of list is better than nothing, but not by much.  This is quite puzzling and frustrating because it seems like such an easy thing to fix.  Infographic designers, please look up “footnotes” and “endnotes” because this is a problem we solved a long time ago.
  3. Don’t need to exist in the first place because the “graphics” add nothing to the “information” being conveyed.  I know that infographics are the hip, new thing (I know they’re neither hip nor new – play along because many people still believe that!), but if your message can be better communicated through a different medium then you’re hurting yourself and impeding your message by forcing it into an unhelpful series of “graphics.”

Of course, I’m not the first one to whine about the infographic plague.  For example, Megan McArdle is spot on when she notes that most infographics are created by hacks who haven’t done any research or produced anything useful but want to convince you that they’re experts so you’ll hire them or buy something from them.  I’m also sure that someone has eviscerated the banal characteristics of the infographic genre (e.g., a color palette lifted straight from the early-to-mid 2000s Web 2.0 explosion, percentage values liberally scattered about in large fonts).

A great (?) example of a terrible infographic is this one recently published by Mashable.  It meets all three of the criteria listed above.  Sadly, most infographics I’ve seen meet at least two if not all three of those criteria.

But not all infographics are terrible.  It’s very simple, but this one recently published by Bloomberg is effective and informative.  The infographic that is displayed when you click on the “Cost to students & school” button on the left is okay.  But the bar graphs displayed when you click on the “Conference comparison” button are very informative and useful.

Before you make your next infographic or start passing around a link to an infographic, please consider whether the infographic avoids the three pitfalls listed above.  If it doesn’t, please step away from the infographic!