The first item on the agenda of this year’s State of the Net conference of specific interest to me and appropriate for this blog was a panel discussion of Internet content filtering. The panel was formally titled “Internet Copyright Filters: Finding the Balance” and it was moderated by David Sohn of the Center for Democracy & Technology. An audio recording of the panel can be found on the Net Caucus’ Web page.
The panelists were:
- Mia Garlick, YouTube: Garlick didn’t seem to play a very prominent role in this discussion. She seemed to be a bit overshadowed by some of the other personalities on stage.
- Greg Jackson, University of Chicago: Jackson is the CIO at the University of Chicago. He also spoke at one of the most recent congressional hearings on P2P.
- Gregory Marchwinski, Red Lambda, Inc.: Red Lambda is a company that has commercialized technology originally developed at the University of Florida to address online copyright infringement on its residential computer network. Marchwinski testified before Congress about a year ago in a hearing focusing on online copyright infringement on college and university campuses.
- Cary Sherman, Recording Industry Association of America (RIAA): Sherman is the president of the RIAA and he frequently appears on panels and before Congressional committees representing the recording industry.
- Gigi Sohn, Public Knowledge: Public Knowledge is a non-profit advocacy group that has often taken the lead on copyright-related issues and legislation. Sohn is their president and she was a very lively and commanding presence on the stage.
The discussion was a bit confusing as it wasn’t always clear if the panelists were addressing the topic of mandatory filtering (which almost everyone seemed to agree would be a bad idea, even Sherman) or the more general topic of filtering.
The panel opened with a brief introduction from each of the panelists. Sherman repeated the worn-out line that “piracy is devastating the industry” and advocated for filtering because it (a) is a very targeted approach and (b) can distinguish between infringing and non-infringing uses. Gigi Sohn was next in line and she laid out her organization’s opposition to mandatory Internet content filtering based on three points: (a) it would block legal speech, (b) it would not stop determined pirates (yes, she called them pirates; I was so disappointed that she ceded that ground right away and confused the issue from the very start), and (c) it would cause network degradation. Garlick then introduced herself as working for YouTube and then gave an overview of YouTube; I was a bit insulted by her overview but I guess there may have actually been people in the room who didn’t know about YouTube. The interesting bit from Marchwinski’s introduction was an acknowledgment that encryption is a big challenge for filtering. Jackson introduced himself by admitting that he agreed with much of Sherman’s position, particularly the idea that copyright infringement must be addressed with a multi-faceted approach.
Jackson went further, however, in raising two very interesting points directly relevant to higher education. First, he wondered aloud if, given the size and complexity of the network at the University of Chicago, it would be cheaper to acquire a blanket license for music and movies than to attempt to effectively filter that content. He returned to this point later in the discussion by comparing network filtering with other kinds of filtering such as spam filtering. The primary difference between spam filtering and content filtering, he explained, is that spam filtering is done at the checkpoint(s) where e-mail enters and leaves the institution’s network. Network filtering, however, would have to be performed on many thousands of network devices to be effective. However, Jackson’s point is valid only if we are seeking to filter content as it moves within the network; if we’re only interested in preventing content from entering or leaving the network, we can filter it at the chokepoints where the network connects to the Internet (or Internet2 or Lambda Rail or whatever) just as we do with packetshaping and other devices. Second, Jackson said that the majority of infringing content is not exchanged via P2P networks. I don’t know what research he was quoting and I need more detail to place his assertion in context. It sounds a bit fishy, but if his statistic is indeed limited strictly to P2P then we would also need data on other network-based or network-enabled infringement, because limiting the discussion to P2P misses the point.
Many of the panelists agreed that encryption is a looming problem. Marchwinski compared the current arms race to a balloon: if you push on the balloon in one place, it simply expands in other places. Sherman acknowledged the problem and raised DRM and applications on end users’ computers as a potential solution. However, he agreed that education is still the key (which makes one wonder: Why the hell isn’t the RIAA engaging in honest and effective education?). Gigi Sohn was quick to follow up that education must not solely focus on what one cannot do but must also include what one can do with copyrighted material. Sherman pleaded that we “not let fair use…be the excuse that stops the development of technology.” Jackson mentioned the University of Michigan’s “Be Aware You’re Uploading” program as an effective and interesting education effort.
Jackson raised several other interesting points during the discussion. One interesting datum is that he estimates that the University of Chicago spends between $100,000 and $200,000 responding to DMCA takedown notices (his own time, his staff’s time, and judicial affairs). He also reminded everyone that most organizations already practice some form of content filtering with our spam filters, anti-virus filters, and security-specific filters. The challenge is in how we decide what to filter.
One of the final points of discussion centered on potential First Amendment issues raised by an audience member. Gigi Sohn expressed that there may be implications even for non-governmental organizations given that copyright itself is enforced by the government. The larger point, she explained, is that “This is how people are communicating today. This is expression and it must be protected.”
There are issues that were not discussed during this panel that form important parts of the larger picture. Other panels discussed the concerns held by legislators and parents about content available to minors. The Morning Keynote by Rep. Mary Bono Mack (R-CA) focused heavily on intellectual property (but completely neglected fair use). Several speakers throughout the day were FCC commissioners and the prospect of government-mandated actions or restrictions seemed to loom over many discussions. However, it seemed to me that nearly everyone was in agreement that regulation of the Internet would be a bad idea (network neutrality crept into a few discussions but was strangely absent as a substantive issue the entire day).
It seems to me that the principal problems with Internet content filtering are:
- It will never be completely effective. This is not a show-stopper as “must be 100% effective” is an unreasonable expectation or standard of review.
- It will never be able to distinguish infringing uses from non-infringing uses. Fair use is hard. It’s been argued that it’s intentionally complex in many ways to ensure that there are human beings, presumably learned and educated human beings (i.e. lawyers and judges) involved to ensure both the copyright holder and the alleged infringer are protected. Unless we dramatically change the laws to simplify fair use (which would probably be a bad thing, on the whole) we’ll never be able to programmatically address fair use.
- Imposing filters and then requiring users to request exceptions, an approach advocated by many, seems to fundamentally and negatively affect innovation and creativity. Currently, we’re free to make use of copyrighted works without asking permission. We might be infringing on the copyright or we may be making fair use of it but the point is that we don’t have to ask permission; we can only be stopped afterwards and prevented from doing the same thing again. Requiring users to ask permission beforehand, even if permission were always given, would impose a barrier that is undesirable and harmful. Innovation and creativity must not be tied up by red tape and bureaucracy.
I haven’t covered all of the panel discussion and I feel that I haven’t covered it particularly well. I encourage you to listen to the audio of the discussion if you’re interested in this topic. I wouldn’t be at all surprised if special interests attempted to force colleges and universities to purchase, install, and use Internet content filtering. I would hope that such efforts would be defeated, but if the trendline holds true then we’re due for some more ill-advised attempts to address the issue of online copyright infringement on campuses. But, as Jackson mentioned throughout the panel, we are already filtering content on our campus networks; we need to figure out how best to decide what to filter.
By the way, I’ve been told that the House version of the Higher Education Act (HR 4137 the College Access and Affordability Act) might go to vote this week. Contact your representative and register your opinion.