Hard Questions for #iLaw2011's Freedom of Information/Arab Spring Sessions

We’ve revived the iLaw program after a five-year hiatus. This year, it’s an experiment in teaching at Harvard Law School: part class (for about 125 students) and part conference (with friends from around the world here for the week). And JZ has taken the baton from Terry Fisher as our iLaw Chair.  An exciting day.

I’ve been preparing for two sessions on Day 1: “Freedom of Expression and Online Liberty” and then a case study on the Arab Spring (which will feature, among others, our colleague Nagla Rizk of the American University in Cairo). I’ve been thinking about some of the hard questions that I’m hoping we’ll take up during those sessions.

– What effect does a total shutdown of the network have on protests? I’ve been enjoying reading and thinking about this article on SSRN.  The author, Navid Hassanpour, argues (from the abstract): “I argue that … sudden interruption of mass communication accelerates revolutionary mobilization and proliferates decentralized contention.”

– We’ve assigned two chapters from Yochai Benkler’s landmark book, The Wealth of Networks (the introduction and the first 22 pages of chapter 7, which you can read freely online).  I am trying to figure out how well Yochai’s theoretical framework from a few years ago is holding up.  So far, quite well, I think.  The examples in the second chapter that we assigned – Sinclair Broadcasting and Diebold – feel distant from the Arab Spring and Wikileaks examples that are front-of-mind today.  But the essential teachings seem to be holding up very well.  How might we add to the wiki, as it were, of WoN, knowing what we now know?  (Another way to look at this question, riffing off of something Yochai hits in his own lecture: what was the role of Al-Jazeera and other big media outlets, in combination with the amateur media and organizers?)

– We have gotten very good at studying some aspects of the Internet, as a network and as a social/political/cultural space.  We can show what the network of bloggers or Twitterers looks like in a given linguistic culture.  We can show which web sites are censored where around the world (see the ONI).  We can survey and interview people about their online (and offline) behaviors.  But lots of things move very fast online and in digital culture, and it’s hard to keep up, in terms of developing good methods and deploying them.  What are the things that we’d like to be able to know about that we haven’t yet learned how to study?  Plainly, activity within closed networks like Facebook is a problem: lots is happening there, and surveys of users can help, but we can’t do much in terms of getting at Facebook usage patterns through technology (and there are privacy problems associated with doing so, even if we could).  Mobile is another: our testing of Internet filtering, for instance, is mostly limited to standard web-browsing activity of the HTTP GET request variety.  What else do we want/need to know empirically, to understand politics, activism, and democracy in a networked world?
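To make that last point concrete, here is a minimal sketch of what the standard HTTP GET style of filtering test looks like. This is my own illustration, not ONI’s actual tooling; the URLs and block-page marker string are hypothetical, since real block pages vary by country and ISP. The method is simply to request a page from a vantage point inside the country, classify what comes back, and compare against a fetch from an unfiltered network.

```python
import urllib.request

# Hypothetical marker text for a block page; real filtering regimes serve
# different pages (or silently reset/drop connections instead).
BLOCKPAGE_MARKERS = ["access to this site has been blocked"]

def probe(url, timeout=10):
    """Fetch a URL and return a coarse classification of the response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read(100_000).decode("utf-8", errors="replace")
    except Exception as exc:
        # Timeouts and connection resets are themselves signals: some
        # filters drop traffic rather than serve an explicit block page.
        return ("error", str(exc))
    if any(marker in body.lower() for marker in BLOCKPAGE_MARKERS):
        return ("blockpage", body[:200])
    return ("ok", body[:200])

if __name__ == "__main__":
    for url in ["http://example.com/", "http://www.youtube.com/"]:
        status, _detail = probe(url)
        print(url, "->", status)
```

Note how little this covers: only web pages fetched over HTTP. Traffic inside closed platforms and on mobile networks never passes through a test like this, which is exactly the measurement gap described above.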

– How much did the demographic element — a large youth population in several Middle East/North African cultures — matter, if at all, with respect to the Arab Spring?  How important were the skills, among elite youth primarily, to use social media as part of their organizing?

– How did the online organizing of the Arab Spring mesh with the offline activism in the streets?

– How much did the regional element matter, i.e., the domino quality to the uprisings?  Does this have anything to do with use of the digital networks, shared language, and social/cultural solidarity that crossed geo-political boundaries?

– What, if anything, does the Wikileaks story have to do with the Arab Spring story?  Larry Lessig pulls them quickly together; Nagla Rizk and Lina Attalah balk at this characterization.  We’ll dig in this afternoon.

– [Student-suggested topic #1, via Twitter:] What’s the effect of the US State Department’s Internet Freedom strategy?

– [Student-suggested topic #2, via Twitter:] Does the distribution/democratization of channels of discourse undercut rather than support dissent, organizing, etc.?

There’s much more to unpack, but these are some of the things in my mind…

Research Confidential and Surveying Bloggers

In our research methods seminar this evening at the Berkman Center, we got into a spirited conversation about the challenges of surveying bloggers.  In this seminar, we’ve been working primarily from a text called Research Confidential, edited by Eszter Hargittai (who happens to be my co-teacher in this experimental class, taught concurrently, and by video-conference, between Northwestern and Harvard). The book is a great jumping-off point for conversations about problems in research methods.

The two chapters we’ve read for this week were both excellent: Gina Walejko’s “Online Survey: Instant Publication, Instant Mistake, All of the Above” and Dmitri Williams and Li Xiong’s “Herding Cats Online: Real Studies of Virtual Communities.”  Both chapters are compelling (as are the others that we’ve read for this course).  They tell useful stories about specific research projects that the authors conducted related to populations active online.  In support of our discussion about surveys in class, these two chapters tee up many of the issues that we wanted to raise in this conversation.  Gina also came to class to discuss her chapter with us, which was amazing.  (Come to think of it, I would have liked to meet the two authors of the second chapter as well; they wrote some truly funny lines into the otherwise very serious text.)

In a previous class, we started with Eszter’s introductory chapter, “Doing Empirical Social Science Research,” as well as Christian Sandvig’s “How Technical is Technology Research? Acquiring and Deploying Technical Knowledge in Social Research Projects.”  These two chapters were a terrific way to start the course; I’d recommend the pairing as a possible starting point for getting into the book, even though they’re not presented in that order (with no disrespect meant for those who chose the chapter order in the book itself!).

While many of Research Confidential’s chapters bear on the special problems prompted by use of the Internet and the special opportunities that Internet-related methods present, the book strikes me as a very useful read for anyone conducting research in today’s world.  I strongly recommend it.  The book’s format makes it unusually accessible and readable: unlike most methods textbooks, it is a series of narratives by young researchers about their experiences in approaching research problems, some of them related to the Internet and others not so technical in nature.  As a researcher, I learned a great deal; as a reader, I thoroughly enjoyed the book’s stories.

Solicitor General's Brief in Cablevision Case

The United States Solicitor General’s office has filed its brief (posted online here) in the long-running RS-DVR matter, popularly referred to as the “Cablevision” case. The brief is terrific. The United States takes the position that the Supreme Court should not review the case, which had been decided unanimously by the Second Circuit in favor of the cable companies. This case has significant copyright implications, as well as implications for the balance of power between cable providers and those who hold copyright interests in television and movie programming.

The Solicitor General takes the position that the case did not meet the traditional standard for the Supreme Court to grant cert and that the Second Circuit “reasonably and narrowly resolved the issues” before it. The reasoning in the brief is persuasive.

For more information: Several news outlets have the story. (The Reuters piece says that the SG “denied” the plaintiffs’ request for a hearing, which — at least in technical terms — overstates the matter a bit by implying decision-making authority in the SG. Though the Court asked for the SG’s opinion, the Court reserves the right to decide whether or not to hear the case. Practically speaking, though, a grant of cert seems somewhat unlikely now, after the filing of this strong brief.) For previous coverage that touches on the procedural aspects of the case, see, e.g., an article by the LA Times’s David G. Savage from January 2009. Also, see the press release and summary page on the case published by Public Knowledge, which has worked on this matter; Gigi Sohn, its president, says she is pleased with the SG’s brief.

By way of disclosure: the United States Solicitor General and counsel of record in this matter, Elena Kagan, was my boss during the six years she served as dean of Harvard Law School prior to her appointment to the Obama Administration.

Online Intermediaries

Issues swirling around Craigslist have given rise to a new round of consideration of our liability scheme for online intermediaries. David Ardia — a very thoughtful observer of this scene, a Berkman fellow, and director of our Citizen Media Law Project — comments on a podcast at Legal Talk Network. The themes are similar to those that Adam Thierer and I took up in a debate at Ars Technica recently.

This discussion of intermediary liability is only going to get more important as time passes. Follow along as the issue develops at CMLP’s new Section 230 site.

Pushing Forward on the Legal Casebook Idea

There’s a lot of energy coming out of the Collins/Skover/Rubin/Testye workshop of a few weekends ago on the next-generation legal casebook.  It’s the sign of a great gathering: after you’ve landed at your home airport, you are still thinking about the issues that you were kicking around at the conference.  I think it’s also a sign of the strength of the idea: something of this sort *will* happen if we keep that energy up. 

One follow-up is a call that Gene Koo and CALI have organized to see if cyberlaw professors would want to be first up.  It’s a very practical next step, and one with promise.  As one such cyberlaw prof, I’m definitely in.  This specific project is an obvious follow-up to much of what JZ has been working on for years, through H2O and otherwise.

Turkey at the Edge

The people of Turkey are facing a stark choice: will they continue to have a mostly free and open Internet, or will they join the two dozen states around the world that filter the content that their citizens see?

Over the past two days, I’ve been here in Turkey to talk about our new book (written by the whole OpenNet Initiative team), called Access Denied. The book describes the growth of Internet filtering around the world, from only about two states in 2002 to more than two dozen in 2007. I’ve been welcomed by many serious, smart people in Ankara and Istanbul who are grappling with this issue, and to whom I’ve handed over a copy of the new book — the first copies I’ve had my hands on.

This question for Turkey runs deep, it seems, from what I’m hearing. As it has been described to me, the state is on the knife’s edge, between one world and another, just as Istanbul sits, on the Bosporus, at the juncture between “East and West.”

Our maps of state-mandated Internet filtering on the ONI site describe Turkey’s situation graphically. The majority of those states that filter the net extensively lie to its east and south; its neighbors in Europe filter the Internet, though much more selectively (Nazi paraphernalia in Germany and France, e.g., and child pornography in northern Europe; in the U.S., we certainly filter at the PC level in schools and libraries, though not on a state-mandated basis at the level of publicly-accessible ISPs). It’s not that there are no Internet restrictions in the states of Europe and North America, nor that these places necessarily have it completely right (we don’t). It’s the process for removing harmful material, the technical approach that keeps the content from viewers (or stops publishers from posting it), and the scale of information blockages that differ. We’ll learn a lot from how things turn out here in Turkey in the months to come.

An open Internet brings with it many wonderful things: access to knowledge, more voices telling more stories from more places, new avenues for free expression and association, global connections between cultures, and massive gains in productivity and innovation. The web 2.0 era, with more people using participatory media, brings with it yet more of these positive things.

Widespread use of the Internet also gives rise to challenging content along with its democratic and economic gains. As Turkey looks ahead toward the day when it joins the European Union once and for all, one of the many policy questions on the national agenda is whether and how to filter the Internet. There is sensitivity around content of various sorts: criticism of the republic’s founder, Mustafa Kemal Atatürk; gambling; and obscenity top the list. The parliament passed a law earlier in 2007 that gives a government authority a broad mandate to filter content of this sort from the Internet. To date, I’m told, about 10 orders have been issued by this authority, and an additional 40 by a court, to filter content. The process is only a few months old; much remains to be learned about how this law, known as “5651,” will be implemented over time.

The most high-profile filtering has been of the popular video-sharing site YouTube. Twice in the past few months, the authority has sent word to the 73 or so Turkish ISPs to block access, at the domain level, to all of YouTube. These blocks have been issued in response to complaints about videos posted to YouTube that were held to be derogatory toward the founder, Atatürk. The blocks have lasted about 72 hours.

After learning from the court which videos were at issue, YouTube has apparently removed them, and service has subsequently been restored. YouTube has been perfectly accessible on the connections I’ve had in Istanbul and Ankara in the past few days.

During this trip, I’ve been hosted by the Internet Association here, known as TBD, and others who have helped to set up meetings with many people — in industry, in government, in journalism, and in academia — who are puzzling over this issue. The challenges of this new law, 5651, are plain:

– The law gives very broad authority to filter the net. It places this power in a single authority, as well as in the courts. It is unclear how broadly the law will be implemented. If the authority is well-meaning, as it seems to me to be, the effect of the law may be minimal; if that perspective changes, the effect of the law could be dramatic.

– The blocks are (so far) done at the domain level, it would appear. In other words, instead of blocking a single URL, the blocks affect entire domains (a toy sketch of the difference follows this list). Many other states take this approach, probably for cost or efficiency reasons. Many states in the Middle East/North Africa have blocked entire blogging services at different times, for instance.

– The system in place requires Internet services to register themselves with the Turkish authorities in order to get word of the offending URLs. This requirement is not something that many multinational companies are going to be able or willing to meet, for cost and jurisdictional reasons. Instead of a notice-and-takedown regime for these out-of-state players, there’s a system of shutting down the service and restoring it only after the offending content has been filtered out.
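To illustrate the second point above, here is a toy sketch, with made-up blocklist entries of my own, contrasting a URL-level rule with a domain-level rule. A URL-level rule removes a single offending page; a domain-level rule takes the entire service down with it.

```python
from urllib.parse import urlparse

# Hypothetical blocklist entries, purely for illustration.
URL_BLOCKLIST = {"http://www.youtube.com/watch?v=offending-video"}
DOMAIN_BLOCKLIST = {"youtube.com"}

def blocked_by_url_rule(url):
    """A URL-level rule matches one page and nothing else."""
    return url in URL_BLOCKLIST

def blocked_by_domain_rule(url):
    """A domain-level rule matches the host and all of its subdomains."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in DOMAIN_BLOCKLIST)

for url in ["http://www.youtube.com/watch?v=offending-video",
            "http://www.youtube.com/watch?v=harmless-video"]:
    print(url)
    print("  blocked by URL rule:   ", blocked_by_url_rule(url))
    print("  blocked by domain rule:", blocked_by_domain_rule(url))
```

The harmless video survives the URL-level rule but not the domain-level one, which is the pattern Turkey’s YouTube blocks have followed.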

* * *

The Internet – especially in its current phase of development – is making possible innovation and creativity in terms of content. Today, simple technology platforms like weblogs, social networks, and video-sharing sites are enabling individuals to have greater voice in their societies. These technologies are also giving rise to the creation of new art forms, like the remix and the mash-up of code and content. Many of those who are making use of this ability to create and share new digital works are young people – those born in a digital era, with access to high-speed networks and blessed with terrific computing skills, called “digital natives” – but many digital creators are grown-ups, even professionals.

Turkey is not alone in how it is facing this challenge. The threat of “too much” free expression online is leading to more Internet censorship in more places around the world than ever before. When we started studying Internet censorship five years ago, along with our colleagues in the OpenNet Initiative (from the Universities of Toronto, Cambridge, and Oxford, as well as Harvard Law School), there were a few places – like China and Saudi Arabia – where the Internet was censored.

Since then, there’s been a sharp rise in online censorship, and its close cousin, surveillance. About three dozen countries in the world restrict access to Internet content in one way or another. Most famously, in China, the government runs the largest censorship regime in the world, blocking access to political, social, and cultural critique from its citizens. So do Iran, Uzbekistan, and others in their regions. The states that filter the Internet most extensively are primarily in East Asia, the Middle East and North Africa, and Central Asia.

* * *

Turkey’s choice couldn’t be clearer. Does one choose to embrace the innovation and creativity that the Internet brings with it, albeit along with some risk of people doing and saying harmful things? Or does one start down the road of banning entire zones of the Internet, whether Web sites or new technologies like peer-to-peer services and live videoblogging?

In Turkey, the Internet has to date been largely free from government controls. Free expression and innovation have found homes online, in ways that benefit culture and the economy.

But there are signs that this freedom may be nearing its end in Turkey, through 5651 and how it is implemented. These changes come just as the benefits to be reaped are growing. When the state chooses to ban entire services for the many because of the acts of the few, the threat to innovation and creativity is high. Those states that have erected extensive censorship and surveillance regimes online have found them hard to implement with any degree of accuracy and fairness. And, more costly still, the chilling effect on citizens who rely on the digital world for their livelihood and key aspects of their culture – in fact, on the ability to remake their own cultural objects, the notion of semiotic democracy – is a high price to pay for control.

The impact of the choice Turkey makes in the months to come will be felt over decades and generations. Turkey’s choice also has international ramifications. If Turkey decides to clamp down on Internet activity, it will be lending aid to those who seek to see the Internet chopped into a series of local networks – the China Wide Web, the Iran Wide Web, and so forth – rather than continuing to build a truly World Wide Web.

Francois Leveque on Standards, Patents, and Antitrust

As part of our Berkman@10 celebration this year, we at the Berkman Center tonight welcome Francois Leveque, professor at the Ecole des Mines, Paris, and visiting professor at the faculty of law at UC Berkeley. He’s presenting the findings of two new papers, each co-authored with Yann Meniere: “Technology standards, patents and antitrust” and “Licensing commitments in standard setting organizations.”

Prof. Leveque offers us a series of insights about the interaction of economics and law in the context of patents in the standards setting process. One key finding of his papers: it would be best for consumers and for innovation in general for the licensing of patents by players in standards setting processes to occur ex ante, rather than ex post. More surprisingly, he and M. Meniere argue that, under some circumstances, it may also be better for the patent holder to set the royalty level ex ante. He notes that, in this setting, the interests of consumers and patent owners are aligned. As he goes on to explain, in other settings, these interests may not be so well aligned. Read the papers for more insights, including with respect to the VITA royalty cap policies, ways to mitigate the costs of the risk of hold-up, and his proposal of announcing a royalty cap ex ante as a more flexible means of accomplishing such mitigation while still enabling patent holders to revise the royalties.
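To see why the timing matters, consider a stylized bit of hold-up arithmetic. This is a toy model of my own, not taken from the papers: ex ante, competition from an alternative technology caps the royalty at the patented technology’s incremental value; ex post, once implementers have sunk standard-specific investments, the cost of switching away can be extracted as well.

```python
# Toy hold-up arithmetic (my own illustrative numbers, not from the papers).
incremental_value = 2.0  # value of the patented tech over the best alternative
switching_cost = 5.0     # cost of redesigning around the alternative ex post

max_royalty_ex_ante = incremental_value                   # alternative still viable
max_royalty_ex_post = incremental_value + switching_cost  # alternative now costly

print("max royalty ex ante:", max_royalty_ex_ante)  # 2.0
print("max royalty ex post:", max_royalty_ex_post)  # 7.0, including hold-up premium
```

Committing to a royalty ex ante removes the hold-up premium, which is why it can serve both consumers and, under the conditions the papers describe, patent holders themselves.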

Prof. Leveque very kindly participated in both the Weissbad (Switzerland) and Cambridge (MA, USA) workshops that guided our work on Interoperability and Innovation over the past year. His interventions were crucial to informing our understanding of these complicated matters and he was unusually generous with his input, for which Urs Gasser and I and our teams are extremely grateful.

Yahoo!, the Shi Tao Case, and the Benefit of the Doubt

Rep. Tom Lantos has called on Yahoo! executives to return to Congress to talk about what they knew, and when, in the Shi Tao case. Rep. Lantos alleges that Yahoo!’s general counsel misled a hearing (at which I and others submitted testimony, too) in 2006 by indicating that the company knew less than it actually did about why the Chinese state police were asking for information about Shi, a dissident and journalist. Yahoo! did turn over the information; the Chinese prosecuted Shi; he remains in jail; and the issue continues to point to the single hardest thing about US tech companies doing business in places that practice online censorship and surveillance. The case has led to Congressional hearings, proposed legislation, shareholder motions, and lawsuits against Yahoo!

(For much more on the general topic of Internet filtering and surveillance, see the web site of the OpenNet Initiative, a consortium of four universities — Cambridge, Harvard Law School, Oxford, and Toronto — of which we are a part.)

The hard problem at the core of this issue is that police come to technology companies every day to ask for information about their users. It is a fair point for technology companies to make that they often cannot know much about the reason for the policeman’s inquiry. It could be completely legitimate: an effort to prevent a crime from happening or to bring a criminal to justice. In the United States, these requests come in the context of the rule of law, including a formal reliance on due process. And every once in a while, a technology company pushes back on requests for data of this sort, publicly or privately. The process is imperfect, if you consider it from a privacy standpoint, but it works — a balance is found between the civil liberties of the individual and the legitimate needs of law enforcement to keep us safe and to uphold the rules to which we all agree as citizens.

This hard problem is much harder in the context of, say, China. It’s not the only example, but it’s the example here with Shi Tao. In Yahoo!’s testimony in 2006, Michael Callahan, the executive vice president and general counsel, said that Yahoo! did not know the reasons for the Chinese state police’s request for information about Shi.

You can read the testimony for yourself here. The relevant statement by Mr. Callahan is:

“The Shi Tao case raises profound and troubling questions about basic human rights. Nevertheless, it is important to lay out the facts. When Yahoo! China in Beijing was required to provide information about the user, who we later learned was Shi Tao, we had no information about the nature of the investigation. Indeed, we were unaware of the particular facts surrounding the case until the news story emerged.” (Emphasis mine.)

The key phrase: “No information about the nature of the investigation.” Not that the information was inconclusive, or vague, or hard to translate, or possibly of concern. “No information.”

Now, we are told, there’s a big disagreement about whether that testimony was accurate.

Rep. Lantos, in a statement yesterday, claims that Callahan misled the committee. Lantos writes: “Our committee has established that Yahoo! provided false information to Congress in early 2006. … We want to clarify how that happened, and to hold the company to account for its actions both before and after its testimony proved untrue. And we want to examine what steps the company has taken since then to protect the privacy rights of its users in China.” Rep. Chris Smith (R-NJ) says it more harshly: “Last year, in sworn testimony before my subcommittee, a Yahoo! official testified that the company knew nothing ‘about the nature of the investigation’ into Shi Tao, a pro-democracy activist who is now serving ten years on trumped up charges. We have now learned there is much more to the story than Yahoo let on, and a Chinese government document that Yahoo had in their possession at the time of the hearing left little doubt of the government’s intentions. … U.S. companies must hold the line and not work hand in glove with the secret police.”

Yahoo! responded with its own statement, pasted here in full:

“Yahoo! Statement on Foreign Relations Committee Hearing Announcement
October 16, 2007

“The House Foreign Affairs Committee’s decision to single out Yahoo! and accuse the company of making misstatements is grossly unfair and mischaracterizes the nature and intent of our past testimony.

“As the Committee well knows from repeated meetings and conversations, Yahoo! representatives were truthful with the Committee. This issue revolves around a genuine disagreement with the Committee over the information provided.”

“We had hoped that we could work with the Committee to have an open and constructive dialogue about the complicated nature of doing business in China.”

“All businesses interacting with China face difficult questions of how to best balance the democratizing forces of open commerce and free expression with the very real challenges of operating in countries that restrict access to information. This challenge is particularly acute for technology and communication companies such as Yahoo!.”

“As we have made clear to Chairman Lantos and the Committee on Foreign Affairs, Yahoo! has treated these issues with the gravity and attention they demand. We are engaged in a multi-stakeholder process with other companies and the human rights community to develop a global code of conduct for operating in countries around the world, including China. We are also actively engaged with the Department of State to assist and encourage the government’s efforts to deal with these issues on a diplomatic level.”

“We believe the answers to these broad and complex questions require a constructive dialogue with all stakeholders engaged in a collaborative manner. It is our hope that the Committee will approach the hearing in that same constructive spirit.”

I can understand why Yahoo! is claiming that they are being treated unfairly. Yahoo! has been the company most tarred, in some ways, for a problem that is industry-wide, and one that should be resolved on an industry-wide (or broader, e.g., through national or international law) basis. Yahoo! has been a very constructive player in the ongoing effort to come up with a code of conduct for companies in this position (along with Google, Microsoft, and others). And Yahoo! has been working hard to establish internal practices to head off similar situations and to voice its concern about Chinese policies in this arena. Their efforts since the Shi Tao case on this front have been laudable.

But if in fact the company knew more — even a little bit more — about why the Chinese police came knocking for Shi Tao than what Mr. Callahan led all of us to believe (“no information”), then it is a big problem. Unless there are facts that I’m missing, for the Congress to call Yahoo! back to Capitol Hill to correct the record, in public, is completely appropriate, if “no information” is not what we were meant to understand. It may well be that what the company knew was in fact so vague, as many legal terms are in China, as to be inconclusive. It may well be that someone in the company knew, but the right people didn’t know — and that an internal process was flawed in this case. But those are very different discussions, ones we should have, from the straightforward claim that the company simply had no context for the request.

Because I respect many of the people working hard on this issue within Yahoo!, and credit that Jerry Yang is very well-meaning on this topic, I’ve been willing to give Yahoo! a big benefit of the doubt. After all, a key part of our own legal system — as part of a rule of law that we’ve come to trust here — calls on us to do so. The big problem here for me is if we’ve in fact been misled, all of us, to believe that it was one problem when it really was quite another. If “no information” proves to be inaccurate, I’m not sure how much longer I can keep extending that benefit of the doubt in this case.

(The Merc’s Frank Davies wrote up the story here, among a few hundred other pieces in the last 24 hours. Rebecca MacKinnon, of course, had the story months before (also here) and has already said much of what I’ve said here.)

WaPo on the Myanmar Internet Crackdown

Roby Alampay nails some of the key issues related to Internet governance and international law in an editorial today in the Washington Post. It’s well worth a read, especially if you’ve been following the Myanmar crackdown. Alampay also makes a key link: Internet access should be understood as a human rights issue, and one that those thinking about Internet governance ought to take up.

In relevant part: “States have come far in such discussions and in reaching some levels of consensus. International standards have greater impetus, evidently, when they seek to cap that which they perceive as threatening to the civilized world: child pornography, organized crime, terrorism, and SPAM. This much is understandable.

“What the international community has barely begun to discuss, however, is the other side of the dilemma: What should be the international standard on ensuring Internet accessibility and openness?

“The more compelling Internet story last week took place as far away from Europe as one can get. It was from Burma — via defiant blogs, emails, and phone-cam videos posted online — that the world witnessed the other argument: that when it comes to the Internet (and all forms of media, for that matter) ‘standards’ is a legitimate topic not only with respect to limiting the medium’s (and its users’) potential harm, but more importantly in setting and keeping the medium (and its users) free.”

Three Conversations on Intellectual Property: Fordham, University of St. Gallen, UOC (Catalunya)

Three recent conversations I’ve been part of offered a contrast in styles and views on intellectual property rights across the Atlantic. First, the Fordham International IP conference, which Prof. Hugh Hanson puts on each year (in New York, NY, USA); second, the terrific classes in Law and Economics of Intellectual Property that Prof. Urs Gasser teaches at our partner institution, the University of St. Gallen (in St. Gallen, Switzerland); and finally, today, the Third Congress on Internet, Law & Politics held by the Open University of Catalonia (in Barcelona, Spain), hosted by Raquel Xalabarder and her colleagues.

* * *

Fordham (1)

At Fordham, Jane Ginsburg of Columbia Law School moderated one of the panels. We were asked to talk about the future of copyright. One of the futures that she posited might come into being — and for which Fred von Lohmann and I were supposed to argue — was an increasingly consumer-oriented copyright regime, perhaps even one that is maximally consumer-focused.

– For starters, I am not sure that “consumer” maximization is the way to think about it. The point is that the group that used to be called the consumers are now not just consumers but also creators. It’s the maximization of the rights of all creators, including re-creators, in addition to consumers (those who benefit, I suppose, from experiencing what is in the “public domain”). This case for a new, digitally-inspired balance has been made best by Prof. Lessig in Free Culture and by many others.

– What are the problems with what one might consider a maximalized consumer focus? The interesting and hardest part has to do with moral rights. Prof. Ginsburg is right: this is a very hard problem. I think that’s where the rub comes.

– The panel agreed on one thing: a fight over compulsory licensing is certainly coming. Most argued that the digital world, particularly a Web 2.0 digital world, will lead us over time toward some form of collective, non-exclusive licensing solution, if not a compulsory licensing scheme.

– “Copyright will be a part of social policy. We will move away from seeing copyright as a form of property,” says Tilman Luder, head of the copyright unit at the European Commission’s directorate general for internal markets. At least, he says, that’s the trend in copyright policy in Europe.

* * *

Fordham (2)

I was also on the panel entitled “Unauthorized Use of Works on the Web: What Can be Done? What Should be Done?”

– The first point is that “unauthorized use of works” doesn’t seem quite the relevant frame. There are lots of unauthorized uses of works on the web that are perfectly lawful and present no issue at all: use of works not subject to copyright, re-use where an exception applies (e.g., fair use, implied license, the TEACH Act), and so forth. These uses are relevant to the discussion still, though: they are the types of uses that any response to unauthorized use ought to take care not to sweep in.

– In the narrower frame of unauthorized uses, I think there are a lot of things that can be done.

– The first and most important is to work toward a more accountable Internet. People who today are violating copyright and undermining the ability of creators to make a living off of their creative works need to change. Some of this might well be done in schools, through copyright-related education. The idea should be to put young people in the position of being a creator, so they can see the tensions involved: being the re-user of some works of others, and being the creator of new works, which others may in turn use.

– A second thing is continued work on licensing schemes. Creative Commons is extraordinary. We should invest more in it, build extensions to it, and support those who are extending it on a global level (including in Catalunya!).

– A third thing, along the lines of what Pat Aufderheide and Peter Jaszi are doing with filmmakers, is to establish best practices for industries that rely on ideas like fair use.

– A fourth thing is to consider giving more definition to the unarticulated rights — not the exclusive rights of authors, which we understand well, but the rights of those who would re-use works under exceptions and limitations.

– A fifth area, and likely the discussion that will dominate this panel, is to consider the role of intermediaries. This is a big issue, if not the key issue, in most disputes that crop up across the Internet. Joel Reidenberg of Fordham Law School has written a great deal on this cluster of issues of control and liability and responsibility. CDA Section 230 in the defamation context raises this issue as well. The question of course arose in the Napster, Aimster, and Grokster contexts. Don Verrilli and Alex Macgillivray argued this topic in the YouTube/Viacom context — the topic on which sparks most dramatically flew. They fought over whether Google was offering the “claim your content” technology to all comers or just to those with whom Google has deals (Verrilli argued the latter, Macgillivray the former) and whether an intermediary can really know, in many instances, whether a work is subject to copyright without being told by the creators (Verrilli said that wasn’t the issue in this case; Macgillivray said it’s exactly the issue: in so many cases you can’t tell, which is why DMCA 512 compliance should be the end of the story).

* * *

St. Gallen

Across the Atlantic, Prof. Dr. Urs Gasser and his teaching and research teams at the University of St. Gallen are having a parallel conversation. Urs is teaching a course on the Law and Economics of Intellectual Property to graduate students in law at St. Gallen. He kindly invited me to come teach with him and his colleague Prof. Dr. Beat Schmid last week.

– The copyright discussion took up many of the same topics that the Fordham panelists and audience members were struggling with. The classroom in Switzerland seemed to split between those who took a straight market-based view of the topics generally and those who came at it from a free culture perspective.

– I took away from this all-day class a sense that there’s quite a different set of experiences among Swiss graduate students, as compared to US graduate students, related to user-generated content and the creation of digital identity. The examples I used in a presentation of what Digital Natives mean for copyright looking ahead — Facebook, MySpace, LiveJournal, Flickr, YouTube, and so forth — didn’t particularly resonate. I should have expected this outcome, given that these services are not only US-based but also in English.

– The conversation focused instead on how to address the problem of copyright on the Internet looking forward. The group had read Benkler, Posner, and Shavell, in addition to a group of European writers on digital law and culture. One hard problem buried in the conversation: how much can the traditional Law and Economics approach help in analyzing what to do about copyright from a policy perspective? Generally, the group seemed to believe that Law and Economics could help a great deal, on some levels, though 1) the drivers pushing Internet-based creativity other than straight economic gains, and 2) the extent to which peer production prompts benefits in terms of innovation, make it tricky to put together an Excel spreadsheet to analyze the costs and benefits of a given regulation. I left that room thinking that a Word document might be more likely to work, with inputs from the spreadsheet.

* * *

Barcelona

The UOC is hosting its third Congres Internet i Politica: Noves Perspectives in Barcelona today. JZ is the keynoter, giving the latest version of The Future of the Internet — and How to Stop It. The speech just keeps getting better and better as the corresponding book nears publication. He’s worked in more from StopBadware and the OpenNet Initiative and a new slide on the pattern of Generativity near the end. If you haven’t heard the presentation in a while, you’ll be wowed anew when you do.

– Jordi Bosch, the Secretary-General of the Information Society of Catalonia, calls for respect for two systems: full copyright and open systems that build upon copyright.

– Prof. Lilian Edwards of the University of Southampton spoke on the ISP liability panel, along with Raquel Xalabarder and Miquel Peguera. Prof. Edwards talked about an empirical research project on the project formerly called BT Cleanfeed. BT implements the IWF’s list of sites to be blocked, in her words a blacklist without a set appeals process. According to Prof. Edwards’ slides, the UK government “have made it plain that if all UK ISPs do not adopt ‘Cleanfeed’ by end 2007 then legislation will mandate it.” (She cites Hansard, June 2006, and the Gowers Report.) She points to the problem that there’s no debate about the widespread implementation of this blacklist and no particular accountability for what’s on it and how it is implemented.

– Prof. Edwards’ story has big implications for not just copyright, but also the StopBadware (regarding block lists and how to run a fair and transparent appeals process) and ONI (regarding Internet filtering and how it works) research projects we’re working on. Prof. Edwards’ conclusion, though, was upbeat: the ISPs she’s interviewed had a clear sense of corporate social responsibility, which might map to helping to keep the Internet broadly open.

For much better coverage than mine, including photographs, scoot over to ICTology.