Internet and the United Nations

I spent a few recent plane flights reading Paul Kennedy’s The Parliament of Man: The Past, Present, and Future of the United Nations. It’s a fine history of the UN, worth reading to be sure. (I loved his book from the late 1980s, The Rise and Fall of the Great Powers.) Kennedy starts with, but does not long linger on, the period leading up to Bretton Woods and San Francisco and other meetings (i.e., the interesting but unsatisfying story of the League of Nations and what came before it). Most of the book, organized thematically (phew!) rather than chronologically, takes up the UN’s treatment of key issues like security, peacekeeping, and economic development.

What sets the book apart, for me, is the treatment of “other” topics, such as the environment, children’s issues, and cultural issues (what he calls the “softer face” of the UN), as well as human rights. Kennedy is not uncritical in his treatment of the UN’s role in these areas, but he seems to see in these activities great importance and even greater promise: “… it is difficult to imagine how much more riven and ruinous our world of six billion people would be today had there been no UN social, environmental, and cultural agendas — and no institutions to attempt, sometimes well and sometimes poorly, to put them into practice on the ground. It is a mixed record, but it is hard to see how it could be otherwise.” (p. 176). Amen.

In the human rights context, Kennedy lauds the work of Mary Robinson (pp. 197-8) and others, while noting the many tensions that lurk in the treatment of human rights in the various relevant charters and institutions of the UN. One of these tensions bears on an issue that we’ve been working on at the Berkman Center for some time. In our shared work on the OpenNet Initiative (with Toronto, Oxford, and Cambridge), and with other partners in the related context of corporate ethics (Berkeley, St. Gallen, CDT), we’ve been puzzling over the sovereignty of states and the rights of individuals to civil liberties. On the one hand, of course, the several dozen states that filter the Internet and practice online surveillance (we imagine) of their citizens and visitors have a right to regulate activity within the jurisdiction that they control. On the other hand, the UN and its member states, through a series of treaties, have set forth the understanding that there are certain rights that attach to any individual in the signatory states regardless of the (good or bad) decisions that those states might make to abridge those rights. Kennedy frames much of the chapter on human rights in this same context: “How are world citizens and their governments to reconcile universal human rights with claims for state sovereignty?”

Those of us who study the Internet and care about human rights haven’t yet made the case clearly for where the rights of free expression and privacy in the Internet context fit in this balance. Many of us no doubt have strong convictions about which side of the ledger filtering and surveillance fall on; others, I know, see the issue as tricky and nuanced. There’s a field emerging here with enormous significance. The ability of activists in repressive regimes to rely upon the Internet is but one of the important things that hangs in the balance. I suspect that there are many captains of industry at large technology companies who feel caught in a purgatory wrought by this tension.

The most notable thing to me about Kennedy’s book — through no fault of his, to be clear — is the extent to which Internet plays essentially no role in the story of the UN’s first 60 years. The word appears four times in the text if the index is to be believed, and after reading the whole thing, I believe the index maker to have been accurate. No doubt the ITU or WSIS or the UN ICT Task Force could have made it into the text (they didn’t), but lots of other significant activities were likewise left out, understandably.

For Kennedy, Internet seems to be an alternative way to tell the world the news (i.e., the next chapter in the trajectory that starts with radio then goes to TV — and now it’s the net). That’s one way to talk about it, I suppose. The most extensive treatment appears on page 236: “… a more in-depth investigation of the place of news and cultural communications in the evolution of international affairs would need to consider the pervasive and transnational nature of the Internet. Since it has grown so fast in the past decade, and its popularity is exploding in the giant states of India and China, it is extremely difficult to get a good measure of its many impacts; but it seems fair to remark that because this is a medium that can be used and abused by anyone with electricity and a computer, it may become less and less a Western-dominated instrument.” An understatement, to be sure; and I am not certain that Kennedy is thinking of states as the abusers, but rather individuals — though the sentence is ambiguous enough that maybe my reading is wrong.

I can’t imagine that the history of the UN in 2065, written by the next eminent historian and chairman of a blue-ribbon commission, will have so little to say about information and communications technologies and the UN’s role in our field, but maybe it will — and maybe, though I am not so sure, that would be a wonderful thing if it were to come to pass.

Sounds like fair use to me (and it should be, if it's not)

Ethan Zuckerman blogged Erin McKean’s talk at PopTech, reporting on the fear of some lexicographers that they will be sued for scanning books to analyze language patterns. “This scanning shouldn’t be threatening to publishers. ‘I don’t care about your plot, or your ideas – I just want to analyze your use of the language.’ It should be considered fair use… ‘but this is America – anyone can sue anyone for anything.’ And just the threat of a lawsuit is enough to prevent lexicographers from analysing some texts.”

EZ goes on: “She begs us to make changes to the copyright pages of our books so that lexicographers have the explicit right to analyze them. (I’ll be putting the idea in front of Larry Lessig, to see if this can be yet another selling point for Creative Commons.)”
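Her point about non-expressive use is easy to make concrete. Here is a minimal sketch, in Python and purely illustrative (the input file and function are my hypothetical inventions, not any lexicographer’s actual tool), of the kind of analysis at issue: a scanned book goes in, and only aggregate statistics about language use come out.

```python
# Purely illustrative: reduce a scanned text to usage statistics.
# The expressive content (plot, ideas, sequence) is discarded;
# only aggregate facts about the language remain.
import re
from collections import Counter

def usage_profile(text: str, n: int = 10) -> dict:
    """Return word and collocation frequencies for a text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    bigrams = Counter(zip(words, words[1:]))  # two-word collocations
    return {
        "total_words": len(words),
        "distinct_words": len(counts),
        "top_words": counts.most_common(n),
        "top_bigrams": bigrams.most_common(n),
    }

if __name__ == "__main__":
    text = open("scanned_book.txt").read()  # hypothetical input file
    print(usage_profile(text)["top_bigrams"])
```

The output is a handful of counts, which is the lexicographer’s whole interest; it is hard to see what market for the original work such a use could harm.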

Armstrong: Digital Natives, beware…

Tim Armstrong, former Berkman fellow and now a prof at the U of C, writes: “… the permanence of networked information has costs, too, which (like the benefits) are only beginning to be explored. Members of the generation just behind mine, who have grown up reflexively creating and posting information online, are learning that digital is forever — if you’re a job applicant (or even a camp counselor), anything that has ever been written by (or about) you online is, at least potentially, still there. (Back in my day, we used goofy aliases to hide our online identities; but I gather that practice has been fading.) Once information is online, it turns out, it may become quite hard ever to get it back offline again — the Wayback Machine preserves old web pages; Google Groups archives Usenet posts; and it’s only a matter of time before somebody comes up with the magic bullet that automatically archives IRC and IM conversations and makes them searchable. Even your deleted e-mails aren’t necessarily gone; they may still exist on backup tapes where law enforcement authorities can get them. The durability of digital content raises problems that touch on both informational security and individual privacy.”

Lessig: What YouTube teaches us about Net Neutrality

Lawrence Lessig has an op-ed in the FT today. He uses the YouTube story to make utterly plain why we should care about the outcome of the Net Neutrality debate — competition, access, innovation, creativity, just for starters. He writes:

“YouTube could beat Google because the internet provided a level playing field. The owners of pipes delivering video content to users on the internet did not prefer one service over the other. The owners of pipes simply passed the packets of data to users as the users chose. No doubt Google and YouTube worked to make that content flow as fast as possible by buying caching servers and fast connections. But once it was on the internet, the network owner showed no preference, serving each competitor equally.

“Network owners now want to change this by charging companies different rates to get access to a “premium” internet. YouTube, or blip.tv, would have to pay a special fee for their content to flow efficiently to customers. If they do not pay this special fee, their content would be relegated to the “public” internet – a slower and less reliable network. The network owners would begin to pick which content (and, in principle, applications) would flow quickly and which would not.

“If America lived in a world of real competition among broadband providers, there would be little reason to worry about such deals. But it does not live in that world. …” Read on!

Special Copyright Podcast

The Berkman Center’s increasingly terrific new media production team has rolled together this special-edition podcast on copyright in the context of teaching and learning. It’s an extension of the work done on the Digital Learning Challenge, led by Prof. Terry Fisher (the first voice you hear on the podcast) and former Berkman fellow, now Prof. William McGeveran, and funded by the Mellon Foundation. The theme of uncertainty in the digital copyright realm is particularly real in the context of using works in teaching and research, despite all manner of reasons why we wouldn’t want that to be so.

Negative campaigning, one step (way) too far

Deval Patrick, the front-running candidate for governor of Massachusetts (and my preferred candidate), sent out a blast e-mail just now that detailed a nightmare that his sister went through with her husband. Mr. Patrick argues that his opponent, Lieutenant Governor Kerry Healey, has made this issue a public one in the context of the gubernatorial campaign.

“My sister and her husband went through a difficult time, and through hard work and prayer, they repaired their relationship and their lives. Now they and their children — who knew nothing of this — have had their family history laid out on the pages of a newspaper. Why? For no other reason than that they had the bad luck to have a relative who is running for governor. It’s pathetic and it’s wrong. By no rules of common decency should their private struggles become a public issue.”

If true, I couldn’t agree more.  (Healey says it’s not true.)  Somebody, no doubt antagonistic to Mr. Patrick, leaked this story.  The general point still stands: there are already too many disincentives to entering public life in America, particularly through the electoral process.

As a related matter, the Lieutenant Governor has made an issue of the fact that Mr. Patrick’s running mate, Worcester Mayor Tim Murray, defended accused sex offenders as a defense attorney. As the AP reported, “On Friday, Healey opened a fresh line of attack, criticizing Worcester Mayor Tim Murray, the Democratic nominee for lieutenant governor and Patrick’s running mate, for handling appeals of people challenging their classification by the Sex Offender Registry Board. Murray is a defense attorney, but he said he took some of the cases at the request of the court. ‘I know that the court needs people to take these cases, and that it’s part of our adversarial system,’ Healey said. ‘The question is, simply, ‘Is that the priority that you want to have your next governor and lieutenant governor to have?'”

The Lieutenant Governor’s posture on this issue is almost as maddening as the possible leak of a Patrick family matter. Should helping people defend their Constitutional rights, whether or not they are guilty, disqualify someone from holding state office? Again, what an irresponsible assault on what it means to be a public servant.

Making a Market Emerge out of Digital Copyright Uncertainty

The digital copyright issue is one sidebar to the Google/YouTube transaction that has merited a fair amount of digital ink.

(For a few examples: don’t miss Fred von Lohmann as interviewed by John Battelle. Declan McCullagh and Anne Broache have an extensive piece highlighting the continuing uncertainty in the digital copyright space and quoting experts like Jessica Litman. Steve Ballmer brings it up in his BusinessWeek interview on the deal, asking, “And what about the rights holders?” And the enormously clever Daniel Hausermann has an amusing take on his new blog.)

My view (in large measure reflected in the WSJ here, in a discussion with Prof. Stan Liebowitz) is that Google is taking on some, but not all that much, copyright risk in its acquisition of YouTube. Google has already proven its mettle in offering services that require a reasonably high appetite for copyright risk: witness the lawsuits filed by the likes of the publishing industry at large; the pornographer Perfect 10; and Agence France Presse. There’s no doubt that Google will have to respond to challenges on both secondary copyright liability and direct copyright liability as a result of this acquisition. If they are diligent and follow the advice of their (truly) brilliant legal team, I think Google should be able to withstand these challenges as a matter of law.

The issue that pops back out the other side of this flurry of interest is the broader question of continued uncertainty with respect to digital copyright. Despite what I happen to consider a reasonably good case in Google’s favor on these particular facts (so far as I know them), there is an extraordinary amount of uncertainty on digital copyright issues in general. Mark Cuban’s couple of posts on this topic are particularly worth reading; there are dozens of others.

Many business models in the Web 2.0 industry in particular hinge on the outcome of this uncertainty. A VC has long written about “the rights issues” at the core of many businesses that are built, or will be built, on what may be the sand — or what may turn out to be a sound foundation — of “micro-chunked” content. Lawrence Lessig has written the most definitive work on this topic, especially in the form of his book, Free Culture. The RSS-and-copyright debate is one additional angle on this topic. Creative Commons licenses can help to clarify the rights associated with micro-chunked works embedded in, or syndicated via, RSS feeds.
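On that last point, the mechanics are already straightforward. As a sketch of how a license can travel with syndicated content, here is a short Python snippet that emits a feed carrying license terms via the creativeCommons RSS module; the feed contents are invented for illustration, and treat the exact module details as my assumption rather than gospel.

```python
# A sketch of license terms traveling inside an RSS feed via the
# creativeCommons RSS module, so aggregators can read the terms
# mechanically. All titles and URLs below are invented examples.
FEED = """<?xml version="1.0"?>
<rss version="2.0"
     xmlns:creativeCommons="http://backend.userland.com/creativeCommonsRssModule">
  <channel>
    <title>An Example Vlog</title>
    <link>http://example.org/vlog</link>
    <description>Micro-chunked video with its license attached.</description>
    <creativeCommons:license>http://creativecommons.org/licenses/by-sa/2.5/</creativeCommons:license>
    <item>
      <title>Episode 7</title>
      <link>http://example.org/vlog/7</link>
      <!-- an item-level license overrides the channel default -->
      <creativeCommons:license>http://creativecommons.org/licenses/by-nc/2.5/</creativeCommons:license>
    </item>
  </channel>
</rss>"""

print(FEED)
```

An aggregator that understands the module can honor per-item terms without a lawyer in the loop, which is precisely the sort of clarification the paragraph above calls for.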

Part of the answer could come from the courts and the legislatures of the world. But I’m not holding my breath. A large number of lawsuits in the music and movies context has left us with a clearer understanding of the rules around file-sharing, but not with enough clarity that the next generation of issues (including those to which YouTube and other web 2.0 applications give rise) is well sorted.

Another part of the answer to this digital copyright issue might be provided by the market. One might imagine a process by which citizens who create user-generated content (think of a single YouTube video file or a syndicated vlog series, a podcast audio file or series of podcasts, a single online essay or a syndicated blog, a photo that perfectly captures a breaking news story or a series of evocative images, and so forth) might consistently adopt a default license (one of the CC licenses, or an “interoperable” license that enables another form of commercial distribution; I am persuaded that as much interoperability of licenses as possible is essential here) for all content that they create, with the ability also to adopt a separate license for an individual work that they may create in the future.

In addition to choosing this license (or these licenses) for their work, these users would register the work or works, with licenses attached, in a central repository. Those who wished to reproduce these works would be on notice to check this repository, ideally through a very simple interface (possibly “machine-readable” as well as “human-readable” and “lawyer-readable,” to use the CC language), to determine the terms on which the creator is willing to enable the work to be reproduced (though not affecting in any way the fair use, implied license, or other grounds via which the works might otherwise be reproduced).
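To make the registry idea concrete, here is a minimal sketch under my own assumptions; the LicenseRegistry class and its methods are hypothetical inventions for illustration, not a description of any existing service.

```python
# A minimal, hypothetical sketch of the registry idea: creators record
# a default license plus per-work overrides, and would-be reusers query
# the registry in machine-readable form before reproducing a work.
from dataclasses import dataclass, field

@dataclass
class LicenseRegistry:
    defaults: dict = field(default_factory=dict)    # creator -> default license
    overrides: dict = field(default_factory=dict)   # (creator, work_id) -> license

    def register_default(self, creator: str, license: str) -> None:
        """'Unless I say otherwise, it's BY-SA.'"""
        self.defaults[creator] = license

    def register_work(self, creator: str, work_id: str, license: str) -> None:
        """'... but for this picture, I am retaining all rights.'"""
        self.overrides[(creator, work_id)] = license

    def lookup(self, creator: str, work_id: str) -> str:
        """What a reuser checks before reproducing a work. An empty
        result means only that no license is on file; fair use and
        other defenses are unaffected."""
        return self.overrides.get((creator, work_id),
                                  self.defaults.get(creator, "no-license-registered"))

# Usage: a vlogger sets a sharing default but reserves one photo.
registry = LicenseRegistry()
registry.register_default("alice", "CC BY-SA")
registry.register_work("alice", "photo-123", "All rights reserved")
print(registry.lookup("alice", "vlog-episode-7"))  # -> CC BY-SA
print(registry.lookup("alice", "photo-123"))       # -> All rights reserved
```

The separation of a standing default from per-work overrides is what lets a creator share broadly while still reserving all rights in a particular work, a pattern the list of benefits below presumes.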

Some benefits of such a system:

– It would not affect the existing rights of copyright holders (or the public, for that matter, on the other side of the copyright bargain), but rather ride on top of that system (which might have the ancillary benefit of eventually permitting a global market to emerge, if licenses can be transposed effectively);

– It would allow those who wish to clarify the terms on which they are willing to have their works reproduced to do so in a default manner (i.e., “unless I say otherwise, it’s BY-SA”) but also to carve out some specific works for separate treatment (i.e., “… but for this picture, I am retaining all rights”);

– It might provide a mechanism, supplemental to CC licenses, for handshakes to take place online without lawyers involved;

– It might be coupled with a marketplace for automated licensing — and possibly clearance services — from creators to those who wish to reproduce the works;

– It could be adopted on top of (and in a complementary manner with respect to) other systems: not just the copyright system at large, but also worthy services and aggregators of web 2.0 content, ranging from YouTube to software providers like SixApart, FeedBurner, Federated Media, Brad Feld’s posse of VCs, and so forth; and,

– It would represent a community-oriented creation of a market, which ultimately could support the development of a global market for both sharing and selling of user-generated content.

This system would not have much bearing on the Google/YouTube situation, but it might serve a key role in the development of web 2.0, or of user-generated content in general, and help avoid a copyright trainwreck.

Curricular Reform at Harvard Law School

Last week, Harvard Law School adopted substantial changes to its first-year curriculum. The official announcement is here.

These changes are important for several reasons. On the simplest level, these changes are the first adjustments to the much-vaunted HLS first-year curriculum in over one hundred years, as the NYTimes’s Jonathan Glater pointed out in his story. The 19th century design of this curriculum has served many of us — students, lawyers, law teachers, maybe even society at large — very well. But the practice of law has changed enormously over that century-plus; well-reasoned change, reflecting those changes in practice, seems much in order as a general matter.

These particular curricular reforms happen also to be terrific choices. A process led by Professor Martha Minow over a few years, including massive consultation, produced the proposal that passed the faculty unanimously — a sure sign that the proposal was well-crafted. (If you are unfamiliar with the history of the Harvard Law School’s faculty, the point about unanimity may seem unremarkable. But it is remarkable, truly; a testament to the leadership of both our dean, Elena Kagan, and of Prof. Minow.) The three major changes to the curriculum are that students will take a course in legislation and regulation; one of a few choices in international law; and a course on legal problem solving. These changes mean that there will inevitably be less emphasis in the first year on the traditional slate of courses (torts, contracts, civil procedure, and so forth), but the basic structure that has worked so well over time has been preserved. One big scheduling change for HLS first-years is that they will have an intensive winter-term course, just as the second- and third-year students already do. The winter term is a great idea, as it allows for a different, and differently effective, mode of teaching some courses. Students take only one class during January, which meets every day, and they focus solely on that one subject. Taken together, these changes are geared toward ensuring that law students are better prepared for the profession they will enter, whether as practicing lawyers in a firm, public servants of various sorts, or businesspeople in a global economy.

On the occasion of the unanimous faculty vote, Dean Kagan wrote: “This marks a major step forward in our efforts to develop a law school curriculum for the 21st century. Over 100 years ago, Harvard Law School invented the basic law school curriculum, and we are now making the most significant revisions to it since that time. Thanks to yesterday’s unanimous faculty vote, we will add new first-year courses in international and comparative law, legislation and regulation, and complex problem solving — areas of great and ever-growing importance in today’s world. I am extraordinarily grateful to the entire faculty for its vision and support of these far-reaching reforms, which I am confident will give our students the best possible training for the leadership positions they will soon occupy.”

(Volokh Conspiracy, by contrast, has less positive things, or perhaps just more skeptical things, to say.)

As a variant on the same theme, several of us at the Berkman Center for Internet & Society at Harvard Law School are looking at the question of whether, and how, technology should be factored into the law school curriculum more so than it is today at HLS and many other schools. Over the course of this fall, we’re working with partners at Lexis-Nexis on a survey of lawyers and a white paper on ways that technology might appropriately be used in the teaching of law. The project is being spearheaded by new Berkman fellow Gene Koo. While on a much smaller scale than the curriculum reform just passed at HLS, this research project is intended to be in step with the hard look at whether law teaching today prepares students well for the practice of law.

As a footnote: the Harvard Crimson notes that the unanimous vote of our faculty in favor of this broad first-year curricular reform is good news for those hoping that Dean Kagan (of Harvard Law School) will become President Kagan (of Harvard University). I agree with Professor Elhauge, who says, “I hope we don’t lose her to the university. But I don’t think they could find anyone better to be President.”