Henry N. Ess III Chair Lecture Notes

I’m preparing for a lecture tonight at Harvard Law School.  Here’s the abstract:

The Path of Legal Information

November 9, 2010

I propose a path toward a new legal information environment that is predominantly digital in nature. This new era grows out of a long history of growth and change in the publishing of legal information over more than nine hundred years, from the early manuscripts at the roots of English common law in the reign of the Angevin King Henry II; through the early printed treatises of Littleton and Coke in the fifteenth, sixteenth, and seventeenth centuries (including those in the extraordinary collection of Henry N. Ess III); to the systemic improvements introduced by Blackstone in the late eighteenth century; to the modern period, ushered in by Langdell and West at the end of the nineteenth century. Now, we are embarking upon an equally ambitious venture to remake the legal information environment for the twenty-first century, in the digital era.

We should learn from advances in cloud computing, digital naming systems, and youth media practices, as well as from classical modes of librarianship, as we envision – and, together, build – a new system for recording, indexing, writing about, and teaching what we mean by the law. A new legal information environment, drawing comprehensively on contemporary technology, can improve access to justice for the traditionally disadvantaged, including persons with disabilities; enhance democracy; promote innovation and creativity in scholarship and teaching; and spur economic development. This new legal information architecture must be grounded in a reconceptualization of the public sector’s role, and it must draw in private parties, such as Google, Amazon, Westlaw, and LexisNexis, as key intermediaries to legal information.

This new information environment will have unintended – and sometimes negative – consequences, too. This trajectory toward openness is likely to change the way that both professionals and the public view the law and the process of lawmaking. Hierarchies between those with specialized knowledge and power and those without will continue to erode. Lawyers will have to rely upon an increasingly broad range of skills, rather than serving as gatekeepers to information, to command high wages, just as new gatekeepers emerge to play increasingly important roles in the legal process. The widespread availability of well-indexed digital copies of legal work-products will also affect how lawmakers of all types think and speak, in ways that are hard to anticipate. One indirect effect of these changes, for instance, may be a greater receptivity on the part of lawmakers to calls for substantive information privacy rules for individuals in a digital age.

An effective new system will not emerge on its own; the digital environment, like the physical one, is a built environment. As lawyers, teachers, researchers, and librarians, we share an interest in the way in which legal information is created, stored, accessed, manipulated, and preserved over the long term. We will have to work together to overcome several stumbling blocks, such as state-level assertions of copyright. As collaborators, we could design and develop this new environment together over the next decade or so. The net result — if we get it right — will be improvements in the way we teach and learn about the law and in how the system of justice functions.

Eszter Hargittai on Digital Na(t)ives

We have the great pleasure today at the Berkman Center of hearing from Eszter Hargittai, a professor at Northwestern, on her large-scale research project on how 18- and 19-year-olds use digital technologies. She has also worked on problems related to what she calls the “second-level digital divide” over the past decade or so. She surveyed over 1,000 students at UIC (the University of Illinois at Chicago), one of the most diverse research universities.

A set of important take-aways: she’s found a correlation between gender and the likelihood of creating and sharing digital content (women were less likely than men to share content online that they’ve created). But it turns out that skill level, not gender, is actually the relevant factor: if you control for skill level, the gender difference goes away. She is also trying to figure out what these gaps mean for life chances and opportunities.
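To make the “control for skill” point concrete, here is a minimal, purely illustrative sketch in Python (synthetic data and an ordinary least-squares regression of my own invention, not Eszter’s dataset or her methods). It shows how a raw gender gap in sharing can vanish once a skill measure is added as a control.

```python
# Illustrative only: synthetic data in which "sharing" depends solely on skill,
# while skill is unevenly distributed by gender. Controlling for skill makes
# the raw gender gap disappear, which is the shape of the finding described above.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)                      # 0 = women, 1 = men (toy coding)
skill = 3.0 + 1.0 * gender + rng.normal(0, 1, n)    # assumed skill gap by gender
sharing = 0.8 * skill + rng.normal(0, 1, n)         # sharing driven by skill only

def ols(y, *cols):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones_like(y), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_raw = ols(sharing, gender)          # [intercept, gender]
b_ctl = ols(sharing, gender, skill)   # [intercept, gender, skill]
print(f"gender coefficient, no control:    {b_raw[1]:.2f}")  # ~0.8 (spurious gap)
print(f"gender coefficient, skill held:    {b_ctl[1]:.2f}")  # ~0.0 (gap vanishes)
print(f"skill coefficient:                 {b_ctl[2]:.2f}")  # ~0.8
```

The numbers are meaningless in themselves; the point is only the shape of the analysis, in which the apparent gender effect is an artifact of the skill gap built into the synthetic data.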

Her research homes in on the fact that what matters for digital inequality is skill differences, not just differences in access to technology. We need to provide training and education for kids in addition to access to the network. These findings — good news for her — are consistent with Eszter’s extensive body of work to date. And she’s plainly right. (This is much of what Urs Gasser and I are arguing in our book, Born Digital; we have to figure out how to say it half as elegantly as Eszter does.)

Eszter has an article coming out very soon, in a volume co-edited by danah boyd and Nicole Ellison, which makes a related set of claims. Her data inform the question of who uses social-networking sites (SNS). Women, she finds, are more likely to use SNSes than men (other than in the case of Xanga, where the numbers are reversed). People whose parents have lower educational backgrounds (which apparently correlates with lower socio-economic status, or SES) are more likely to be MySpace users, and those whose parents have higher educational backgrounds are more likely to use Facebook. (These data lead to conclusions much like what danah boyd claimed recently, and which kicked up a bit of a storm. See the 297 comments on danah’s blog.)

If you missed Eszter’s talk, it’s worth catching it online at MediaBerkman.

(Separately: she’s also got thoughtful comments on her blog about our pending Cookie Crumbles video contest.)

Throwing Code Over the Wall to Non-Profits

Total blue sky, inspired in part by a wonderful gathering pulled together by Jake Shapiro at PRX and Vince Stehle at the Surdna Foundation, picking up on thoughts from various contexts:

If I could start (or otherwise will into existence) any non-profit right now, what it would do is develop and apply code for non-profit organizations that are under-using new information technologies for core communications purposes. The organization would be made up primarily of smart, committed young coders and project managers who know how to take open source and other Web 2.0-type tools and apply them to connect with communities of interest. (Perhaps some coders would volunteer, too, on a moonlighting basis.)

There are a bunch of problems it would be designed to solve. Lots of non-profit organizations, such as public media organizations, local initiative campaigns, or NGOs in fields like human rights, would like to leverage new technologies in the public interest — to reach new audiences for their work and to build communities around ideas — but have no clue how to go about doing it.

I think the stars are aligned for such a non-profit to make a big difference at this moment of wild technological innovation. There are lots of relevant pieces that are ready to be put together. Ning and many others have developed platforms that could be leveraged. SourceForge has endless tools for the taking, ready to be applied to solve problems. Blogs, wikis, social networks (think of the Facebook open API), and Second Life (or whatever you’d like to experiment with in the participatory media space) are also easy to put to work, if you know how. Most small organizations know that Digital Natives (and many others) are spending lots of their lives online. There are others who do things like this — consider the wonderful Tactical Tech in the global environment, as well as those who do development for political campaigns, like Blue State Digital — whose learning might be leveraged here. There is plenty of “pain in the marketplace,” as the venture guys might say. There are smart coders coming out of schools who want to do well enough by doing good in a mission-driven organization (think of the geekiest members of the Free Culture movement). The goal would be to take these technologies and make them work for carefully targeted customers in the non-profit space.

The non-profit would require a reasonable pile of start-up capital to get set up and to have ballast for lean times, but it would have a revenue model. It would charge for its services, on an overall break-even basis. It would not develop things for free; it would develop things for cheap(er), and with real expertise, for non-profits that need access to the technologies. (One could imagine a sliding scale based upon resources and revenue and so forth.) It would also have a training services arm. Clients would be required to pay for some training, too, so that each client organization would have the internal capacity to maintain the tools developed for it.

I could imagine it loosely based in a big, open, low-rent space in Central Square in Cambridge, right between MIT and Harvard, with collaborators around the world. I suspect there are others doing something like this, but I am constantly surprised by the number of times I am at meetings or conferences where prospective customers tell me they don’t have a provider for their needs.

Three Conversations on Intellectual Property: Fordham, University of St. Gallen, UOC (Catalunya)

Three recent conversations I’ve been part of offered a contrast in styles and views on intellectual property rights across the Atlantic. First, the Fordham International IP conference, which Prof. Hugh Hansen puts on each year (in New York, NY, USA); then the terrific classes in Law and Economics of Intellectual Property that Prof. Urs Gasser teaches at our partner institution, the University of St. Gallen (in St. Gallen, Switzerland); and finally, today, the Third Congress on Internet, Law & Politics held by the Open University of Catalonia (in Barcelona, Spain), hosted by Raquel Xalabarder and her colleagues.

* * *

Fordham (1)

At Fordham, Jane Ginsburg of Columbia Law School moderated one of the panels. We were asked to talk about the future of copyright. One of the futures that she posited might come into being — and for which Fred von Lohmann and I were supposed to argue — was an increasingly consumer-oriented copyright regime, perhaps even one that is maximally consumer-focused.

– For starters, I am not sure that “consumer” maximization is the way to think about it. The point is that the group that used to be called the consumers is now made up not just of consumers but also of creators. It’s the maximization of the rights of all creators, including re-creators, in addition to consumers (those who benefit, I suppose, from experiencing what is in the “public domain”). This case for a new, digitally-inspired balance has been made best by Prof. Lessig in Free Culture and by many others.

– What are the problems with what one might consider a maximized consumer focus? The most interesting and hardest part has to do with moral rights. Prof. Ginsburg is right: this is a very hard problem. I think that’s where the rub comes.

– The panel agreed on one thing: a fight over compulsory licensing is certainly coming. Most argued that the digital world, particularly a Web 2.0 digital world, will lead us toward some form of collective, non-exclusive licensing solution — if not a compulsory licensing scheme — over time.

– “Copyright will be a part of social policy. We will move away from seeing copyright as a form of property,” says Tilman Luder, head of the copyright unit in the European Commission’s Directorate General for the Internal Market. At least, he says, that’s the trend in copyright policy in Europe.

* * *

Fordham (2)

I was also on the panel entitled “Unauthorized Use of Works on the Web: What Can be Done? What Should be Done?”

– The first point is that “unauthorized use of works” doesn’t seem quite the relevant frame. There are lots of unauthorized uses of works on the web that are perfectly lawful and present no issue at all: use of works not subject to copyright, re-use where an exception applies (fair use, implied license, or the TEACH Act, for example), and so forth. These uses are relevant to the discussion still, though: these are the types of uses that are

– In the narrower frame of unauthorized uses, I think there are a lot of things that can be done.

– The first and most important is to work toward a more accountable Internet. People who today are violating copyright and undermining the ability of creators to make a living from their creative works need to change. Some of this might well be done in schools, through copyright-related education. The idea should be to put young people in the position of being a creator, so they can see the tensions involved: being the re-user of some works of others, and being the creator of new works, which others may in turn use.

– A second thing is continued work on licensing schemes. Creative Commons is extraordinary. We should invest more in it, build extensions to it, and support those who are extending it on a global level (including in Catalunya!).

– A third thing, along the lines of what Pat Aufderheide and Peter Jaszi are doing with filmmakers, is to establish best practices for industries that rely on ideas like fair use.

– A fourth thing is to consider giving more definition to the unarticulated rights — not the exclusive rights of authors, which we understand well, but the rights of those who would re-use works, grounded in exceptions and limitations.

– A fifth area, and likely the discussion that will dominate this panel, is to consider the role of intermediaries. This is a big issue, if not the key issue, in most questions that crop up across the Internet. Joel Reidenberg of Fordham Law School has written a great deal on this cluster of issues of control, liability, and responsibility. Section 230 of the CDA raises this issue in the defamation context as well. The question of course arose in the Napster, Aimster, and Grokster contexts. Don Verrilli and Alex Macgillivray argued this topic in the YouTube/Viacom context — the topic on which sparks flew most dramatically. They fought over whether Google was offering the “claim your content” technology to all comers or just to those with whom Google has deals (Verrilli argued the latter, Macgillivray the former), and over whether an intermediary could really know, in many instances, whether a work is subject to copyright without being told by the creators (Verrilli said that wasn’t the issue in this case; Macgillivray said it’s exactly the issue, and that you can’t tell in so many cases, which is why DMCA 512 compliance should be the end of the story).

* * *

St. Gallen

Across the Atlantic, Prof. Dr. Urs Gasser and his teaching and research teams at the University of St. Gallen are having a parallel conversation. Urs is teaching a course on the Law and Economics of Intellectual Property to graduate students in law at St. Gallen. He kindly invited me to come teach with him and his colleague Prof. Dr. Beat Schmid last week.

– The copyright discussion took up many of the same topics that the Fordham panelists and audience members were struggling with. The classroom in Switzerland seemed to split between those who took a straight market-based view of the topics generally and those who came at it from a free culture perspective.

– I took away from this all-day class a sense that there’s quite a different set of experiences among Swiss graduate students, as compared to US graduate students, related to user-generated content and the creation of digital identity. The examples I used in a presentation of what Digital Natives mean for copyright looking ahead — Facebook, MySpace, LiveJournal, Flickr, YouTube, and so forth — didn’t particularly resonate. I should have expected this outcome, given that these services are not only US-based but also in English.

– The conversation focused instead on how to address the problem of copyright on the Internet looking forward. The group had read Benkler, Posner, and Shavell, in addition to a group of European writers on digital law and culture. One hard problem buried in the conversation: how much can the traditional Law and Economics approach help in analyzing what to do about copyright from a policy perspective? Generally, the group seemed to believe that Law and Economics could help a great deal, on some levels, though 1) the drivers pushing Internet-based creativity other than straight economic gains, and 2) the extent to which peer production yields benefits in terms of innovation, make it tricky to put together an Excel spreadsheet to analyze the costs and benefits of a given regulation. I left that room thinking that a Word document might be more likely to work, with inputs from the spreadsheet.

* * *

Barcelona

The UOC is hosting its third Congres Internet i Politica: Noves Perspectives in Barcelona today. JZ is the keynoter, giving the latest version of The Future of the Internet — and How to Stop It. The speech just keeps getting better and better as the corresponding book nears publication. He’s worked in more material from StopBadware and the OpenNet Initiative, and a new slide on the pattern of Generativity near the end. If you haven’t heard the presentation in a while, you’ll be wowed anew when you do.

– Jordi Bosch, the Secretary-General of the Information Society of Catalonia, calls for respect for two systems: full copyright and open systems that build upon copyright.

– Prof. Lilian Edwards of the University of Southampton spoke on the ISP liability panel, along with Raquel Xalabarder and Miquel Peguera. Prof. Edwards talked about an empirical research project on what was formerly called the BT Cleanfeed project. BT implements the IWF’s list of sites to be blocked, in her words a blacklist without a set appeals process. According to Prof. Edwards’ slides, the UK government “have made it plain that if all UK ISPs do not adopt ‘Cleanfeed’ by end 2007 then legislation will mandate it.” (She cites Hansard, June 2006, and the Gowers Report.) She points to the problem that there’s no debate about the widespread implementation of this blacklist and no particular accountability for what’s on it or how it is implemented.

– Prof. Edwards’ story has big implications for not just copyright, but also the StopBadware (regarding block lists and how to run a fair and transparent appeals process) and ONI (regarding Internet filtering and how it works) research projects we’re working on. Prof. Edwards’ conclusion, though, was upbeat: the ISPs she’s interviewed had a clear sense of corporate social responsibility, which might map to helping to keep the Internet broadly open.

For much better coverage than mine, including photographs, scoot over to ICTology.

Interview with Urs Gasser

The Berkman communications team has been conducting a series of interviews with our fellows. The interviews are written up and posted to the Berkman website. The most recent interview is with Prof. Dr. Urs Gasser, a faculty fellow and the director of a research center at the University of St. Gallen. His center — along with a few others, like the OII, the Citizen Lab in Toronto, and Dan Gillmor’s citizens’ media center — has become one of the key international partners of the Berkman Center in carrying out our mission.

An excerpt from the interview:

“Q: Have European markets taken a different approach than the U.S. towards regulating digital copyright? Is there an attempt being made to approach digital rights issues from a global perspective as opposed to a nation/market-specific point of view?

“Urs: Painted in broad brushes, it is fair to say that the U.S. and European copyright frameworks follow similar approaches as far as digital rights issues are concerned. This doesn’t come as a big surprise, since important areas such as, for instance, the legal protection of technological protection measures have been addressed at the level of international law – e.g. in the context of the WIPO Internet Treaties. However, the closer you look, the more differences among the legal systems you will find, even within Europe, where copyright laws and consumer protection laws, to name just two important areas, vary significantly if you move from – say – Germany to the U.K. as our Berkman/St. Gallen studies have demonstrated. But from the “big picture perspective” you are certainly right, there is a global trend towards convergence of digital copyright law, driven especially by TRIPS and the WIPO treaties, but also (and equally important) by bilateral free trade agreements.”

For more on Urs’ center and his colleagues, check out the Research Center for Information Law at the University of St. Gallen (I am proudly a member of its Board), as well as Daniel Hausermann’s blog.

Celebrating Those Who Blog the Vote

This afternoon, we’re welcoming all of you who are covering the 2006 Massachusetts campaign cycle to a reception in your honor at the Berkman Center for Internet & Society at Harvard Law School. The reception, totally informal, will run from roughly 5 to 6:30 p.m. at 23 Everett Street, Cambridge, MA. No matter if you’re for Healey/Hillman or Patrick/Murray, whether you want yes or no on 1, 2, or 3, or if you’re still undecided, please join us! To contact the Berkman Center, click here.

Good tools in more languages = great

Bravo to Google for making its Gmail service accessible via an Arabic and a Hebrew interface (via Khaled).  Fun to hear it, too, directly from one of the engineers.
This translation step, which puts Gmail at 40 languages, is so essential to the use of the Internet in a way that will improve lives generally, enhance productivity, promote cross-cultural understanding, and positively affect democracies. It makes me cringe, the extent to which English is the lingua franca of the web.
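As a side note on what shipping an Arabic or a Hebrew interface involves, here is a tiny, hypothetical Python sketch (none of this is Google’s code, and the message strings are my own rough translations): a locale-keyed message bundle plus a right-to-left flag, which is roughly the shape of the problem for every one of those 40 languages.

```python
# Hypothetical sketch of interface localization: pick a message bundle by locale
# and mark right-to-left scripts such as Arabic ("ar") and Hebrew ("he") so the
# UI can lay text out correctly. Falls back to English for unknown locales/keys.
MESSAGES = {
    "en": {"inbox": "Inbox", "compose": "Compose"},
    "ar": {"inbox": "البريد الوارد", "compose": "إنشاء"},
    "he": {"inbox": "דואר נכנס", "compose": "כתיבה"},
}
RTL_LOCALES = {"ar", "he"}

def render_label(locale: str, key: str) -> tuple[str, str]:
    """Return (text, direction) for a UI label, falling back to English."""
    bundle = MESSAGES.get(locale, MESSAGES["en"])
    text = bundle.get(key, MESSAGES["en"][key])
    direction = "rtl" if locale in RTL_LOCALES else "ltr"
    return text, direction

print(render_label("ar", "inbox"))    # Arabic inbox label, 'rtl'
print(render_label("he", "compose"))  # Hebrew compose label, 'rtl'
print(render_label("fr", "inbox"))    # falls back to English, 'ltr'
```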

Computing and education

I’m in the computer room at a grand old hotel in New Paltz, NY, the Mohonk Mountain House, fretting about what to say to a group of school business managers gathered here under the banner of the NYSAIS. I’m here to talk about computing and education. (At the Berkman Center, this topic is one of our three core thematic areas of inquiry, along with Internet & content issues like IP and Internet & democracy. Charlie Nesson, JZ, and Colin Maclay do a much better job than I do in keeping this issue in the foreground of our work.)

The best part about attending a similar event last fall was meeting several inspiring and insightful teachers. Some of them not only blog themselves, but also think hard and well about computing and teaching. One of those teachers is Arvind Grover, whose blog I was scanning by way of research for some of those inspired thoughts I recall him having. For one, he thinks that “We need to be training our students to be problem solvers, not fact-repeaters. I advocate for computer science starting lower school and going all the way through college. The effect of technology on the world has been dramatic and it continues. … If your school does not have a computer science program, you must ask yourself why not? If your school does have a computer science program, you must ask yourself is it the right one?” He refers us to a ComputerWorld article on the future of computer science.

I agree. But I’m also puzzling over another, related question. If you are teaching today’s Digital Natives but not using technology to do so, why not? And if you are, what’s your purpose in doing so? You may well have a good reason NOT to use computing in any way in the teaching process. A professor at Harvard Law School, Elizabeth Warren, makes a compelling case about how she teaches using the Socratic method and the extent to which that method is about a highly focused, person-to-person exchange in the classroom (with associated benefits to onlookers who are not looking at IMs and smirking about what someone just sent them). Absent a specific pedagogical reason of this sort — and there are many — I think any educator, at any level, has to ask whether they are in fact engaging students in the digital environment in which a large percentage of their students immerse themselves. That does not mean everyone has to teach computing, or the law of computing, or some off-shoot of it. But I do think that it’s becoming increasingly important to join the issue in schools at all levels. What is your strategy for using computing as part of the teaching and learning process? If you ignore computing, are you effectively preparing your students for where they head next? Are you engaging them where they are right now? Are you, and your students, contributing to the emerging digital commons of shared knowledge? And are you making the most of your community’s digital identity? Charlie Nesson asks, “What’s your cyberstrategy?” The answer might be no, or I don’t have one, or I don’t care, but failing to ask the questions strikes me as the big potential mistake.