Paths Not Taken

EZ’s blog is always worth reading, but I found today’s post about the SDP particularly touching and revealing. He writes:

“I will admit, I still find something a bit disorienting about trying to advise PhD students. It’s become increasingly clear to me that I won’t be able to convince myself to return to school and complete a degree any more advanced than my BA. I find myself wondering, as I sit down to offer suggestions to soon-to-be-doctorate-holders whether I should preface my comments with, ‘You probably shouldn’t listen to a word that I’m saying, as I’ve never attempted to get research past an advisory committee, never structured a dissertation, and have almost no academic publications to my name.’ I’m perpetually thankful that Berkman creates an academic environment where these issues almost never surface, but there’s nothing like a building filled with smart, young doctoral students to make one wonder about one’s own academic path not taken.”

SDPers, read EZ’s blog, but don’t be fooled by this paragraph. I can’t think of a more misleading preface for a group of (clearly wonderful) mid-stream graduate students in Internet-related studies; I trust he didn’t actually offer it. If anything, I think we should all pay particular attention to EZ in the academic environment. His work, to me, is proof positive that there’s little or no correlation between the number of years spent in graduate school and the quality of academic insight, at least in our field. That’s not to say that a doctorate of whatever flavor isn’t worth doing; it is, in many, many cases. But EZ’s career is one to examine, and his path taken one to consider, if you have that kind of talent.

(I just wait with bated breath for that book you’re writing, Ethan.)

Sunshine Hillygus on Internet and Campaigns

Prof. Sunshine Hillygus is presenting her study of the persuadable voter here at SDP 2007. She has a book on this research coming out shortly from Princeton University Press. I asked her what the most surprising or biggest finding of her book is. She said that she is trying to get away from the question of “do campaigns matter?” toward a more nuanced view of how the various actors (including voters and the candidates) use new information in ways that change their minds, and one another’s minds, over the course of a campaign. She also alluded to the conclusion of the book, in which she is “sounding the alarm” about the hyper-targeting of voters based on the aggregation of new data elements and the use of these data to target individual voters in ways that raise privacy issues. I am eager to read the book!

Internet Filtering Session at the SDP 2007

This morning — at the Summer Doctoral Program in Cambridge, MA — we’re taking up the topic of Internet filtering and the work of the ONI (and what we’ve written about it in our forthcoming book from MIT Press, called Access Denied). Some of the questions that students raised about the topic after reading our work on it:

– One student says that her father read Dr. Zhivago, censored at the time in his country, in a copy in which each page was available to him only as a photograph. One of her points, I think, is that history repeats itself and that we should understand where this story is a repeat and where it is new and different from previous stories of censorship. One student suggests, as a follow-up: let’s test the hypothesis that the Internet is revolutionary. A second of her points, I take it, is that people will find clever ways around censorship.

– How do you measure filtering of the Internet and then analyze what you’ve learned in a way that informs decision-making? (A rough sketch of the measurement idea follows this list.)

– How do you measure the impact of filtering on access to knowledge?

– Do we need ISPs that act like common carriers and never filter?

– What is the role of large countries as neighbors to smaller countries, a question raised by the possibility of in-stream filtering?

– What is the role of the commercial filtering providers?

– How can we determine whether the practice of Internet filtering violates a universal right to access information?

– How can we study how copyright and trademark owners carry out filtering?

– Is there legitimate filtering? (A student posits that there is, including via search engines. This concept invokes Urs Gasser’s provocative post from the ONI conference on “best practices in Internet filtering.”)

– How do we study the circumvention piece and include it in our story? What about developing the tools of circumvention?

– How do you overlay cultural differences on this survey?

– To what extent does control of communications facilitate control of other institutions, tools, and the like? To what extent is control of communications a priority for a given authority?

– When does one state have the right and/or ability to influence what another state does in this domain?
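
On the measurement question above, here is a minimal, illustrative sketch of the basic comparison idea: fetch a set of test URLs from a vantage point inside the country being studied and again from an unfiltered control network, then compare the two sets of results. This is a toy example of my own, not the ONI’s actual methodology or tooling; the URLs, function names, and error handling are assumptions for illustration only.

```python
# A toy sketch only (an illustration, not the ONI's methodology or tools):
# fetch each test URL and record what comes back. Run the same script from
# inside the country being studied and from an unfiltered control network,
# then compare the two result sets. The URLs below are placeholders.
import urllib.request
import urllib.error

TEST_URLS = [
    "http://example.org/",  # stand-in for a potentially sensitive site
    "http://example.com/",  # stand-in for an innocuous control site
]

def check_url(url, timeout=10.0):
    """Fetch one URL; record status, final URL (after redirects), and size."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
            return {"url": url, "status": resp.status,
                    "final_url": resp.geturl(), "bytes": len(body)}
    except urllib.error.HTTPError as e:
        # The server answered, but with an error code (e.g., a 403 block page).
        return {"url": url, "status": e.code, "error": "http_error"}
    except (urllib.error.URLError, TimeoutError) as e:
        # Timeouts and connection resets are classic fingerprints of filtering,
        # but they also happen for mundane network reasons.
        return {"url": url, "status": None, "error": str(e)}

if __name__ == "__main__":
    for result in map(check_url, TEST_URLS):
        print(result)
```

Even this naive version shows why the analysis step is the hard part: a timeout from one vantage point might be censorship, congestion, or a dead server, so repeated tests, control comparisons, and local context are all needed before drawing conclusions.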

See Daithi and Ismael for more (and better) notes than what I’ve posted here.

OpenNet Initiative Conference, Study Release This Week

We’re gearing up this week to host our first big Internet filtering conference, which is already oversubscribed. The event is taking place in Oxford, England, hosted by our partners at the Oxford Internet Institute, in cooperation with our other partners at the University of Toronto’s Citizen Lab and the University of Cambridge’s Advanced Network Research Group at the Cambridge Security Programme. At this event, we will release the full set of data from the first-ever global survey of Internet filtering. In many ways, this release is the culmination of five years of work, since the ONI partners began testing for Internet filtering back in about 2002. The work is thanks to a number of grants, most notably a $3 million grant to ONI from the MacArthur Foundation, as well as key gifts from OSI, IDRC, the Ford Foundation, and others.

Feel free to add a question for discussion to the online question tool.

An even more complete version of this story, including chapters that set the data in context, will appear in our book, Access Denied: The Practice and Policy of Internet Politics, to be released this fall by MIT Press.

John Mayer of CALI at Berkman

The executive director of The Center for Computer-Assisted Legal Instruction (CALI), John Mayer, is a totally wonderful guy. He’s funny and smart and cares about cool technologies and access to justice — all good things. That’s especially good news for us, since he’s giving the Berkman Center luncheon series talk today. If you’re familiar with CALI, you know what an amazing resource he and his colleagues have created for law students and those who teach them. If you’re not, it’s well worth a look.

In their own words: “CALI is a U.S. 501(c)(3) non-profit consortium of law schools that researches and develops computer-mediated legal instruction and supports institutions and individuals using technology and distance learning in legal education. CALI was incorporated in 1982 and welcomes membership from law schools, paralegal programs, law firms and individuals wishing to learn more about the law.”

One of the things they are up to is eLangdell. The idea is to make the legal casebook of the future. Rather than each of us buying a $120 Evidence casebook, say, that comes out every four years, eLangdell will let all of us collect the cases that we teach in our respective courses and rip-mix-burn our syllabi and teaching materials. His vision: these casebooks could serve a law professor and her students at a fraction of the cost of traditional casebooks and fund ongoing development of the system and the course materials. The parallels to H2O Playlists are obvious. (One thing I wonder: why hasn’t someone set up a wiki server that lets people create syllabi for the courses taught in every high school in America?)

Not everything they do at CALI is about legal education in the strict sense. One of the ideas that he’s talking about is legal aid case management systems, an important concept for the provision of legal services to the poor.

I think some of the most interesting things he’s talking about have to do with taxonomies. Fortunately, The Man on taxonomies, David Weinberger, is right here next to me, tap-tapping away on his little ThinkPad — hopefully, for the rest of us, he is blogging away. Look to him for insights on this score, as always.

In response to questions, John says he’s very big on “legal literacy.” He points to a CALI service called Learn the Law that lets anyone get access to CALI lessons if they want to learn more on a given topic of law. He notes that in some areas, like intellectual property, we all need to know something about the law, whether we’re lawyers or not.

Interview with Urs Gasser

The Berkman communications team has been conducting a series of interviews with our fellows. The interviews are written up and posted to the Berkman website. The most recent interview is with Prof. Dr. Urs Gasser, a faculty fellow and the director of a research center at the University of St. Gallen. His center — along with a few others, like the OII, the Citizen Lab in Toronto, and Dan Gillmor’s citizens’ media center — has become one of the key international partners to the Berkman Center in carrying out our mission.

An excerpt from the interview:

“Q: Have European markets taken a different approach than the U.S. towards regulating digital copyright? Is there an attempt being made to approach digital rights issues from a global perspective as opposed to a nation/market-specific point of view?

“Urs: Painted in broad brushes, it is fair to say that the U.S. and European copyright frameworks follow similar approaches as far as digital rights issues are concerned. This doesn’t come as a big surprise, since important areas such as, for instance, the legal protection of technological protection measures have been addressed at the level of international law – e.g. in the context of the WIPO Internet Treaties. However, the closer you look, the more differences among the legal systems you will find, even within Europe, where copyright laws and consumer protection laws, to name just two important areas, vary significantly if you move from – say – Germany to the U.K. as our Berkman/St. Gallen studies have demonstrated. But from the “big picture perspective” you are certainly right, there is a global trend towards convergence of digital copyright law, driven especially by TRIPS and the WIPO treaties, but also (and equally important) by bilateral free trade agreements.”

For more on Urs’ center and his colleagues, check out the Research Center for Information Law at the University of St. Gallen (I am proudly a member of its Board), as well as Daniel Hausermann’s blog.

JZ's Groklaw FAQ (and law review article smoothie)

Prof. Jonathan Zittrain has responded to the enormous outpouring of Groklaw reader comments on his paper, The Generative Internet, with an FAQ posted back at Groklaw.

For instance: wondering how Blackberries and other mobile devices fit into JZ’s argument about the locked-down PC future we face? Here’s an exchange that picks up on that thread. JZ writes: “This is a serious challenge to my argument that after years of general purpose PC primacy, the momentum is shifting in favor of limited-use devices. The next few generations of information appliances will be telling, I agree. My sense, though, is that these devices are less products than they are services. Mess too much with an iPod, and the next iPod update (needed, of course, to work with the next gen iTunes music store, and to coordinate with one’s Nike sneakers so that one can upload running times to the iPod) will say, ‘Sorry, this iPod’s functionality has been modified, and it will no longer work with iTunes.’ Or it will simply overwrite one’s own adaptations.”

Also worth reading: a cool post from CALIopolous on JZ’s article and its Groklaw reaction, complete with a picture of a blender.

(A curious side-note: the article itself, on SSRN and in the Harvard Law Review, continues to climb the SSRN download rankings, having just broken into the top 1000 articles.)

Nick Anstead's reax to Generativity

Oxford Internet Institute SDP 2006 participant Nick Anstead has a reflective post on what he thinks JZ’s Generativity theory might mean. Nick points out some terrific problems it raises, then concludes (and I agree), “Generativity is a compelling and very attractive theory. As well as giving a compelling answer, I think its greatest strength is that it offers a powerful framework for asking many further questions about what exactly we desire in Internet and ICT development.”

Nicholas Carr review of JZ's Generative Internet piece

There’s a thoughtful review/critique, plus commentary, of Prof. Jonathan Zittrain’s Harvard Law Review article, The Generative Internet, on Nicholas Carr’s blog. Mr. Carr writes of JZ’s conclusions: “Zittrain concludes that the best course is to ‘try to maintain the fundamental generativity of the existing grid while taking seriously the problems that fuel enemies of the Internet free-for-all. It requires charting an intermediate course to make the grid more secure — and to make some activities to which regulators object more regulable — in order to continue to enable the rapid deployment of the sort of amateur programming that has made the Internet such a stunning success.’ It’s not a question, in other words, of whether there will be limits. There will be. It’s a question of where those limits will be imposed and who will impose them.”

Carr points to the fabulous Ethan Zuckerman’s must-read review of JZ’s piece as his inspiration.

A cool example of dialogue about serious scholarship happening in public, online.