Google in China

I’m looking forward to a day of watching the fallout from the Google-China-HK announcement yesterday.  I give Google an enormous amount of credit for the approach that they are taking; it’s a worthy effort to meet what they consider their human rights obligations while seeking to engage in the China market, both of which are laudable.  I’ll be surprised, though, if the Chinese government doesn’t decide fairly promptly to block the redirects from Google.cn to the uncensored Hong Kong site.  This chess game also demonstrates the importance of (and challenges inherent in) the work of the Global Network Initiative, of which Google is a member, along with Microsoft and Yahoo!

(For more info: See generally the OpenNet Initiative site, blog, research papers, and so forth online.  There’s also a chapter on this issue, written by our colleague Colin Maclay, in the forthcoming OpenNet Initiative book Access Controlled, due out within the month from MIT Press, as there is in our previous book, Access Denied, which is available online.  Here’s a CNN piece on Google and China in which I make a cameo, one of many video clips on this topic.  And Rebecca MacKinnon’s blog is always informative on these topics.)

Allison Hoover Bartlett, The Man Who Loved Books Too Much

For Christmas, my good friend and mentor John DeVillars gave me a copy of “The Man Who Loved Books Too Much” by Allison Hoover Bartlett.  (There were several messages embedded in the giving of this gift, I’m clear on that much.)  I’ve been eager to read it, but it was fairly far down on the stack of books on my bedside table until last night.  It was worth the wait: a lot of fun and readable in a few nights, if you’re willing to stay up late.  It’s apparently non-fiction, but it reads almost like a mystery novel — about Bibliomania.

Bartlett tells the story of John Charles Gilkey, who steals a great many rare books, and the rare book dealer (Ken Sanders) who helps to track him down and warn his fellow dealers of Gilkey’s misdeeds.  Bartlett clearly spent an enormous amount of time reading about book collectors, dealers, and thieves and talked to a good many of them, too.  She tells the story of Gilkey, Sanders et al. in a manner that’s at once serious and reflective, and with a welcome sense of humor throughout.  Bartlett gets deeply into the topic herself through the research and writing process, which comes through clearly in the text in an appealing, human way.  She refers in the notes on p. 263 to a state of “research rapture,” which resonated for me.  For anyone who loves books and bookstores (or libraries, for that matter, which make a cameo appearance near the end, especially), it’s an interesting, fun (and quick) read.

For those for whom the book is not enough on this topic: I also enjoyed the Library Thing interview with the author.

Joel Reidenberg: Transparent Citizens and the Rule of Law

Prof. Joel Reidenberg (Fordham Law; director of the Center on Law and Information Policy) starts out a luncheon talk at the Berkman Center’s Law Lab with a provocative opening theme: Transparency challenges the very existence of the Rule of Law. Some hasty/live-blogged notes follow:

As a practical matter, in the cloud era, we’ve lost the practical obscurity of information about all of us.  What used to exist about us, but in private/not-that-accessible form, is now accessible and associate-able with an individual.  We now have transparent citizens, Reidenberg contends.

How does this challenge the rule of law, he wonders?  The data that are included in the TIA and other state databases come from third parties, outside the warrant process (the third-party data problem).  The state doesn’t have to spend the same amount of time or money to gather a great deal of information about each of us.  Fusion centers are another prime example of this phenomenon, Reidenberg argues.  Fusion centers use data from private-sector parties to determine who should be a suspect, as opposed to the historical approach of determining suspects and then gathering data.  The state does not have to adhere as faithfully to the rule of law in its law enforcement practices.

We have a transparency challenge, says Reidenberg.  Enhanced cryptography can allow people to carry out acts anonymously, he points out; ditto for the Cohen case in New York with Blogger, Juicy Campus, and so forth.  People are hiding behind anonymity to carry out wrong-doing.  As the public perceives more and more surveillance, wrong-doers will use more robust tools to maintain anonymity — making it harder for the state to catch the real bad guys and to protect the rule of law among the citizenry broadly.

There’s a transparency challenge to the rule of law, as well, Reidenberg argues.  Consider the dossier on Justice Scalia that Prof. Reidenberg’s class pulled together.  Secondary use is a major issue when it comes to public data.  Students could easily pull together a dossier on a major figure by using the transparency that government insists on with respect to information about each of us.  A related example: social networking and judges, as in the case of a Staten Island-based judge who is friends with those who appear before him.  (Is there a difference between LinkedIn and Facebook?  And/or: do we really want our judges “unplugged” if we tell them they cannot be friends with anyone online?  What about the jury pool and public friendship networks?  Lawyers googling potential jurors outside of voir dire?  Puts me in mind of Prof. Charles Nesson’s American jury seminar this semester at HLS.)

Reidenberg concludes with the “re-instantiation of the Rule of Law.”  We need to focus on a norm of data misuse, he argues.  Knowledge for some purposes is fine; knowledge for other purposes is not OK.  Reidenberg’s argument here points toward seeking to re-engineer practical obscurity into the technical network.  He cites Helen Nissenbaum’s contextual integrity argument as support for this concept.  (It’s much in the spirit of our work on the Youth Media Policy project, where we’re trying to translate the data about youth online digital media practices into good policy proposals.)

This talk by Reidenberg proves to be extremely provocative to the Law Lab crowd assembled here.  A spirited discussion starts up during the question period.  A few examples of the push-back: John Clippinger, the Law Lab’s co-director, says that he agrees with Reidenberg’s analysis but disagrees in terms of what to do about it.  It’s the wrong time to prescribe solutions, Clippinger charges, especially with norms in flux as they are right now.  Julie Cohen (a Georgetown law prof who is a visiting professor here at HLS this year), who spoke here in the Berkman Center lunch series just last week, talked about the virtues of “semantic discontinuity” in response to similar privacy concerns.  The communication process leads to a much finer granularity of information as well as new forms of metadata creation and re-assembly, which in turn makes it difficult to operate in proper contexts, argues Urs Gasser (in a quite wonderful series of questions).  Joel’s limited-purpose knowledge regime, Gasser argues, is up against a loss of the rule of law (though Clippinger thinks you don’t have to frame it that way; Cohen pushes on what he means by the “rule of law”; and Clippinger comes back to a private-law, mesh-of-contracts-type regime as preferable).  Professor Harry Lewis (SEAS at Harvard) wants to know how all this will affect the extensive private surveillance regime and whether law should come into the picture to restrict the use of these privately-collected data.  (My question: would you close the third-party data loophole with respect to state access to privately-collected data without 4th Amendment protections?  Yes, said Reidenberg.)

Just based on the last few weeks of lunches around the Berkman Center, I’m coming up in my mind with a dream seminar on these topics.  For starters, I’d have Joel Reidenberg, Julie Cohen, and Jonathan Zittrain; present each of them with a common set of hard Internet law problems; and ask them to apply their big-picture theories to their resolution.  I suspect we’d get some extremely interesting, and different, approaches to adjusting the law, technology, and norms to fit better with the digital age.  I can imagine there are others to invite to the party, too…

Julie Cohen: Configuring the Networked Self

At the Berkman Center, we are hearing a preview of key elements of Prof. Julie Cohen’s forthcoming book, Configuring the Networked Self.  Some hasty live-blog notes follow:

Prof. Cohen tells us that there are two disconnects that she starts with: 1) there are lots of invocations of “freedom” being floated around, but many of the results in the political and technical processes seem antithetical to the interests of the communities involved; and, 2) while the free culture debate is all about openness, it’s impossible (or at least difficult) to imagine how privacy claims may be contemplated in the context of all this openness.

What puzzles Cohen about these disconnects, and what has led her to major substantive and methodological claims: we make these laws and policies about freedom within the frame of liberal political theory, invoking terms like autonomy and freedom and presumptions like rational choice as the dominant terms of the discourse.  We ought to be focusing instead on the experienced geography of the networked society, where people are living in cultures, living in ways that are mediated by technologies.  We don’t have very good tools to ask and answer those questions.  We’re led to start with the presumption that individuals are autonomous and separate from culture.  From there, it’s difficult to say how more or less privacy will result in meaningful, significant consequences for how we experience our culture and how political discourse works.

On to the methodological question: lots of people are working on these questions in related fields, and we in legal scholarship often don’t pay enough attention to what they are learning (say, in cultural theory, STS, other fields described by legal scholars in pejorative terms of “post-modernist” and otherwise).  We need to understand what Cohen calls “situated embodied users” and how they experience information technologies in order to inform law and policy in this field better.  Cohen’s “normative prior”: We should promote law and policy that promote human flourishing (network neutrality, access to knowledge, access to culture as precursors for participation in public life).  But Cohen also tells us that she parts company with those who expound this theory where they seek to embed it in liberal political theory.  We should reconcile — or live with — tensions in legal and policy problems by looking to these “post-modern” fields and ask what they can tell us.  We should ask what kinds of guarantees the law ought to provide.

Where does this process lead us?  Think about Access to Knowledge, Cohen says: it’s nice, but it doesn’t get you as far as you need for human flourishing.  It doesn’t guarantee you rights of re-use in creative materials or rights of privacy, for instance.  There are further structural preconditions for human flourishing that we need to ensure.  Two in particular: 1) operational transparency: it’s not enough to know what is being collected about you; you need to know how it’s going to be used; and 2) semantic discontinuity, a vital structural element of the networked information economy: in copyright, for example, you need incompleteness in the law and policy regime that affords room for play.  In privacy, you need space left over for identity play, for engagement in unpredictable activity.  In architecture, seamless interoperability is all to the good in some ways, but not good for privacy, for instance: data about you would then move around and around without your knowing about it.  Human beings benefit, Cohen argues, from structural discontinuity.

This is going to be a fascinating and important book.  And I’m eager to think through how Cohen’s claims relate to JZ’s in Future of the Internet once I’ve read Cohen’s new work.

Reader Privacy Event at UNC-Chapel Hill

Anne Klinefelter, the beloved law library director at UNC-Chapel Hill (you should hear her dean introduce her; really!), is hosting a Data Privacy Day event on reader privacy.  She makes the case in her opening panel remarks that, if we wish to translate library practices with respect to privacy into a digital world, we need to figure out how to translate not just law but also ethics.  Anne argues that the law needs updating to keep up with new research practices of today’s library users, especially as we shift from a world (primarily) of checking out books to a world (primarily) of accessing databases.  Her analysis of the 48 state laws with respect to user data privacy shows that the statutes vary in substance, in coverage, and in enforcement.  Anne’s closing point is a great one: if we’re in the business of translating these rules of library protection of user data, we need to bring the ethical code and norms along as well.

Jane Horvath (Google) and Andrew McDiarmid (CDT) take up the Google Books Search Settlement and its privacy implications.  Jane emphasized the protections for user privacy built into book search.  She also emphasized ECPA and the need to update it to protect reader privacy.  Google, she says, is “calling for ECPA reform.  It really is necessary now.”

Andrew described, diplomatically and clearly, the privacy concerns that CDT has with respect to the Google Books Search Settlement (which CDT thinks should be approved; EFF, the Samuelson Clinic, and the ACLU of Northern California have similar concerns, but oppose approval of the settlement).  The critiques that Andrew described are not limited to Google’s activities, he noted; Amazon and others need to address the same issues.  Andrew worries about the potential development of (too?) rich user profiles that may be the target of information requests from law enforcement and civil litigants.  Rather than regulate Google as a library, Andrew argues, we should focus on the kinds of safeguards that CDT would like to see apply.  The best recent restatement of Fair Information Practices is by the DHS, says Andrew.  Eight principles should apply: transparency, individual participation (including the right to correct one’s data), purpose specification, data minimization, use limitation, data quality and integrity, security, and accountability and auditing.  CDT would like to see Google commit to specific protections in alignment with these eight principles.

Sahara Byrne: Parents, Kids and Online Safety

Prof. Sahara Byrne, of the communications department at Cornell, is the Berkman Center‘s lunch series speaker today.  Prof. Byrne studies responses to Internet safety techniques.  She’s interested in the “recipes for disaster,” such as when parents love a given safety technique and kids hate it.  She’s a believer in psychological reactance theory: that when kids really don’t like something, they’re going to work hard to get around it.

Her methods: an extensive Internet survey of 456 parents, with matched child pairs (10–17 years old).  She asked parents how much they would support a particular tool, and kids how they would feel if their parent adopted this strategy.  Parents were asked more questions than the kids.

This is a fascinating and important study.  Her data are brand new and she’s still working through them and their implications.  The outcome of her study is especially of interest to some of us here at the Berkman Center because Prof. Byrne developed her survey in large part based on the public meeting’s output from the Internet Safety Technical Task Force and the Task Force’s final report.

A few of her findings from the matched pairs:

– Surveillance of kids’ online behavior by the technology/service provider is popular with parents and particularly disliked by kids.

– User-child empowerment strategies were popular with both parents and kids.

– Also, equally popular among parents and kids: when kids who were bad or mean were suspended from school.

Some of the important predictors of whether there will be disagreement or not with respect to a given matched pair:

– Parenting style can predict a great deal of the agreement/disagreement between parents and kids.  Households with good communication between parents and kids are likely to have the greatest level of agreement.  Even with authoritative parenting styles — a mix of good communication and clear parental decision-making — there’s still likely to be a challenge for parents in deploying some of these technologies to help kids stay safe.

– Values and religion were important variables.

– Boys tend to disagree with their parents more than girls.

If one buys the psychological reactance theory, the types of approaches that are most likely to work:

– Empowering children to protect themselves

– Giving the government and industry some responsibility for shielding kids from harmful information

What’s most risky in terms of strategies that may lead to the highest degree of disagreement:

– Co-viewing of information

– Parental access to what kids were looking at (tracking)

Parents are not that aware of what their kids are actually doing (Prof. Byrne showed statistically significant differences in several cases).

During the discussion phase, we learned about a promising cyber-safety approach underway at the Boston Public Schools, with funding from Microsoft.  It’s a student-run program called “Cyber Safety Heroes” (the previous name ran into an IP dispute with a well-known content company…).  I look forward to following it closely.

And stay tuned for the final, published version of this very helpful research!

Research Confidential and Surveying Bloggers

In our research methods seminar this evening at the Berkman Center, we got into a spirited conversation about the challenges of surveying bloggers.  In this seminar, we’ve been working primarily from a text called Research Confidential, edited by Eszter Hargittai (who happens to be my co-teacher in this experimental class, taught concurrently, and by video-conference, between Northwestern and Harvard). The book is a great jumping-off point for conversations about problems in research methods.

The two chapters we’ve read for this week were both excellent: Gina Walejko’s “Online Survey: Instant Publication, Instant Mistake, All of the Above” and Dmitri Williams and Li Xiong’s “Herding Cats Online: Real Studies of Virtual Communities.”  Both chapters are compelling (as are the others that we’ve read for this course).  They tell useful stories about specific research projects that the authors conducted related to populations active online.  In support of our discussion about surveys in class, these two chapters tee up many of the issues that we needed to raise in this conversation.  Gina also came to class to discuss her chapter with us, which was amazing.  (Come to think of it, I would also have liked to meet the two authors of the second chapter; they wrote some truly funny lines into the otherwise very serious text.)

In a previous class, we started with Eszter’s Introductory chapter, “Doing Empirical Social Science Research,” as well as Christian Sandvig’s “How Technical is Technology Research? Acquiring and Deploying Technical Knowledge in Social Research Projects.”  These two chapters were a terrific way to start the course; I’d recommend the pairing of the two as a possible starting point for getting into the book, even though they’re not presented in that order (with no disrespect meant for those who chose the chapter order in the book itself!).

While many of Research Confidential’s chapters bear on the special problems prompted by use of the Internet and the special opportunities that Internet-related methods present, the book strikes me as a very useful read for anyone conducting research in today’s world.  I strongly recommend it.  The mode of the book renders the text very accessible and readable: unlike most methods textbooks, this book is a series of narratives by young researchers about their experiences in approaching research problems, some of them related to the Internet and others not so technical in nature.  As a researcher, I learned a great deal; as a reader, I thoroughly enjoyed the book’s stories.

Harvard Library Report

Over the past nine months or so, a group of us have worked on a Harvard-wide Task Force to consider our library systems.  The report is being issued today by Harvard’s Provost, Steven E. Hyman, who chaired our Task Force.  Over the next year-plus, we will be working to implement changes in five key areas of the Harvard University library system.

Harvard is fortunate to have one of the great library systems in the world as a crown jewel.  The library system plays a central role in the intellectual life of our community, both as physical spaces and as resources for teaching and scholarship.  The 1,200 or so library staff at Harvard, as I’ve come to learn, are simply extraordinary in terms of breadth and depth of talent.  But we can do more with what we have, and we can position ourselves better than we are today for a future that will be “digital-plus.”

As Provost Hyman wrote about the report:

“The report of the Task Force on University Libraries is a very thoughtful document about an extraordinary system. But it is also a stark rendering of a structure in need of reform. Our collections are superlative, and our knowledgeable library staff are central to the success of the University’s mission. The way the system operates, however, is placing terrible strain on the libraries and the people who work within them.

“Over time, a lack of coordination has led to a fragmented collection of collections that is not optimally positioned to respond to the 21st century information needs of faculty and students. The libraries’ organizational chart is truly labyrinthine in its complexity, and in practice this complexity impedes effective collective decision-making.

“Widely varying information technology systems present barriers to communication among libraries and stymie collaboration with institutions beyond our campus gates. Our funding mechanisms have created incentives to collect or subscribe in ways that diminish the vitality of the overall collection.

“Libraries the world over are undergoing a challenging transition into the digital age, and Harvard’s libraries are no exception. The Task Force report points us toward a future in which our libraries must be able to work together far more effectively than is the case today as well as to collaborate with other great libraries to maximize access to the materials needed by our scholars.”

I am excited to work with members of the Harvard library community and many others — inside and outside the community — to build on the promise of this report and the Harvard library system.

Dawn Nunziato’s Virtual Freedom: Net Neutrality and Free Speech in the Internet Age

Dawn Nunziato, a law prof at George Washington University Law School, has written a helpful and interesting new book, entitled Virtual Freedom: Net Neutrality and Free Speech in the Internet Age.

Her focus in “Virtual Freedom” is — as the subtitle suggests — free speech on the net, framed primarily for the current net neutrality debate.  She compares two distinct conceptions of the First Amendment, one affirmative and the other negative, and argues forcefully for the affirmative approach.  In making out her argument, she recalls John Stuart Mill and Oliver Wendell Holmes (on the marketplace-of-ideas conception), through to Cass Sunstein (whose views get a great deal of airtime in the book) and Owen Fiss, among others.  Along the way, she takes up, fairly extensively, the core relevant doctrines: the state action doctrine, the public forum doctrine, the fairness doctrine, must-carry, and common carriage.  She also spends a good deal of time in the caselaw, carefully reviewing the matters one might expect to see, many of which predate today’s Internet: Marsh v. Alabama, Pruneyard, and other state action doctrine/shopping mall-type cases; the AP decision of 1945; Red Lion; Turner; Brand X; Carlin; AT&T v. City of Portland; and so forth.  She takes up several Internet-specific matters as well (such as Intel v. Hamidi, CDT v. Pappert, and the ICANN debates) and sets them in context.

Her bottom line is that Congress should pass a law (or direct the FCC) to prohibit broadband providers from blocking legal content or applications and from engaging in various forms of discrimination and prioritization of packets.  She argues, too, in favor of greater transparency by broadband providers when they do engage in selective passage of packets.  She suggests that perhaps we should regulate powerful search engines, such as Google, too.

Nunziato’s book made me think of two other books I’ve re-read in the past few weeks.  The first is Newton Minow and Craig LaMay’s Abandoned in the Vast Wasteland: Children, Television, and the First Amendment (1996), which takes up similar issues related to various conceptions of the First Amendment, though from the angle of protecting and supporting children.  The other is Jonathan Zittrain’s free-for-the-downloading Future of the Internet — and How to Stop It (2008), especially chapters 7 through 9, in which JZ takes up many of the same issues (changes in the public/private online and how we should think about “regulation” of online behaviors).

I enjoyed this book: it’s well-written and, just as important, I think Nunziato is, by and large, right as to her normative view.  Virtual Freedom: Net Neutrality and Free Speech in the Internet Age belongs on the bookshelf (virtual or otherwise!) of anyone working on broadband regulation, net neutrality, online censorship, and the like.