Intellectual Property Strategy: Book Launch

I’m excited to be launching a new book, Intellectual Property Strategy, tonight at Harvard Law School.  (If you’re in Cambridge, MA, USA, please feel free to come by Austin Hall East at HLS at 6:00 pm this evening for the event and a reception thereafter or tune into the webcast.)

The discussion tonight will cover two topics: first, the substance of the book; and second, the format of this book, and possibly of others to come.

On the substance of this book, I will make a few claims.  The basic claim is extremely simple: organizations should see intellectual property as a core asset class rather than as a sword and a shield, as the traditional mantra would have it.  I argue also that IP strategies should be flexible; geared toward creating freedom of action; and inclined toward openness where possible, at least in the information technology field.  These basic claims apply both to for-profit and to non-profit organizations.  There’s a chapter in the book devoted to the special case of the non-profit, which often needs an IP strategy just as much as a for-profit firm does.  The flexible use of IP can support the missions of non-profits in important, distinct ways.

– The smartphone OS wars are the most obvious example of how IP matters.  It’s big business for huge firms.  Consider the acquisition by Google of Motorola Mobility for $12.5 billion (thanks, SJ, for the typo-catching) in cash in August, 2011.  The hundreds of millions of dollars paid to Intellectual Ventures in licensing fees are another example of the growing importance in commerce of this field of law.  The multi-billion-dollar markets for the licensing of trademarks and patents in a broad range of fields are yet another.  These examples make the case for treating IP as an asset class.  And the work on IP strategy should be seen as core to the work of the organization, not something to be left only to lawyers outside the firm.

– There is a strong connection between our work in youth and media and the matter of intellectual property strategy.  We know that youth attitudes toward intellectual property are shifting rapidly over time.  The recent passage of the America Invents Act of 2011 points to the dynamism of the space.  These changes demonstrate the need for flexibility in IP strategy over time.

– The use of IP in libraries and museums is a third important case.  I’ve been working actively in the field of libraries over the past several years, including service as director of the HLS Library and chairing the work to develop a Digital Public Library of America.  In the case of libraries, the question of how much of our collections to digitize is an important problem.  My view is that the digitization, contextualization, and free distribution of our library holdings is a way to use IP to fulfill the specific mission of a non-profit devoted to access to knowledge.

I am especially grateful to colleagues Terry Fisher, Eric von Hippel, Lawrence Lessig, Phil Malone, and Jonathan Zittrain, who will respond to the book and presentation.  Also, the book project would be nowhere near as much fun, or as good, without the partnership of June Casey, my colleague in the Harvard Law School Library, who has been nothing short of extraordinary.  And Michelle Pearse, Amar Ashar, and their teams have been wonderful in setting up this event.  It’s an amazing group of colleagues!

On the topic of the format, I am excited to talk about multiple versions of the book.  1) There is, of course, the traditional form of the book that someone can touch, pick up, and read in the ordinary way.  There’s also the digital form of that same book, which can be rendered on a Kindle or an iPad and gives more or less the same experience.  2) There’s a form of the book that is like an Extended Play album, or a DVD that has “extras” at the end: on the MIT Press web site, one can access video interviews and a series of case studies, for instance, which expand on the argument of the book.

And 3), most experimentally, I have been working with a great team on a distinct version of the book that functions as an iPad application.  The idea is to embed these case studies and videos directly into the text of the main form of the book.  The iPad app version allows for many different ways through the text; connections to the open web; and loads of fun and interesting embedded links.  The idea is to rethink the format of the eBook from the ground up, to add in born-digital elements by design rather than the equivalent of pouring a PDF into an e-reader format.  It’s still in beta mode, but we will demo it tonight.

This short book is part of the MIT Press Essential Knowledge series.  It’s been fun to work with Margy Avery and her team at MIT Press on this experimental project.

Please join us if you are free!

Future of Law Libraries: The Future is Now?

A group of us is gathered today at Harvard Law School for a conversation about the future of legal information, libraries, and the law itself.  It’s a fun and diverse group — about 150 strong — in Austin Hall’s north classroom.  The wiki for the conference has the schedule, the participants, and a lot of great suggested readings in a wide range of formats.  I’m intending to live-blog here, with the usual typos and caveats and imperfections, as much of the day as I can.

Robert Berring is the opening keynote speaker.  He started with references to John William Wallace, and an article on Wallace by Femi Cadmus (now of Yale, about to go to Cornell to be the law librarian there) that appeared in the Green Bag.  Berring also recalls the work of the late Morris Cohen, who was the law librarian of both Yale and Harvard.  Forty years ago, Cohen called upon the profession to step back and to reflect on where we stand.  One of the books that Berring has recently read: Keith Richards’ autobiography.  Richards cared about the quality of the music.  And from there, to Confucius: the understanding at a deeper level of an entire way of life.  We need to work toward something that we’ve been working on all along, Berring said.  Librarians have always been, and are today, the great translators of legal information.  The big change of the recent decades: the culture of the book is not the culture that we live in today.  Books, now, have to justify their existence: they make sense and work for certain purposes, but now have to prove that they are the right format.  Librarians, too, will persist: we will justify our existence, too.  What we’ve been about: providing access to legitimate, stable information to the people who need it, as the translators.  Provocative closing thoughts: the legal education field is on the verge of enormous change, and librarians will need to be there to hold people’s hands as casebooks disappear, as the format of all these bits of information changes, as the profession changes.

Carl Malamud and Joe Hodnicki lead the first session.  Carl cites Robert Byrd as his primary source for law and legal information.  As Byrd did, Carl re-tells the story of the Twelve Tables, a core element of the Constitution of ancient Rome.  The key part of the story: a demand for the codification of the law.  The beginning of written law, Malamud said, stemmed from this process and represented the true formation of the republic.  The writing-down of the law and its safekeeping, Carl says, has become the job of the people.  Law libraries risk becoming a 7-11; instead, we should be the keepers of the Twelve Tables.  Our law libraries are not active in maintaining the corpus of American legal information, Malamud says.  Why have we not scanned the 25 million pages of Supreme Court briefs?  Why do we have $0.08-per-page access to legal materials and state-level copyright over law?

Joe Hodnicki responds to Carl by describing a cultural divide between the legal documentation community and the law library community.  Print is just a technical accident that we’ve lived with for several hundred years; text, by contrast, is enduring, Hodnicki tells us.  He points to the duopoly of Lexis and West, with their huge corpuses of text.  Print, today, is sold at a price that will push it out of the marketplace, Hodnicki claims.  Fastcase is different, Joe says (looking directly at CEO Ed Walters).

Richard Danner starts up the Open Access session.  He provides us an update on our collective progress on implementing the Durham Statement.  He emphasizes that most scholars (68%) would publish in a law journal even if it were not in print, whereas 32% said that print was still important to them.  Law journal editors expressed concern that, in a competitive environment, they would lose that 32%.  Who will drive the movement toward electronic publishing for legal scholarship, Danner asks, given that student editors are in place only for a few years?  Even if they are committed to developing an open scholarly information environment, they often only get to that perspective late in their year or so in leadership.  Deans have not been strong leaders so far, even though in the long term they (and their schools) would benefit.  The law reviews of a few top schools (Harvard and Yale, e.g.) could tip over to open access, and that might do it — but these top journals are today still making some money from print subscriptions.  Prof. Danner ends by pointing to cross-tabs showing that younger scholars are less likely to worry about publishing in print, which may be good news for open access to legal scholarship in the future.

June Liebert responds to Dick Danner’s opening about open access with a look at where we are today.  It costs law schools $25,000 to $100,000 per article (citing Prof. Richard Neumann).  She’s got an amazing set of five practical ideas for what we can do and control as law librarians and law faculty: 1) adopt a new library publishing paradigm; 2) build institutional repositories; 3) focus on born-digital documents first; 4) stop subsidizing journals in print — buy or print only where it makes economic sense; and 5) have faculty partner in the scholarship lifecycle.

Robert Darnton — eminent scholar and teacher of history, Harvard University Professor, and University Librarian — kicks off the last pre-lunch session with a description of the Digital Public Library of America (DPLA).  Prof. Darnton tees up and debunks a series of myths about the DPLA: it’s *not* 1) utopian; 2) intended only to serve college professors; 3) cooked up at Harvard and elitist; 4) a threat to public libraries (it is rather a complement); or 5) an anti-Google Books Search effort.  The DPLA is instead meant as a broad-based, open process and platform that will serve public libraries, academics, and individuals alike.

Siva Vaidhyanathan of Virginia responds to Bob by describing his idea for a Human Knowledge Project.  Side note: With my DPLA hat on, I am of a mind that the DPLA is one part of the Human Knowledge Project (HKP); if we were to stitch together, at the layer of open linked data, all the national and regional efforts like Europeana, we would have built just such a project.  The dream, Siva says, is to provide universal, comprehensive access to knowledge.  Siva says that the Human Knowledge Project is a 50-year project, whereas the DPLA is a 10-year project.  To make the HKP happen, we need to coordinate and to compete; we need interoperability and open linked data; we need to emphasize search standards within and across these systems; we need to get serious about governance; we need global copyright reform.  The HKP ideals are high and broad and important and long-term — as well as achievable, Siva argues.  Very inspiring.

For the lunchtime keynote, Michelle Wu, Georgetown’s new law library director and professor, is making the case for “Building a Collaborative Digital Collection: A Necessary Evolution in Libraries” (forthcoming, Law Library Journal).  She says that Section 108 and a format-shifting argument make possible her proposal for shared print and scanned resources.  Librarians are adaptive, she says, and critical of the existing products that are available.  If we can do it better, we need to get off the sidelines and drive information policy.  Librarians should be fighting for copyright reform, in particular, Wu says.

After an un-conference break, we’ve re-convened to talk about hacking the casebook.  Our great colleague Jonathan Zittrain (JZ to those in the know) is in New Hampshire on vacation (his “first in ten years,” as he reports), so I played a video presentation that he prerecorded; it’s available online.  JZ’s talk, as you’ll see, is about the “hack the casebook” project to reconceive and rebuild the law school teaching casebook from the ground up.  It’s built off of the H2O project and will be the torts casebook that JZ will teach from this fall.

John Mayer, Executive Director of CALI, responds by talking about the eLangdell project.  John recalls a 2006 speech that he gave at Nova Southeastern Law School called “rip, mix, learn” on similar topics.  Law students spend about $1,000 per year on their books.  One of the tricks associated with this project is that faculty actually don’t agree on (at least) four things: the definition of a casebook; the definition of a chapter; copyright issues; and quality assurance.

Kathleen Price, professor emerita of law at the University of Florida Levin College of Law and long-time leader of the law library field, leads the final session.  Professor Price urges the law librarian community to take pleasure in the service we provide and the partnership among librarians, faculty, and students of law.  The law library profession is in fact a young one: it goes back not even a full century, Price argues, to just pre-WWII.  This first group, Price says, were the Brahmins.  Post-WWII, a new group entered the profession: outsiders who were teachers, who created teaching materials and bibliographic materials, and who made foreign, comparative, and international law at LC something we could work with.  The group that entered the profession in the mid-1970s was also a crew of “outsiders,” including women who were excluded from the important law firms of the day (“we already have our woman…”).  This group also became very successful teachers — the generation of Bob Berring, Kathie Price herself, and others falls in this group.  Rising tenure standards have caused the law librarians since this generation to turn to scholarship of novel sorts (blogs, tweets, creation of institutional repositories) as well as to fundraising and business responsibilities that are increasingly significant.  Who will replace those who are now coming up to retirement?  Three possible models: 1) faculty (or firm) services types; 2) the new technology librarians; and 3) foreign, comparative, and international law library specialists.  We are in a moment of flux in the field, Price says, as more and more people are interested in East Asian and African law, especially, as well as Latin American and Eastern European law.  These positions, Price notes, are all public services librarians.  We have to look at whether we can give up certain kinds of cataloging, especially if we can move metadata to the cloud and do it only once.
Price concludes by asking a series of very hard questions about the future of the AALL as the primary source of continuing education for our field; the kinds of skills needed for future hires; and the kinds of teaching that make sense for law librarians.

Sarah Glassmeyer, faculty services librarian and assistant professor of law at Valparaiso University School of Law, responds to Prof. Price.  We need to work with people who are “not like us” — she cites both Carl Malamud and, well, me (a non-librarian).  Meg Kribble also gets a nice shout-out as a future law library leader.  Tom Bruce (not a lawyer or a librarian) gets a shout-out as a good mentor.  Glassmeyer worries that the generations are not connecting as well as they might.  Please, she says, let’s share stories across the generations — through informal mentoring, the “boomer librarians” have a lot to pass on, and the Gen X librarians need to step up (and be supported in doing so) as well.

Ron Wheeler, professor and director of the Law Library at the University of San Francisco School of Law, is the last speaker of the day.  Wheeler feels as if he has a foot in each of two different generations.  In thinking about the future, he thought about the skills and attributes he is looking for in his new recruits.  People skills come first: interacting with patrons, not just sitting at the reference desk.  The second is teaching innovation: more inventive, clever, interesting, and passionate about things like legal research.  The third is teamwork: not just those who tolerate teamwork, but those who thrive on teamwork and collaboration.  A fourth: people not afraid to lead.  We need to try new services and projects, and we need people who can run with them — even if they fail.  Not just managers; do-ers, too.  And networkers: those who can work with those outside their immediate network.  He also wants to see those who are focused on sustaining a profession, not mailing it in.  Personality types: able to embrace change, those with flexibility and adaptability, people bored with the status quo.  He is eager to see those who have a passion for doing things that are non-traditional library work.  We should teach in new programs as they develop, and help to solve problems for law schools and universities as they seek to innovate at the institutional level.  Technology skills — the skills that June Liebert has — in a broad range of types.  And — second to last — diversity: racial, gender, and lots of other kinds of diversity.  Finally: he wants people who will show up every day and work really, really hard.

Tim O'Reilly on the History and Future of Government 2.0

Tim O’Reilly is telling the Aspen Ideas Festival crowd about the history of Government 2.0. He starts it with Carl Malamud putting SEC data online; next, he cites the Brits and TheyWorkForYou.com; gives the Sunlight Foundation its due; and cites then-candidate Obama’s claim that we would connect people and ideas to transform government as the final breakthrough (not to mention all that great web 2.0 work in the campaign).

The hard question, as O’Reilly rightly notes, is whether these tools can work as well during times of governance as they do during campaigns (or crises). To get it done, he says, we should build from a set of principles, which sound great to me:

1) We need to embrace open standards, because they lead to generative systems (with a very nice shout-out to JZ and his book, The Future of the Internet);

2) Build a simple system and then let it evolve. Twitter has 11,000 applications now built upon it — in no time (another nice shout-out to JZ here; his “hourglass architecture” slide is shown). One such simple intervention: by default, make government information open and accessible to the public.

3) Design systems for cooperation. Presume that people can work well together, even if they don’t know one another. Think of the difference between Linux development and traditional software development within a single firm. Think also of the DNS. O’Reilly also credits Cong. John Culberson, R-TX, as a leading user of social technologies in DC and someone who gets the need to set up systems that allow for cooperation (and who cites a Jefferson letter of the early 19th c. for inspiration). Make rules like: the only requirement for participation is that you participate.

4) Learn from your users, especially the ones who do what you don’t expect. In the mash-ups world: 45% are built on Google Maps, with only 4% on Microsoft Virtual Earth and 3% on Yahoo! O’Reilly says the others were too slow to open up APIs; Google just went for it. One of the first such hacks was HousingMaps, which a hacker named Paul built using Maps (in contravention of the Terms of Service, O’Reilly says). Did Google sue Paul? Nope. They hired him.

5) Lower barriers to experimentation. Failure has to be an option. He quotes Edison: “I didn’t fail ten thousand times. I successfully eliminated, ten thousand times, materials and combinations which wouldn’t work.” O’Reilly says that Amazon’s cloud services make this kind of rapid experimentation, iterative development, and parsing through huge data sets possible.

6) Build a culture of measurement. Systems should respond automatically to user stimuli. Real-time measurement is crucial. Throughout his talk, O’Reilly credited Vivek Kundra, the new federal CIO, as a wonderful leader in making a great deal happen already within the Obama government. As Google and Wal-mart do, the government needs to be close to a living organism, responding in real-time to extensive stimuli. We need to instrument our world to be able to respond to useful data.

7) Throw the door open to partners. Apple’s iPhone has given rise to more than 50,000 applications in less than a year. The App Store made a fine tool into a phenomenon. More than 1 billion applications had been downloaded as of April, 2009. Everyone else in the smartphone business is eating Apple’s dust, at least in the apps business. And yet, O’Reilly says, the government is still making no-bid contracts. Government has to get out of its own way. Throw it open, and let everyone compete. Apps for Democracy is a step in the right direction. (He gets a big laugh when he cites a Congressman who asked: why do we need NOAA when we have Weather.com? A pretty impressive case of some people in Washington still not getting the point…)

Fundamentally, government is a vehicle for collective action. O’Reilly is right, here, too. That also happens to be what distributed digital networked technologies are good at doing — supporting collective action.

All these principles together can lead us to the Digital Commonwealth. (Hear, hear!)

Bottom line: I think O’Reilly nailed it. These are great principles and a fine time to be discussing them.  It turns out that Beth Simone Noveck, deputy CTO in the White House, and others in the Obama Administration are actually DOING all this right now. My only amendment to the O’Reilly talk would have been a cite to Beth’s brand-new book, Wiki Government (Brookings, 2009), which includes a terrific commentary on these and related themes.

Look to EthanZ’s blog for better live-blogging than mine here.

Navigate 2008 Day Two Tidbits

Day 2 tidbits from Navigate ’08 by the IAPP and team: JZ told us that Mrs. Beasley, his fabulous and famous dog, has two tracking devices: an RFID chip and a GPS device. Why? They serve distinct purposes. The RFID chip is for if she gets lost and shows up at a vet’s office, in which case they can scan her and find the wayward owner (here, JZ). The GPS device gives JZ Mrs. B’s whereabouts at any time. It’s turned out to be useful twice.

On the substance of the sessions, I was surprised by what amounts to another tidbit: this high-level crew of participants — including leaders from private sector, public sector, academia, and from around the world — seemed to think that greater alignment of privacy rules is desirable and possibly feasible. The consensus was not in favor of perfect “harmonization,” but rather forms of alignment that respect cultural differences, help consumers, and enable commerce to thrive. Easier said than done, to be sure, but I was surprised at the degree of consensus. The two keywords that seemed to resonate most: “alignment” and “interoperability.”

(There were specific caveats: 1) not enough public awareness and not enough pain by businesses to get this done; 2) need to scrap the bilateral approaches in a world of cloud computing; 3) enforcement challenges will abound.)

Tidbits from Navigate 2008 Day One

It’s Day One at Navigate 2008. Trevor Hughes and his crack team at the IAPP have established a space for thinking not about what’s urgent, but about what’s important when it comes to privacy. The key for the event is to think big about privacy. The goal is to contribute to the global dialogue. (For me, kids, technology, and the future are on the brain because of Born Digital coming out, so the frame I bring to it is the future systems that we are building to protect our children and grandchildren.)


Meta tidbit: Going meta, briefly, on the emerging art of conference blogging. I’ve been wondering: What’s the optimal amount of blogging of a conference, in terms of frequency, length, and topic? JZ says the goal should not be coverage, but to expose worthy tidbits. That’s to say, as many as a few posts a day if there are worthy things to say, or no posts if the conference totally stinks. (JZ is blogging a key aspect of Hal Abelson’s provocation so we can see what he means by a “tidbit” when that’s up.)

Process/experimentation tidbit: there are three breakout groups, each using MindManager in the breakout rooms. From a mission-control room, a few of us have a view across the three MindMaps through a networking tool from MindJet. It works great for viewing all the conversations as they emerge in real time. It also lets one intervene from the center — but that is not necessarily welcome, it seems: the MindManager scribes have enough to do to keep track of the conversation, and chatting with the curators doesn’t seem to help their focus much. It’s cool to be able to intervene and to ask clarifying questions, but not necessarily productive for the whole, it seems. It’s great to be trying this out in real time, though.

Substantive tidbit: from the first session, part of MIT prof Hal Abelson’s provocation. In the end, the way to go is to build accountable information systems, says Hal. He cited a letter he (and many of us) got from Bank of America which said that data about some customers had escaped from a third-party location and that B of A is tracking our accounts to see if anything is going wrong as a result. Hal says that this may be lawful, but it’s not accountable. He wants to know more: who had the data, why they had it, what it was, what happened in the breach, what risks he is running as a result, and so forth. He also says not to worry so much about the collection or mining of the data, but rather about decisions made about you based on these data. (I have a sense already that this is not a consensus view among other attendees — to be tested out!)

A final Bostonian’s tidbit, off to the side: In the command-central room for the IAPP, there’s a side conversation about the MBTA’s CharlieTickets v. CharlieCards. These are the cards you buy to ride the Boston-area subway system. If you use a CharlieTicket rather than a CharlieCard, you pay more per ride, but there’s little chance your movements could be tracked, so one way to see it is that there’s an explicit per-ride premium for your privacy. Richard Stallman has an alternate approach, apparently: swapping zero-value CharlieCards to frustrate any user tracking while not having to pay the privacy premium.

Navigating Privacy

Jonathan Zittrain and I are headed up to seacoast New Hampshire to be the “curators” of the IAPP’s new executive forum, Navigate, for the first few days of the week. It’s a beautifully organized program and a terrific line-up. It promises to be provocative and a lot of fun.

Privacy turned out to be a major part of our research into how young people use new technologies differently from their parents and grandparents. In our book, Born Digital (coming out in the next few weeks; and now the book’s website from the publisher is up), we started with a single chapter on Privacy and ended up with three: Identity, Dossiers, and Privacy. (Berkman summer intern Kanu Tewari made a video rendition of our Dossiers chapter; and the project’s wiki has a section on Privacy.) I look forward to testing those ideas with a bunch of privacy pros who will no doubt help to refine them.

As a special bonus: They’ve partnered with the MindJet people — makers of MindManager, which I love — to document the event and to extract key themes in an organized digital format. I’m looking forward to learning some MindManager tricks.

Daniel Solove's The Future of Reputation

The first book I’ve read in full on my Amazon Kindle is Daniel Solove’s “The Future of Reputation: Gossip, Rumor, and Privacy on the Internet.” It’s a book I’ve been meaning to read since it came out; it did not disappoint. I was glad to have the joint experience of reading a first full book on the Kindle and of enjoying Solove’s fine work in the process.

Before I picked up “The Future of Reputation,” Solove had already played an important part in my own thinking about online privacy. The term that he coined in a previous book, “digital dossiers,” is a key building-block for the chapter on the same topic in Born Digital, which Urs Gasser and I have just finished (coming out in August). Solove advanced the ball in a helpful way, building on and refining previous scholarship of his own and that of Jonathan Zittrain, Paul Schwartz, Simson Garfinkel, and others.

This book has the great virtue of being accessible to a reader who is not a privacy expert as well as being informative to those who know a good bit about it to begin with. Solove repeats a lot of lines that one has heard many times before (for instance, at the outset of Chapter 5, Scott McNealy’s line: “You already have zero privacy. Get over it.”), but also introduces some new ideas to the mix. It’s good on the theory, but it also offers practical policy guidance. He also poses good questions that could help anyone who wants to think more seriously about how to manage their reputation in a digital age.

One other thing I appreciated in particular: Solove is clearly a voracious reader and does an excellent job of situating his own thoughts within the works and thought of others (variously Henry James and Beecher; Burr and Hamilton; Warren and Brandeis; Brin, Johnson & Post, and Gates) and in historical context, which I much enjoyed.

As for the Kindle itself: it’s fine. I don’t love it, but I also have found myself bringing it on planes with me lately, loaded up with a bunch of books that I’ve been meaning to read. So far, the battery life has been poor (it might be my poor re-charging practices), so the technology of the Kindle is sometimes less good than the technology of the classic book (which cannot run out of batteries in the middle of a long-haul flight, as my Kindle always seems to). The eInk is soft on the eyes; no problem there. The next- and previous-page functionality is fine, and the bookmark works pretty well. And FWIW, I’ve now got Mark Bauerlein’s “The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future (Or, Don’t Trust Anyone Under 30)” on there, which is up next for a review — as its premise cuts against the grain of Born Digital.  One advantage of the Kindle is cost, once you have the device: the Solove and Bauerlein books cost a mere $9.99 each.

OpenLibrary.org

There’s enormous promise in the Open Library project, which we’re hearing about today at Berkman’s lunch event from Aaron Swartz. The idea is wonderfully simple: to create a single web page per book. That web page can aggregate lots of data and metadata about each book. In turn, the database can be structured to indicate very interesting relationships between books, ideas, and people. The public presentation of the information is via a structured wiki.
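The “one web page per book” idea can be sketched as a simple record type that aggregates metadata and relationships, with each record mapping to a single stable page. This is only an illustrative sketch: the field names, the relationship labels, the ISBN value, and the `example.org` URL scheme are all hypothetical, not Open Library’s actual schema.

```python
# A minimal sketch of the "one web page per book" idea: one record per
# book, aggregating metadata and pointing to related books, ideas, and
# people.  All field names and values here are hypothetical.

book = {
    "title": "The Future of Reputation",
    "authors": ["Daniel Solove"],
    "identifiers": {"isbn_13": "9780000000000"},  # placeholder ISBN
    "subjects": ["privacy", "reputation", "internet"],
    # Relationships the database could expose between books and people:
    "related": {
        "same_author": ["The Digital Person"],
        "same_subjects": ["The Future of the Internet"],
    },
}

def page_url(record):
    """Derive the single, stable web page for a book from its identifier."""
    isbn = record["identifiers"]["isbn_13"]
    return f"https://example.org/books/{isbn}"

print(page_url(book))  # one canonical page per book
```

The design point is that the identifier, not the title, anchors the page, so contributions from publishers, libraries, scholars, and the crowd can all accrete to the same place.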

I’m most interested in hearing what Open Library thinks it needs in the way of help. They have a cool demo here. It seems to me that one way to succeed in this project is to combine what start-ups call “business development” with what scholars do for a living and with what non-profits think of as crowd-sourcing or encouraging user-generated content or whatever. There’s a lot that could be done if the publishers and libraries contributed the core data (it should be in everyone’s interest, long-term anyway); scholars need to opt in and do their part in an open way; Open Library needs to get the data structured and rendered right (I’m curious as to whether OPML or other syndicated data structures are in play, or could be in play, here); and human beings need to contribute, contribute, contribute, as they have to Wikipedia and other web 2.0 megasites.

A note from a participant: “libraries resist user-generated cataloguing.” This seems to me a cultural issue that is worth exploring. We do need to balance the authority of librarians with what the crowds have to offer. But I’m pretty sure it’s not an either-or choice, as David Weinberger makes clear through his work.

One thing that seems right is their plan for supporting the site over time. The combination of philanthropy (at least as start-up funds, if not for special projects over time) plus revenue generated through affiliate links makes a lot of sense as a sustainable business plan.

One could also see linkages between Open Library and 1) our H2O Playlists initiative (hat-tip to JZ) to allow people to share their reading lists as well as 2) what Gene Koo and John Mayer at CALI are doing with the eLangdell project.

It’s not a surprise that the truly wonderful David Weinberger — I can see him blogging this in front of me — brought Aaron here today to talk about this.

Where I’m left, at the end of lunch, is with a sense of wonder about what we (broadly, collectively) can accomplish with these technologies, a bit of leadership, a bit of capital, good communications strategies, and some good luck in the public interest over time. It’s awe-inspiring.

How Long Will Scrabulous Last in Facebook?

I am curious to see how long Facebook leaves this app up after this WSJ article. Scrabulous, a Facebook app made by third-party developers, is an obvious knock-off of Scrabble. One might reasonably raise copyright and trademark issues related to it (perhaps the Scrabulous developers could withstand these complaints; query as to Facebook’s willingness to put itself in harm’s way, though, as potentially secondarily liable). Coming our way in the near future is a new form of Web 2.0-fired dispute: there are very interesting issues brewing related to Facebook’s role as a platform for other applications and its policing function. Interoperability is a great thing, and Facebook has done well to open up its API. But when a controversy strikes over an app that is framed in Facebook, on which developers and investors have spent time and capital, and into which people have mixed their personal information, who decides whether the app stays or goes? The judge and jury are likely to be Facebook employees, at least in the first instance. Jonathan Zittrain has been teaching about this issue of Private Sheriffs for a long time, with more on this topic coming in his forthcoming book, The Future of the Internet — and How to Stop It.

Berkman Books

The faculty and fellows of the Berkman Center will publish four books this year. Two of them are out already: David Weinberger’s Everything is Miscellaneous and John Clippinger’s A Crowd of One. In celebration of this high-water mark for the team, we’ve put together a new page on the Berkman web site called Berkman Books, which features most of the relevant books written by Berkman faculty and fellows since our founding nearly 10 years ago. We’ll keep it updated as new ones come online, such as the ONI‘s Access Denied (on Internet filtering) and Prof. Jonathan Zittrain’s The Future of the Internet — and How to Stop It, both due out later this year.