Making a Market Emerge out of Digital Copyright Uncertainty

The digital copyright issue is one of the sidebars related to the Google/YouTube transaction that has merited a fair amount of digital ink.

(For a few examples: don’t miss Fred von Lohmann as interviewed by John Battelle. Declan McCullagh and Anne Broache have an extensive piece highlighting the continuing uncertainty in the digital copyright space and quoting experts like Jessica Litman. Steve Ballmer brings it up in his BusinessWeek interview on the deal, asking, “And what about the rights holders?” And the enormously clever Daniel Hausermann has an amusing take on his new blog.)

My view (in large measure reflected in the WSJ here, in a discussion with Prof. Stan Liebowitz) is that Google is taking on some, but not all that much, copyright risk in its acquisition of YouTube. Google has already proven its mettle in terms of offering services that bring with them a reasonably high appetite for copyright risk: witness the lawsuits filed by the likes of the publishing industry at large; the pornographer Perfect 10; and Agence France Presse. There’s no doubt that Google will have to respond to challenges on both secondary copyright liability and direct copyright liability as a result of this acquisition. If they are diligent and follow the advice of their (truly) brilliant legal team, I think Google should be able to withstand these challenges as a matter of law.

The issue that pops back out the other side of this flurry of interest is the broader question of the continued uncertainty with respect to digital copyright. Despite what I happen to consider a reasonably good case in Google’s favor on these particular facts (so far as I know them), there is an extraordinary amount of uncertainty on digital copyright issues as a general matter. Mark Cuban’s couple of posts on this topic are particularly worth reading; there are dozens of others.

Many business models in the Web 2.0 industry in particular hinge on the outcome of this uncertainty. A VC has long written about “the rights issues” at the core of many businesses that are built, or will be built, on what may be the sand — or what may turn out to be a sound foundation — of “micro-chunked” content. Lawrence Lessig has written the most definitive work on this topic, especially in the form of his book, Free Culture. The RSS-and-copyright debate is one additional angle on this topic. Creative Commons licenses can help to clarify the rights associated with micro-chunked works embedded in, or syndicated via, RSS feeds.
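One concrete way the RSS-and-copyright angle can be addressed today is the creativeCommons RSS module, which lets a feed declare a license in machine-readable form at the channel level and override it per item. A minimal sketch (the feed URL, titles, and license choices below are illustrative, not from any real feed):

```xml
<rss version="2.0"
     xmlns:creativeCommons="http://backend.userland.com/creativeCommonsRssModule">
  <channel>
    <title>Example Vlog</title>
    <link>http://example.org/vlog</link>
    <description>Micro-chunked video posts</description>
    <!-- Default license for everything in the feed -->
    <creativeCommons:license>http://creativecommons.org/licenses/by-sa/2.5/</creativeCommons:license>
    <item>
      <title>Episode 1</title>
      <link>http://example.org/vlog/1</link>
      <!-- A single item may carry its own, different license -->
      <creativeCommons:license>http://creativecommons.org/licenses/by-nc/2.5/</creativeCommons:license>
    </item>
  </channel>
</rss>
```

An aggregator that understands the module can then display, or act on, the terms attached to each micro-chunk without a human reading the license text.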

Part of the answer could come from the courts and the legislatures of the world. But I’m not holding my breath. A large number of lawsuits in the music and movie contexts has left us with a clearer understanding of the rules around file-sharing, but not with enough clarity that the next generation of issues (including those to which YouTube and other web 2.0 applications give rise) is well sorted.

Another part of the answer to this digital copyright issue might be provided by the market. One might imagine a process by which citizens who create user-generated content (think of a single YouTube video file or a syndicated vlog series, a podcast audio file or a series of podcasts, a single online essay or a syndicated blog, a photo that perfectly captures a breaking news story or a series of evocative images, and so forth) might consistently adopt a default license (one of the CC licenses, or an “interoperable” license that enables another form of commercial distribution; I am persuaded that as much interoperability of licenses as possible is essential here) for all content that they create, with the ability also to adopt a separate license for an individual work that they may create in the future.

In addition to choosing this license (or these licenses) for their work, these users would register this work or these works, with licenses attached, in a central repository. Those who wished to reproduce these works would be on notice to check this repository, ideally through a very simple interface (possibly “machine-readable” as well as “human-readable” and “lawyer-readable,” to use the CC language), to determine the terms on which the creator is willing to enable the work to be reproduced (though not affecting in any way the fair use, implied license, or other grounds via which the works might otherwise be reproduced).
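The repository idea can be sketched in a few lines of code. Everything below is hypothetical (no such registry or API exists): a creator registers a default license, optionally overrides it for specific works, and a would-be reproducer looks up the terms before reusing a work.

```python
class LicenseRegistry:
    """Hypothetical central repository of creator-chosen license terms."""

    def __init__(self):
        self._defaults = {}   # creator -> default license
        self._overrides = {}  # (creator, work_id) -> work-specific license

    def register_creator(self, creator, default_license):
        # "Unless I say otherwise, it's this license."
        self._defaults[creator] = default_license

    def register_work(self, creator, work_id, license=None):
        # Omit `license` to let the creator's default govern this work.
        if license is not None:
            self._overrides[(creator, work_id)] = license

    def terms_for(self, creator, work_id):
        # What a would-be reproducer checks before reusing the work.
        return self._overrides.get((creator, work_id),
                                   self._defaults.get(creator))


registry = LicenseRegistry()
registry.register_creator("alice", "CC BY-SA")
registry.register_work("alice", "video-001")  # default applies
registry.register_work("alice", "photo-007", "All rights reserved")

print(registry.terms_for("alice", "video-001"))  # CC BY-SA
print(registry.terms_for("alice", "photo-007"))  # All rights reserved
```

The per-work override falling back to a per-creator default mirrors the “default license, plus carve-outs” behavior described above; none of this, of course, would displace fair use or other grounds for reproduction.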

Some benefits of such a system:

– It would not affect the existing rights of copyright holders (or the public, for that matter, on the other side of the copyright bargain), but rather ride on top of that system (which might have the ancillary benefit of eventually permitting a global market to emerge, if licenses can be transposed effectively);

– It would allow those who wish to clarify the terms on which they are willing to have their works reproduced to do so in a default manner (i.e., “unless I say otherwise, it’s BY-SA”) but also to carve out some specific works for separate treatment (i.e., “… but for this picture, I am retaining all rights”);

– It might provide a mechanism, supplemental to CC licenses, for handshakes to take place online without lawyers involved;

– It might be coupled with a marketplace for automated licensing — and possibly clearance services — from creators to those who wish to reproduce the works;

– It could be adopted on top of (and in a complementary manner with respect to) other systems: not just the copyright system at large, but also worthy services/aggregators of web 2.0 content, ranging from YouTube to software providers like SixApart, FeedBurner, Federated Media, Brad Feld’s posse of VCs, and so forth; and,

– It would represent a community-oriented creation of a market, which ultimately could support the development of a global market for both sharing and selling of user-generated content.

This system would not have much bearing on the Google/YouTube situation, but it might serve a key role in the development of web 2.0, or of user-generated content in general, and help to avoid a copyright trainwreck.

Sony's new web e-book reader to support (some) RSS feeds

Sony is unveiling its new online bookstore and e-book reader on October 1. One interesting twist is that they will allow you also to subscribe to certain RSS feeds on the device. But only a small handful of feeds, reports Stuff. The article says:

“The device and service will also let users download from the Really Simple Syndication or RSS Feeds of popular blogs, including Salon, Slate, Huffington Post, engadget and Gizmodo to read on the device. But it will only download from approved feeds, restricting users from freely downloading from any RSS feed.

“‘We’ll be expanding and improving it beyond that,’ he added.

“Newspapers and other periodicals will not be offered at first, although Hawkins did not rule out such features down the line.

“‘We’re taking a serious look at it,’ he said. ‘But we’re focusing on books and personal content at launch.'”

No doubt the copyright regime, and the lack of clarity around licensing of RSS feeds, has something to do with this approach by Sony.

Knowledge@Wharton on social networking sites

I’m not sure it’s all right, but there’s a provocative piece about Facebook & co. at the excellent Knowledge@Wharton site, with lots of quotes from Kevin Werbach, who usually is right. The implication is that these sites will become the victims of their own success, expand too far, and the digital natives will leave them for the Next Hot Thing.

The short study says: “Underneath Facebook’s expansion plans is a conundrum facing any social networking site: How do these companies expand into new markets without losing what originally made the site popular and alienating their existing customers? For instance, if a site starts out as a trendy online hangout for young people and then begins courting senior citizens, it is unlikely its initial customer base will stick around, say experts at Wharton.

“Couple that dilemma with the fact that social sites’ business models are already fragile, and a loss of focus could be fatal.”

Reuters, NewAssignment.Net team up

Chris Ahearn at Reuters has made another sage investment in a non-profit, this time in Jay Rosen’s NewAssignment.Net. Chris is the visionary president of Reuters Media. He is the key driver, along with his colleague Dean Wright, behind Reuters’ partnership with Global Voices.

Chris writes, at Huffington Post:

“While the Internet is rapidly transforming the world of traditional media, it also presents amazing new possibilities in terms of strengthening the investigative arm of journalism. The Internet is the perfect vehicle for galvanizing the public to become more involved in reporting.

“Earlier this year, Reuters made a contribution to the Berkman Center for Internet and Society at the Harvard Law School in support of Global Voices Online, the largest and most successful international bridge for bloggers. Global Voices Online is a select guide to conversations, information and ideas appearing on various forms of participatory citizen media such as blogs, podcasts, photo-sharing sites and videoblogs.

“While encouraging good journalistic ideas is a worthy goal in itself, Reuters believes that supporting new and varied networks of creators with different perspectives is good for both journalism and business.

“Ultimately, journalism is about the story and the pursuit of truth; it is not about the news industry, a j-school or a traditional newsroom structure. By building bridges and finding new ways to augment and accelerate the creation of quality journalism, we believe that ultimately the public will benefit and perhaps change their minds about the noble profession of journalism.”

Richard Sambrook of the BBC comments approvingly on the move here.

Congratulations, Global Voices Community

As Rebecca MacKinnon reports, Global Voices today won the Knight-Batten Award for innovation in journalism. It’s quite an accomplishment, for which literally hundreds of people can take credit. GV has been a runaway success since RMacK and Ethan Zuckerman kicked it off not so very long ago. I’m so happy for everyone whose hard work has made this recognition possible. Thanks are also owed to those loyal, trusting souls who have supported GV and the Berkman Center through funding and high-level guidance for this project, including Chris Ahearn and Dean Wright at Reuters, John Bracken at the MacArthur Foundation, Hivos, and others. The best still lies ahead for the GV community, and the GV experiment, I have no doubt.  (Here’s more, from NZ).

Gardner Museum's Podcast Series, The Concert

The Isabella Stewart Gardner Museum, one of Boston’s cultural gems, has released the first-of-its-kind museum concert series podcast, called The Concert. The good people there — including Catherine and Charlotte, who did a TV spot this morning — have decided to use a Creative Commons Share Music license. They’ve had the pro bono assistance of the Berkman clinical program in putting together this release. We’re proud to be associated with their innovative work to bring their music series to many more people than those who can attend in person at the appointed hour (though they highly encourage people to come to the Gardner to hear the concerts all the same!).

Bostonist and Cory at Boing Boing have more.

The Citizen Editor

In the past several weeks, I’ve been playing with a new format that my friends at TopTenSources developed. We’ve seen the Citizen Journalist; this idea is the Citizen Editor. Several of us have been using a new bookmarklet-style tool that makes it very easy to tag a story when you’re reading it, provide a bit of analysis, and have it posted to a dedicated website on the topic. It’s in many ways what lots of bloggers do anyway. I remember Dave Winer showing me an aspect of Manila that renders a river of news and then lets you check off stories that you want to appear somewhere — dead simple and fun; this idea is in the same vein, only using different tools and with a different output.

The one I’ve been playing with, as part of a group of “citizen editors,” is tracking the Massachusetts Governor’s Race. It’s a ton of fun. (There are a variety of perspectives among the group as to whom we support, as with most group blogs, I suppose.) As I read the utterly amazing and surging group of bloggers/MSM commentators — for instance, Blue Mass Group, GOPNews, Boston.com, Adam Reilly at the Phoenix, Kimberly Atkins at the Herald, and several dozen other blogs and news sources each day — I tag some of the best, most relevant sources and pop them into the aggregator for others to see. I think it’s pretty novel. The idea is that someone who is interested in the race, but not spending so much time in the details and reading every blog post, can come to a one-stop shop and scan the Editor’s Picks. Over time, the idea is to use a combination of technical tools to pick the most important stories from the most important sources with an editor or group of editors able to over-ride, make decisions about placement, and provide some context and editorial color. I think it’s pretty neat.

(My disclosures: I am a founder, am chairman of the Board, and hold equity in Top Ten Media, Inc., in my extra-Harvard capacity. And I am supporting Chris Gabrieli for governor and Deborah Goldberg for lieutenant governor of Massachusetts.)

Microsoft's Open Specification Promise

Microsoft has just unveiled a new commitment not to assert certain rights against people who develop code based on specifications that Microsoft has developed. It’s called the Open Specification Promise. Warning: the announcement itself, at the top of the page, is written in legalese, though probably pretty readable legalese. The FAQs make things a lot clearer for non-lawyer readers.

The upshot of this announcement is that it will hopefully turn out to be a Very Good Thing. Bravo to the lawyers and the policy people who no doubt worked very hard on it; the promise obviously reflects a huge amount of careful and open-minded thinking. The notion is that Microsoft agrees unilaterally not to come after people based on IP rights that the company holds with respect to a series of widely-used web services, such as SOAP and various of its progeny, WSDL, and so forth (all listed mid-way down the announcement page). From a geeky-lawyerly perspective, one of the things I like a lot is that a requirement of availing oneself of the promise is that you NOT voluntarily participate in a patent infringement suit related to the same specification — commitments of this sort could help to create an anti-patent-thicket. (Maybe, down the road, this aspect of the promise might not prove to be as great as I think it could be, but for now, from here, it looks very appealing, in a detente kind of way.)

Why could this promise help? Any promise of forbearance by a huge player — where they say they won’t stand in the way of your innovating on top of the work of others — is certainly positive. More than that, such a promise made “irrevocably” establishes a commitment on the part of the company for the long haul. Setting aside the legal enforceability of such a promise, the idea has enormous rhetorical force and would make it very hard for the company to backtrack and to go in another direction. Of course, the idea no doubt has good business judgment behind it in an era of dramatic growth in terms of the open development of web services, including those related to security and to web 2.0 apps.

Why might it not be so great? Well, I think it is a great thing, and not just because we at the Berkman Center have been looking into interoperability, with support from Microsoft and others, and learning more about how companies are taking novel steps in this sort of direction. Its limitations might take a few forms, I suppose. The promise itself has limits — it applies to some specifications, and it extends only to some possible IPR-related claims, of course, but that seems natural, especially with such a first step. Other possible limitations: 1) Will developers pay attention to it, and in fact believe it? 2) Will this promise itself be interoperable with other such promises? I am reminded of Prof. Lessig’s speech at Wikimania last month, when he talked about interoperable licenses. Hopefully, others will either follow this lead or help developers to understand how this meshes with other similar promises of forbearance in the marketplace. 3) I don’t know well enough whether these are the right specifications to be included in such a promise. Are there other specs that developers would like to see opened up in this fashion?