A year’s research on the digital identity metasystem, led by Berkman fellow John Clippinger, culminates in the Identity Mash-up conference June 19 – 21. The participant and speaker lists are already taking excellent shape. The idea is to explore in depth the development of a federated system of digital identities and to explore ways in which this “meta-system” can be privacy-enhancing, giving users more control over their personal data and how it’s used, and fostering the development of a more accountable Internet (the last bit is my own editorializing; many won’t agree, I suppose). The event is sponsored by Microsoft, Best Buy, and others. Registration is open.
Happy coffee memories
John Bracken (Media SITREP), off the grid for 2 weeks, has used his absence to encourage a guest blogger, Yaucono, who writes of happy coffee memories: “Yaucono kept me awake, aware, soothed, and rooted in my culture and values in the midst of the most unsettling experience of my young life – not just the transition to college, but the transition to what was then an extremely alien culture. Hopefully, a Yaucono blog will remind me to approach topics with serenity instead of shock (good luck!).”
Bloggers as Celebrities: Too Cool for School?
The organizers of a conference I’m just leaving mentioned to me a curious fact: they invited 6 prominent bloggers — not to be named here (and I am certainly not including myself in this category) — to attend the event, called The Leaders Project. Not a single one responded, not even to RSVP “no.”
I was astonished. The group had fewer than 40 attendees, each of whom apparently had responded to the invitation: famous columnists, editors of major publications from around the world, generals and admirals, news anchors, presidents of major news networks, executive producers of shows everyone watches, members of Congress, leading activists from around the world, and even a few lowly academics. The event was held at an amazing venue, hosted by a former cabinet secretary and US Senator, and offered an unexpectedly rich, varied conversation. The topic was the changing global media landscape, one that ordinarily would appeal, I’d think, to the serious blogger.
Why would bloggers be the one category not even to *reply* to the invitation? It got me to thinking that perhaps these bloggers are so sought after for conferences of this sort at the moment that they are overwhelmed with travel and the gab-fest circuit. Possible. But unfortunate if that’s so. This moment strikes me as just the right time to be talking up the citizen-generated media movement, helping opinion leaders to understand it, and working through the issues and problems it raises or unearths. The blogging world has the attention of decision-makers everywhere. Now’s not the time to be too big for one’s britches — it’s the time to seize the moment. I suggested that maybe there are others to whom such an invitation should be extended next time. Maybe someone will set up a little speakers’ bureau for bloggers.
* * *
I’m at the Charlotte-Douglas airport, en route to Oxford Internet Institute for a research meeting with others from the OpenNet Initiative. Charlotte-Douglas has won my heart (as far as airports can win a heart) with free wifi in the main concourse. Very nice.
How Digital Natives Experience News
The way those Born Digital – the Digital Natives — experience news is famously different from that of the generations they succeed. DNs don’t read the New York Times or their local paper cover-to-cover over coffee in the morning, nor return home to hear the news read by Walter Cronkite or Dan Rather (then discuss it around the dinner table or around the water cooler or at the pub or over bridge or at the Elks Lodge).
What is the process of news and information gathering for the DN? Here’s a hypothesis. It’s a three-step process: Grazing, Deep(er) Dive(s), and the Feedback Loop.
It works, in the paradigmatic sense, like this:
1) Grazing: The citizen gets introduced to new facts through a process of grazing. The source of the facts might be Jon Stewart; it might be an RSS reader with aggregated news sources; it might be a My Yahoo! page or Google news or a PubSub alert; it might be a filtered set of news offerings served up to a Blackberry; it might be passively listening to radio in the car or a news channel at the gym from the seat of a recumbent bike; it might be from peers or blogs (of the Scripting News, Instapundit variety — at once prominent and generous with links) or Drudge; or any number of other introducers of facts, including offline. The net effect is that the citizen has the bare fact, or the headline, and perhaps a bit more (on the order of a paragraph), but no real context. The fact may not be verified and may prove to be false or misleading. In terms of the competition to provide this service, speed and relevance are the sole factors.
2) Deep Dive: The citizen decides that she wants to go beyond the headline, to learn more about a topic beyond the basic fact she’s been exposed to. This is where the citizen goes to dig for context for the fact that’s been introduced. The citizen might choose the “channel” for this information because of celebrity (she likes a certain news anchor’s hair); politics (she likes a certain slant on the news); brand (a given source has a brand that appeals to her); or other reasons. The deep dive helps to make sense of the news, to put it into a frame, to offer an analysis of it, to introduce relevant other voices. This is where trust, branding, and credibility come in. This is where news organizations, especially powerful and wealthy institutions — able to afford bureaus and the like — can add the most value. Some blogs fill this role, too — Global Voices might be an example. (Query: is there any reason why you wouldn’t want 1000, or 1,000,000, or n, “channels” at this level, so long as we’re able to discern and choose? See the Daily Me debate and the like for counter-arguments.) The key factor here is not speed, though timeliness is important; the key factors are accuracy, trustworthiness, insight/analysis, and relationship.
3) Feedback Loop: This stage is not for everyone, and is the hardest for traditionalists to grapple with, but an increasing number of citizens want to take another step and to engage more meaningfully with the fact and the context. It might mean blogging something yourself on an obscure blog (like this one!), creating your own podcast or vlog, or commenting on someone else’s blog or a wiki or bulletin board. Or it might mean sending an e-mail to a listserv or to a network news program. The idea is to talk back — to act as an empowered citizen, able to have an impact on the way the story is told. This feedback loop may or may not be taken seriously by others in the citizen-generated media movement, by mainstream media, by decision-makers. It’s in theory good for participatory and semiotic democracy. The role of media in the feedback loop might be to provide an easy means to do it, or to serve as an aggregator by topic of multiple viewpoints from the broader community (loop back to the “deep-dive” step). The feedback loop might also involve taking local news and making it of broader relevance, to a non-local audience. The key factor is the ability to participate with the hope of being heard, able to affect the outcome of the debate in some fashion, even if only for a few people.
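The grazing step above can be sketched in code. This is a minimal, illustrative Python sketch of what an RSS-based grazer does — collect bare headlines and links, with no context — using only the standard library. The sample feed XML and its contents are made-up assumptions standing in for a real fetch of many feeds:

```python
import xml.etree.ElementTree as ET

# Hypothetical RSS 2.0 snippet standing in for a real downloaded feed;
# an actual grazer would fetch and merge many such feeds.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <item><title>Headline one</title><link>http://example.com/1</link></item>
    <item><title>Headline two</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def graze(feed_xml):
    """Return bare (headline, link) pairs -- facts without context."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in graze(SAMPLE_FEED):
    print(title, "->", link)
```

The point of the sketch is that grazing optimizes for speed and relevance only: nothing here verifies the fact or supplies a frame; that work belongs to the deep dive.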
Consider the feedback loop open.
Circumventing Internet filtering
At the Berkman Center, we don’t work on making tools that allow the circumvention of the Internet filtering that we study (along with collaborators at the University of Cambridge, Oxford Internet Institute, and the University of Toronto). But our partners at the University of Toronto do. Much anticipated: the announcement of Psiphon. Here’s the new FAQ about the forthcoming toolset.
The Leaders Project at White Oak
Emotional Legal Design
Urs Gasser, prepping to head out to a Gruter Institute event at Squaw Valley (tough life), wants to know if you agree:
“I suggest that in-depth and cross-disciplinary research in the field of law & emotion will soon be complemented by a discussion about what we might call ’emotional legal design’, i.e., a discourse about the design principles aimed at guiding the future development of a legal system that takes the findings of law & emotion research serious.”
(Gruter, and Urs’ center on information law at St. Gallen, are key partners of ours at Berkman.)
Latest badware report
Our project that names names of code that users should look out for, StopBadware, has released its latest report on an application that falls outside the scope of our guidelines. We think computer users should beware of the Jessica Simpson Screensaver (in case the name didn’t give it away to you already).
The idea of this project, in my view — yes, I am biased! — is a good one: we ask computer users to tell us about bad code they come across, or at least code they’re worried about. We put the info into a clearinghouse. Then, our team of researchers tests out the code. Where there’s smoke, we look closely to see if there’s fire. We work with a leading group of advisors and technology working group members to stay on the straight and narrow. And then we publish, periodically, what we come up with. The model comes from Jonathan Zittrain’s paper, The Generative Internet (required reading for those in our space, if you haven’t read it already; forthcoming in the Harvard Law Review).
Over time, we hope that computer users will come to check with us before downloading an application, at least to know a bit better what they’re getting into (and downloading anyway, if they are not troubled by our findings). And, as in several cases so far in our first few months of operations, we hope that those we write about will make adjustments to their code, on their own, according to our recommendations. We hope the net effect will be a better, safer net and computer users who come to trust, with reason, the code that they decide to run on their PCs.
* * *
Today, back briefly in Cambridge, MA, for a working session on our Digital Learning project, funded kindly by the Mellon Foundation. We’ll have a white paper coming out of this project in coming months.
Re-Reading Negroponte, Being Digital (1995)
In preparing for the final lecture of a two-day seminar that Urs Gasser and I are teaching here at the University of St. Gallen, I was going back through one of the books that got me interested in Internet law in the first place — Nicholas Negroponte’s seminal book in atom form, Being Digital (1995).
A passage that spoke to me, on p. 20: “One way to look at the future of being digital is to ask if the quality of one medium can be transposed to another. Can the television experience be more like the newspaper experience? Many people think of newspapers as having more depth than television news. Must that be so? Similarly, television is considered a richer sensory experience than what newspapers can deliver. Must that be so?
“The answer lies in creating computers to filter, sort, prioritize, and manage multimedia on our behalf — computers that read newspapers and look at television for us, and act as editors when we ask them to do so. This kind of intelligence can live in two different places.
“It can live at the transmitter and behave as if you had your own staff writers — as if The New York Times were publishing a single newspaper tailored to your interests. In this first example, a small subset of bits has been selected especially for you. The bits are filtered, prepared, and delivered to you, perhaps to be printed at home, perhaps to be viewed more interactively with an electronic display.
“The second example is one in which your news-editing system lives in the receiver and The New York Times broadcasts a very large number of bits, perhaps five thousand different stories, from which your appliance grabs a select few, depending on your interests, habits, or plans for that day. In this instance, the intelligence is in the receiver, and the dumb transmitter is indiscriminately sending all the bits to everybody.
“The future will not be one or the other, but both.”
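Negroponte’s second model — a dumb transmitter broadcasting everything while the intelligence lives in the receiver — can be sketched in a few lines. This is a toy illustration, not anything from the book: the story data and the keyword-overlap matching rule are my own assumptions.

```python
# A dumb transmitter sends every story to everybody; the smart
# receiver grabs the select few matching the reader's interests.
BROADCAST = [
    {"headline": "Parliament passes telecom bill", "topics": {"politics", "telecom"}},
    {"headline": "Local team wins championship", "topics": {"sports"}},
    {"headline": "New browser released", "topics": {"technology"}},
]

def receiver_filter(stream, interests):
    """Keep only the stories whose topics overlap the reader's interests."""
    return [story for story in stream if story["topics"] & interests]

my_paper = receiver_filter(BROADCAST, {"technology", "telecom"})
for story in my_paper:
    print(story["headline"])
```

The transmitter-side model would be the same filter run by the publisher before sending, which is why Negroponte could say the future would be both: the filtering logic is identical, and only its location differs.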
He picks up the story of the newspaper industry again, on p. 56, noting how everything is created in bit form, then pressed onto atoms. Imagine if the head of a newspaper had read Being Digital in 1995 and really listened. Maybe that’s what happened with Martin N. and co. at NYT Digital and a few others. But most clearly missed this lesson back then; I doubt many are missing it now.
Also, on copyrights, he nailed the vision of the trainwreck we’ve experienced in the late 1990s and early oughts (p. 58 ff.).
I think he gets a handful of things wrong, of course, but only at the margins — mainly the reliance on machines, rather than humans, who I still think will play a key role, as the “web 2.0” people will tell you. Even so, this book was astonishingly prescient. I’m not sure that he predicted quite the information quality problem that Urs is talking about right now, but then again, most people don’t focus on that even now.
Whether or not you first read it in 1995, it’s fun to read Being Digital today. (Then again, I learned last night from Prof. Dr. Herbert Burkert that you can only read 3,172 books in your life. I don’t know how re-reading fits into that calculation.) In any event, it’s wildly impressive as a futuristic tale.
John Bracken's Beyond Broadcast round-up
Alas, I was in DC on Friday and Saturday, so I missed the celebration and serious set of inquiries that was Beyond Broadcast. Fortunately for me and others who were not there, John Bracken, program officer at the MacArthur Foundation, has a set of highlights from the conference and its reverberations.