Hard Questions for #iLaw2011's Freedom of Information/Arab Spring Sessions

We’ve revived the iLaw program after a five-year hiatus. This year, it’s an experiment in teaching at Harvard Law School: part class (for about 125 students) and part conference (with friends from around the world here for the week). And JZ has taken the baton from Terry Fisher as our iLaw Chair.  An exciting day.

I’ve been preparing for two sessions on Day 1: “Freedom of Expression and Online Liberty” and then a case study on the Arab Spring (which will feature, among others, our colleague Nagla Rizk of the American University in Cairo). I’ve been thinking about some of the hard questions that I’m hoping we’ll take up during those sessions.

– What effect does a total shutdown of the network have on protests? I’ve been enjoying reading and thinking about this article on SSRN.  Its author, Navid Hassanpour, writes in the abstract: “I argue that … sudden interruption of mass communication accelerates revolutionary mobilization and proliferates decentralized contention.”

– We’ve assigned two chapters from Yochai Benkler’s landmark book, The Wealth of Networks (the introduction and the first 22 pages of chapter 7, which you can read freely online).  I am trying to figure out how well Yochai’s theoretical framework from a few years ago is holding up.  So far, quite well, I think.  The examples in the second of the chapters we assigned – Sinclair Broadcasting and Diebold – feel distant from the Arab Spring and Wikileaks examples that are front-of-mind today.  But the essential teachings seem to be holding up very well.  How might we add to the wiki, as it were, of WoN, knowing what we now know?  (Another way to look at this question, riffing off of something Yochai hits in his own lecture: what was the role of Al-Jazeera and other big media outlets, in combination with the amateur media and organizers?)

– We have gotten very good at studying some aspects of the Internet, as a network and as a social/political/cultural space.  We can show what the network of bloggers or Twitterers looks like in a given linguistic culture.  We can show what web sites are censored where around the world (see the ONI).  We can survey and interview people about their online (and offline) behaviors.  But lots of things move very fast online and in digital culture, and it’s hard to keep up, in terms of developing good methods and deploying them.  What are the things that we’d like to be able to know about that we haven’t yet learned how to study?  Plainly, activity within closed networks like Facebook is a problem: lots is happening there, and surveys of users can help, but we can’t do much in terms of getting at Facebook usage patterns through technology (and there are privacy problems associated with doing so, even if we could).  Mobile is another: our testing of Internet filtering, for instance, is mostly limited to the standard web-browsing/HTTP GET request type of activity (see the sketch after this list of questions).  What else do we want/need to know empirically, to understand politics, activism, and democracy in a networked world?

– How much did the demographic element — a large youth population in several Middle East/North African cultures — matter, if at all, with respect to the Arab Spring?  How important were the skills, primarily among elite youth, to use social media as part of the organizing?

– How did the online organizing of the Arab Spring mesh with the offline activism in the streets?

– How much did the regional element matter, i.e., the domino quality to the uprisings?  Does this have anything to do with use of the digital networks, shared language, and social/cultural solidarity that crossed geo-political boundaries?

– What, if anything, does the Wikileaks story have to do with the Arab Spring story?  Larry Lessig pulls them quickly together; Nagla Rizk and Lina Attalah balk at this characterization.  We’ll dig in this afternoon.

– [Student-suggested topic #1, via Twitter:] What’s the effect of the US State Department’s Internet Freedom strategy?

– [Student-suggested topic #2, via Twitter:] Does the distribution/democratization of channels of discourse undercut rather than support dissent, organizing, etc.?
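
To make the methods point above a bit more concrete: here is a minimal sketch of the standard HTTP-GET-style accessibility probe I have in mind when I say our filtering tests are mostly limited to ordinary web browsing. The URL list, the block-page keywords, and the output format are hypothetical placeholders of my own, not ONI code.

```python
# A minimal, hypothetical sketch of an HTTP GET accessibility probe.
# The URLs and block-page keywords below are placeholders, not real test lists.
import requests

TEST_URLS = ["http://example.com/", "http://example.org/news"]  # hypothetical
BLOCK_PAGE_HINTS = ["access denied", "blocked by order"]        # hypothetical

def probe(url, timeout=10):
    """Fetch a URL and classify the result from this vantage point."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.exceptions.RequestException as exc:
        return {"url": url, "result": "error", "detail": type(exc).__name__}
    body = resp.text.lower()
    if any(hint in body for hint in BLOCK_PAGE_HINTS):
        return {"url": url, "result": "block page", "status": resp.status_code}
    return {"url": url, "result": "reachable", "status": resp.status_code,
            "final_url": resp.url}

if __name__ == "__main__":
    for u in TEST_URLS:
        print(probe(u))
```

The point of the sketch is its limitation: a probe like this tells us nothing about what happens inside Facebook, over mobile networks, or on any protocol other than HTTP, which is exactly the methodological gap I am worried about.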

There’s much more to unpack, but these are some of the things in my mind…

Google in China

I’m looking forward to a day of watching the fallout from the Google-China-HK announcement yesterday. I give Google an enormous amount of credit for the approach that they are taking; it’s a worthy effort to meet what they consider their human rights obligations while seeking to engage in the China market, both of which are laudable aims. I’ll be surprised, though, if the Chinese government doesn’t decide fairly promptly to block the redirects from Google.cn to the uncensored Hong Kong site.  This chess game also demonstrates the importance of (and challenges inherent in) the work of the Global Network Initiative, of which Google is a member, along with Microsoft and Yahoo!
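
For those who want to watch this unfold themselves, one crude way to see whether the Google.cn-to-Hong-Kong redirect is still working from a given network is simply to follow the redirect chain and note where you land. A minimal sketch of that check, my own illustration rather than anything Google or the ONI publishes:

```python
# Minimal illustration: follow redirects from google.cn and report where we end up.
# Results will vary by vantage point; compare runs from inside and outside China.
import requests
from urllib.parse import urlparse

def final_destination(url="http://www.google.cn/", timeout=10):
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
    except requests.exceptions.RequestException as exc:
        return f"request failed: {type(exc).__name__}"
    hops = [r.url for r in resp.history] + [resp.url]
    return " -> ".join(urlparse(h).netloc for h in hops)

if __name__ == "__main__":
    print(final_destination())
```

If the hops end at the Hong Kong site, the redirect is intact; if the request fails or lands somewhere else, that is a hint (not proof) that the redirect is being interfered with.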

(For more info: See generally the OpenNet Initiative site, blog, research papers, and so forth online.  There’s also a chapter on this issue, written by our colleague Colin Maclay, in the forthcoming OpenNet Initiative book, Access Controlled, due out within the month from MIT Press, as there is in our previous book, Access Denied, which is available online.  Here’s a CNN piece on Google and China in which I make a cameo, one of many video clips on this topic.  And Rebecca MacKinnon’s blog is always informative on these topics.)

Green Dam Implementation Delayed in China

Xinhua is reporting that the MIIT in China has decided to delay implementation of the Green Dam Youth Escort software program. (HT: Rebecca MacKinnon, who has been doing a terrific job documenting the proposed Green Dam regulation from the start on her blog.)

Much to their credit, leaders like Commerce Secretary Gary Locke of the Obama Administration have been pushing back on the proposed Chinese regulations on trade-related grounds.  Today’s announcement from Xinhua suggests that perhaps reason has prevailed and the push-back has been effective.

Here’s hoping implementation of the Green Dam software mandate(-or-is-it?) is indefinitely delayed. The software had all manner of problems, which we at the ONI, among others, documented. And the notion of requiring, or even just “strongly encouraging,” installation of a given software program on all computers would set a terrible precedent in terms of the state’s ability to control its citizens’ communications.

NYT story on Iran Elections and Technology, with Linkage to Green Dam

The New York Times’ Brian Stelter and Brad Stone have a very thoughtful piece in the paper today about the changing role of censorship in an Internet age, with references to ONI work. The final point, made in the story by Ethan Zuckerman, draws an appropriate connection to the Green Dam story in China from a few weeks ago.

Internet & Democracy: China, Iran, the Arabic Blogosphere

These are heady days for the study of the Internet and its relationship to the practice of politics and the struggles over democratic decision-making. Three stories — in China, in Iran, and throughout the Arabic-speaking world — make a powerful case for the deepening relevance of citizens’ use of new technologies to the balance of political power around the world.

First, there was the Green Dam story. The Chinese government upped the ante in the Internet filtering business by announcing a new regulation on the providers of computer hardware. This regulation would require that new computers be shipped with the so-called Green Dam filtering software. We at the ONI released an analysis of this proposed software mandate. This story matters because having state-mandated software at the layer closest to the user would have an extraordinary chilling effect on the use of these technologies, not to mention the possibilities for censorship, surveillance, and other forms of control that such software would open up for the state. (Plus, there was an increase in censorship activity around June 4.)

Today, there is the crisis in Iran. At a moment of political upheaval, the key stories about what is happening on the ground are being told, and supplemented, by citizens using web 2.0 tools — blogs, Twitter, social networks — and on sites like Global Voices. The State Department is reportedly working with Twitter to keep the service up — and the information flowing in and out of Iran — as traditional media find themselves more constrained than in other settings. I am imagining the conversation within the intelligence and diplomatic communities, and elsewhere in politics, about the value of this discourse, and of open source intelligence in general, in these moments of crisis. If ever it were in doubt, I’d imagine today is helping to put many doubts to rest about the importance of this networked public sphere.

In the same spirit, tomorrow, we are releasing our study of the Arabic language blogosphere. The real-space, official session will take place at the United States Institute of Peace, as part of their wonderful “bullets to bytes” series. We’re delighted to have the chance to release our study with these terrific colleagues — and, together, to bust some myths about the networked public sphere in the Arabic world. The idea is to set forth a systematic, empirical study of the extraordinary public conversations we can observe in tens of thousands of blogs across the Arabic-speaking world.

What a week!

Spamdog Millionaire: Social Media Spam and Internet Filtering

Our friends at StyleFeeder have offered up some great data about the geographic sources of social media spam on their tech blog.  The background: Philip Jacob, the founder of StyleFeeder, is a long-time anti-spam advocate, while also being a careful guy who doesn’t want to ruin the Net in the process of fighting nuisance online.  At StyleFeeder, they are seeing a growing number of posts about illegal movie downloads, pharmaceuticals, and the usual spammy subjects.  Along with his colleagues, he’s developed a tool called Assassin to identify the source of the posts and get rid of them on the StyleFeeder site.  In the process, they’ve noticed that the vast majority comes from India (with the US next, Pakistan a distant third, and China weighing in over 5% in fourth place).

The rest of the post examines a familiar ONI-style question: wouldn’t it be much easier for a US-based site simply to filter out users from India, Pakistan, and China, for instance?  After all, it’s a for-profit company, with no revenues being generated through these markets.  Much to their credit, Phil and co. are taking a different path.
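
To see why the blunt approach is so tempting, here is a minimal sketch of what country-level blocking amounts to once a GeoIP lookup is in hand. Everything in it — the sample records, the country codes, and the blocklist — is a hypothetical illustration of mine, not StyleFeeder’s Assassin.

```python
# Hypothetical sketch: tally spam submissions by country, then show how little
# code a blunt country-level block would take. Not StyleFeeder's actual system.
from collections import Counter

# Pretend these records came out of a GeoIP lookup on the submitting IPs.
SPAM_RECORDS = [
    {"ip": "203.0.113.7", "country": "IN"},
    {"ip": "198.51.100.4", "country": "US"},
    {"ip": "203.0.113.9", "country": "IN"},
]  # placeholder data

BLOCKED_COUNTRIES = {"IN", "PK", "CN"}  # the blunt path the post argues against

def spam_by_country(records):
    """Count spam submissions per country code."""
    return Counter(r["country"] for r in records)

def should_block(country_code):
    """The 'easy' answer: drop every visitor from a listed country."""
    return country_code in BLOCKED_COUNTRIES

if __name__ == "__main__":
    print(spam_by_country(SPAM_RECORDS))
    print(should_block("IN"), should_block("BR"))
```

The catch, of course, is that the easy answer throws out every legitimate user in those countries along with the spammers, which is exactly the trade-off Phil and his colleagues decided not to make.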

Phil’s post ends with a great research question: “How widespread is this kind of blocking by startups who are susceptible to the armies of computer-literate Indian social media spammers? I’m wondering what other small companies do when faced with annoying users in countries that aren’t explicitly part of their target markets. If our experience is representative, this challenge may be more widespread than most people realize.”  In the ONI world, we study state-mandated Internet filtering.  It’s a dream to be able to figure out how frequently corporate actors in one part of the world are filtering content in another on their own, for simple business reasons.

(My disclosures: I hold equity in StyleFeeder and am an unpaid member of its board of advisors.)

Leaked Cisco Document: Chinese Censorship among "Opportunities"

As WIRED is reporting, a leaked Cisco presentation (online here) makes clear that, in 2002, Cisco team members saw censorship in China as an opportunity to sell equipment to the state. The presentation, in slide 57, cites what appears to be a Chinese official saying that one of the goals of Operation Golden Shield (what we call the Great Firewall of China) is to “Combat ‘Falun Gong’ evil religion and other hostiles.”

Cisco has repeatedly said that it has nothing to account for with respect to its sales to China and other places that practice Internet filtering and surveillance. This leaked document (presuming it is not a forgery; Cisco does not seem to be disclaiming it) puts that argument to rest, once and for all.

Cisco has not been involved in the public effort by Microsoft, Yahoo!, Google, and others to come up with a code of conduct for dealing with situations like these. Cisco should be involved. The fact that it is not involved, and that its response on this matter has been nothing but stonewalling, is inexcusable.

I have not been a supporter of passing a law like the Global Online Freedom Act, in its current or previous forms, because I think it would have too many unintended consequences.  But if Cisco persists in stonewalling on this topic, I think it will be necessary for the government to step in at some point with respect to sales by US technology firms to foreign governments that practice Internet censorship and surveillance in the absence of the rule of law.

And Cisco should not be hiding behind the hollow argument that its routers and switches can be used both for good purposes and for ill, when it is now clear that 1) many states around the world are using this type of equipment to violate human rights and 2) Cisco has not just made sales to such states, but in fact targeted the “opportunities” that derive from online censorship.

Turkey at the Edge

The people of Turkey are facing a stark choice: will they continue to have a mostly free and open Internet, or will they join the two dozen states around the world that filter the content that their citizens see?

Over the past two days, I’ve been here in Turkey to talk about our new book (written by the whole OpenNet Initiative team), called Access Denied. The book describes the growth of Internet filtering around the world, from only about two states in 2002 to more than two dozen in 2007. I’ve been welcomed by many serious, smart people in Ankara and Istanbul who are grappling with this issue, and to whom I’ve handed over copies of the new book — the first copies I’ve had in my hands.

This question for Turkey runs deep, it seems, from what I’m hearing. As it has been described to me, the state is on the knife’s edge, between one world and another, just as Istanbul sits, on the Bosporus, at the juncture between “East and West.”

Our maps of state-mandated Internet filtering on the ONI site describe Turkey’s situation graphically. The majority of those states that filter the net extensively lie to its east and south; its neighbors in Europe filter the Internet too, though much more selectively (Nazi paraphernalia in Germany and France, e.g., and child pornography in northern Europe; in the U.S., we certainly filter at the PC level in schools and libraries, though not on a state-mandated basis at the level of publicly-accessible ISPs). It’s not that there are no Internet restrictions in the states of Europe and North America, nor that these places necessarily have it completely right (we don’t). It’s the process for removing harmful material, the technical approach that keeps the content from viewers (or stops publishers from posting it), and the scale of information blockages that differ. We’ll learn a lot from how things turn out here in Turkey in the months to come.

An open Internet brings with it many wonderful things: access to knowledge, more voices telling more stories from more places, new avenues for free expression and association, global connections between cultures, and massive gains in productivity and innovation. The web 2.0 era, with more people using participatory media, brings with it yet more of these positive things.

Widespread use of the Internet also gives rise to challenging content along with its democratic and economic gains. As Turkey looks ahead toward the day when it joins the European Union once and for all, one of the many policy questions on the national agenda is whether and how to filter the Internet. There is sensitivity around content of various sorts: criticism of the republic’s founder, Mustafa Kemal Atatürk; gambling; and obscenity top the list. The parliament passed a law earlier in 2007 that gives a government authority a broad mandate to filter content of this sort from the Internet. To date, I’m told, about 10 filtering orders have been issued by this authority, and an additional 40 by the courts. The process is only a few months old; much remains to be learned about how this law, known as “5651,” will be implemented over time.

The most high-profile filtering has been of the popular video-sharing site, YouTube. Twice in the past few months, the authority has sent word to the 73 or so Turkish ISPs to block access, at the domain level, to all of YouTube. These blocks have been issued in response to complaints about videos posted to YouTube that were held to be derogatory toward the founder, Atatürk. The blocks have lasted about 72 hours.

After learning from the court of the offending videos, YouTube has apparently removed them, and the service has been subsequently restored. YouTube has been perfectly accessible on the connections I’ve had in Istanbul and Ankara in the past few days.

During this trip, I’ve been hosted by the Internet Association here, known as TBD, and others who have helped to set up meetings with many people — in industry, in government, in journalism, and in academia — who are puzzling over this issue. The challenges of this new law, 5651, are plain:

– The law gives very broad authority to filter the net. It places this power in a single authority, as well as in the courts. It is unclear how broadly the law will be implemented. If the authority is well-meaning, as it seems to me to be, the effect of the law may be minimal; if that perspective changes, the effect of the law could be dramatic.

– The blocks are (so far) done at the domain level, it would appear. In other words, instead of blocking a single URL, the blocks affect entire domains (a minimal sketch after this list illustrates the difference). Many other states take this approach, probably for cost or efficiency reasons. Many states in the Middle East/North Africa have blocked entire blogging services at different times, for instance.

– The system in place requires Internet services to register themselves with the Turkish authorities in order to get word of the offending URLs. This is not something that many multinational companies are going to be able or willing to do, for cost and jurisdictional reasons. Instead of a notice-and-takedown regime for these out-of-state players, there’s a system of shutting down the service and restoring it only after the offending content has been filtered out.
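
To make the first of those points concrete, here is a minimal sketch of the difference between blocking a single URL and blocking at the domain level. The block-list entries are invented for illustration; the point is simply how much collateral content a domain-level rule sweeps in.

```python
# Illustrative only: URL-level blocking matches one exact page; domain-level
# blocking sweeps in everything served from the same host.
from urllib.parse import urlparse

BLOCKED_URLS = {"http://www.youtube.com/watch?v=OFFENDING_ID"}  # hypothetical entry
BLOCKED_DOMAINS = {"www.youtube.com"}                            # domain-level rule

def blocked_by_url(url):
    return url in BLOCKED_URLS

def blocked_by_domain(url):
    return urlparse(url).netloc in BLOCKED_DOMAINS

if __name__ == "__main__":
    for u in ["http://www.youtube.com/watch?v=OFFENDING_ID",
              "http://www.youtube.com/watch?v=UNRELATED_VIDEO"]:
        print(u, "| url rule:", blocked_by_url(u), "| domain rule:", blocked_by_domain(u))
```

With the URL rule, only the offending page is unreachable; with the domain rule, every video on the site disappears for every user behind the filter, which is what the 72-hour YouTube blackouts here have looked like.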

* * *

The Internet – especially in its current phase of development – is making possible innovation and creativity in terms of content. Today, simple technology platforms like weblogs, social networks, and video-sharing sites are enabling individuals to have greater voice in their societies. These technologies are also giving rise to the creation of new art forms, like the remix and the mash-up of code and content. Many of those who are making use of this ability to create and share new digital works are young people – those born in a digital era, with access to high-speed networks and blessed with terrific computing skills, called “digital natives” – but many digital creators are grown-ups, even professionals.

Turkey is not alone in how it is facing this challenge. The threat of “too much” free expression online is leading to more Internet censorship in more places around the world than ever before. When we started studying Internet censorship five years ago, along with our colleagues in the OpenNet Initiative (from the Universities of Toronto, Cambridge, and Oxford, as well as Harvard Law School), there were a few places – like China and Saudi Arabia – where the Internet was censored.

Since then, there’s been a sharp rise in online censorship, and its close cousin, surveillance. About three dozen countries in the world restrict access to Internet content in one way or another. Most famously, in China, the government runs the largest censorship regime in the world, blocking access to political, social, and cultural critique from its citizens. So do Iran, Uzbekistan, and others in their regions. The states that filter the Internet most extensively are primarily in East Asia, the Middle East and North Africa, and Central Asia.

* * *

Turkey’s choice couldn’t be clearer. Does one choose to embrace the innovation and creativity that the Internet brings with it, albeit along with some risk of people doing and saying harmful things? Or does one start down the road of banning entire zones of the Internet, whether Web sites or new technologies like peer-to-peer services or live videoblogging?

In Turkey, the Internet has to date been largely free from government controls. Free expression and innovation have found homes online, in ways that benefit culture and the economy.

But there are signs that this freedom may be nearing its end in Turkey, through 5651 and how it is implemented. These changes come just as the benefits to be reaped are growing. When the state chooses to ban entire services for the many because of the acts of the few, the threat to innovation and creativity is high. Those states that have erected extensive censorship and surveillance regimes online have found them hard to implement with any degree of accuracy and fairness. And, most costly of all, the chilling effect on citizens who rely on the digital world for their livelihood and key aspects of their culture – in fact, for the ability to remake their own cultural objects, the notion of semiotic democracy – is a high price to pay for control.

The impact of the choice Turkey makes in the months to come will be felt over decades and generations. Turkey’s choice also has international ramifications. If Turkey decides to clamp down on Internet activity, it will be lending aid to those who seek to see the Internet chopped into a series of local networks – the China Wide Web, the Iran Wide Web, and so forth – rather than continuing to build a truly World Wide Web.

OpenNet Initiative on What Really Happened in Burma

Over the last few weeks, we’ve all witnessed the extraordinary bravery of protesters in Burma (or Myanmar, depending on whom you ask) and the great lengths to which the military junta has been willing to go to keep the world from knowing much about what was going on there. Many reported the story of how the junta “shut off” the Internet before it carried out some of the worst acts in the process of suppressing the demonstrations. The ONI is today releasing a careful technical review that describes what in fact the military junta did, set in the context of the demonstrations and the state’s history of Internet filtering. Stephanie Wang led the writing, and Shishir Nagaraja conducted the technical analysis. It’s only the second time, after Nepal in 2005, that a state has sought to shut off access to the Internet altogether. The story of what the junta did, how, and when makes for fascinating, and upsetting, reading for anyone with an interest in the relationship between the Internet and democracy or the burgeoning citizens’ media movement.