SOPA and our 2010 Circumvention Study

Daniel Castro of the Information Technology and Innovation Foundation recently published a paper supporting the Stop Online Piracy Act (SOPA) currently being debated in Congress. In that report, he claims that research we performed supports the domain name system (DNS) filtering mechanisms mandated by SOPA. This claim is a distortion of our work. We disagree with the use of our study to argue that DNS-based Internet filtering works and should therefore be used to stop websites from distributing copyrighted content. The data we collected answer a completely different set of questions in a completely different context.

Among other provisions that seek to control the sharing of copyrighted material on the Internet, SOPA, if enacted, would call upon the U.S. government to require that Internet service providers remove from their DNS servers the names of any sites that either infringe copyright directly or merely “facilitate” copyright infringement.  So, for example, the government could require that ISPs remove the name “twitter.com” from their DNS servers if twitter.com was not being sufficiently aggressive in preventing its users from tweeting information about places to download copyrighted materials.  This practice is known as DNS filtering.  DNS filtering is one of the most common modes of Internet-based censorship.  As we and our collaborators in the OpenNet Initiative have shown over the past decade, practices of this sort are used extensively in autocratic countries, including China and Iran, to prevent access to a range of sites offensive to the governments of those countries.
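To make the mechanism concrete, here is a minimal, purely illustrative sketch of what DNS filtering amounts to at an ISP's resolver. The domain names, addresses, and blocklist below are invented for this example; real resolvers involve caching, recursion, and much more.

```python
# Purely illustrative sketch of DNS filtering at an ISP resolver.
# All names, addresses, and the blocklist are invented for this example.

BLOCKLIST = {"example-infringing-site.com"}  # names the ISP is ordered to drop

DNS_RECORDS = {
    "example-infringing-site.com": "203.0.113.7",
    "example-lawful-site.org": "198.51.100.42",
}

def resolve(hostname):
    """Return an IP address, or None (as if the name did not exist) when filtered."""
    if hostname in BLOCKLIST:
        return None  # the resolver pretends the name does not exist
    return DNS_RECORDS.get(hostname)

print(resolve("example-infringing-site.com"))  # None: the name is filtered
print(resolve("example-lawful-site.org"))      # 198.51.100.42
```

Note that the filtered site itself remains online; only the name lookup fails, which is why switching to an unfiltered resolver defeats this kind of block.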

Opponents of SOPA have argued that DNS filtering, even though it will have a number of harmful effects on the technical and political structure of the Internet, will not be effective in preventing users from accessing blocked sites. Mr. Castro cites our research as evidence that SOPA’s mandate to filter DNS will be effective. He quotes our finding that at most 3% of users in certain countries that substantially filter the Internet use circumvention tools and asserts that “presumably the desire for access to essential political, historical, and cultural information is at least equal to, if not significantly stronger than, the desire to watch a movie without paying for it. Yet only a small fraction of Internet users employ circumvention tools to access blocked information, in part because many users simply lack the skills or desire to find, learn and use these tools.”

In our report, we looked at three sets of censorship circumvention tools: complex, client-based tools like Tor; paid VPNs; and web proxies. We estimated usage of those three classes of tools using reports from the client tool developers, a survey of VPN operators, and data from Google Analytics for the web proxy tools. Counting all three classes, we estimated as many as 19 million users of circumvention tools per month. Given the large number of Internet users in China, Iran, Saudi Arabia, and other states where filtering is endemic, this represents a fairly small percentage: 19 million people is about 3% of the users in countries where Internet filtering is pervasive. We actually believe that 3% figure is high, as some of the tools we studied are used by people in open societies to evade corporate or university firewalls, not just to evade government censorship.
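As a back-of-envelope check, the arithmetic implied by the paragraph above can be sketched as follows; the denominator is an assumed round number chosen to make the percentage come out right, not a figure from the study.

```python
# Rough arithmetic behind the 3% estimate. The denominator is an
# assumed illustrative figure, not a number taken from the study.
tool_users = 19_000_000                     # estimated monthly circumvention-tool users
users_in_filtering_countries = 630_000_000  # assumed Internet users in pervasively filtering states
share = tool_users / users_in_filtering_countries
print(f"{share:.1%}")  # about 3%
```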

We stand behind the findings in our study (with reservations that we detail in the paper), but we disagree with the way that Mr. Castro applies our findings to the SOPA debate. His presumption that people will work as hard or harder to access political content than they do to access entertainment content deeply misunderstands how and why most people use the Internet. Far more users in open societies use the Internet for entertainment than for political purposes; it is unreasonable to assume different behavior in closed societies. Our research offers the depressing conclusion that comparatively few users seek out blocked political information, and it suggests that the governments most successful in blocking political content ensure that entertainment and social media content remain widely available online, precisely because users get much more upset about losing the ability to watch movies than about the blocking of specific pieces of political content.

Rather than using circumvention tool usage in closed societies to predict the behavior of a given user base, Mr. Castro would do better to consider the massive user base of tools like BitTorrent clients, which make for a far cleaner analogy to the problem at hand. Likewise, the long line of very popular peer-to-peer sharing tools that have been incrementally redesigned to circumvent the technical and political measures used to prevent sharing of copyrighted materials is a stronger analogy than our study of users in authoritarian regimes seeking access to political content.

Second, our research has consistently shown that those who really wish to evade Internet filters can do so with relatively little effort. The problem is that these activities can be very dangerous in certain regimes. Even though our research shows that relatively few people in autocratic countries use circumvention tools, this does not mean that such tools are not crucial to the dissident communities in those countries. Nineteen million people is not large relative to the population of the Internet, but in absolute terms it is a lot of people with freer access to the Internet through these tools. We personally know many people in autocratic countries for whom these tools provide a crucial (though not perfect) layer of security for their activist work. Those people would be at much greater risk than they already are without access to the tools; yet in addition to mandating DNS filtering, SOPA would make many circumvention tools illegal. The single biggest funder of circumvention tools has been, and remains, the U.S. government, precisely because of the role the tools play in online activism. It would be highly counterproductive for the U.S. government to both fund and outlaw the same set of tools.

Finally, our decade-long study of Internet filtering and circumvention has documented the many problems associated with Internet filtering, not its overall effectiveness. DNS filtering is by necessity either overbroad or underbroad: it blocks either too much or too little. Content on the Internet changes its place and nature rapidly, and DNS filtering cannot keep up. Worse, especially from a First Amendment perspective, DNS filtering ends up blocking access to enormous amounts of perfectly lawful information. We strongly resist the claim that our research, and that of our collaborators, makes the case in favor of DNS-based Internet filtering.

Links:

Mr. Castro’s report may be found here:

http://www.itif.org/publications/pipasopa-responding-critics-and-finding-path-forward

with the reference to our work on p. 8.

The study that is being misused by Mr. Castro is here:

http://cyber.law.harvard.edu/publications/2010/Circumvention_Tool_Usage

The findings of our decade-long studies are documented in three books, published by MIT Press and available freely online in their entirety at:

http://access.opennet.net/

– Rob Faris, John Palfrey, Hal Roberts, Jillian York, and Ethan Zuckerman

DDoS Report, in the Wake of Wikileaks, Cablegate, and Anonymous

The Wikileaks/Cablegate story has long-term implications for global society on many levels. (See JZ’s excellent FAQ on Wikileaks, co-developed with Molly Sauter.) One is our shared understanding of the Distributed Denial of Service (DDoS) attack phenomenon. The incidence of DDoS has been growing in recent years, and it links up with important threads that emerged from our OpenNet Initiative work studying the ways in which states and others exert measures of control over the open Internet. (Consider, for instance, the ONI reports on Belarus and Kyrgyz election monitoring, which broke new ground on DDoS a few years ago, led primarily by our ONI partners Rafal Rohozinski, Ron Deibert, and their respective teams.)

We are issuing a new report on DDoS today, which we hope will help to put some of these issues into perspective.  For an excellent blog entry on it, please see my co-author Ethan Zuckerman’s post.

After the initial publication of State Department cables, Wikileaks reported that its web site became subject to a series of DDoS attacks that threatened to bring it down. These attacks are simple in concept: multiple computers from around the world request access to the target web site in sufficient numbers to make the site “crash.” It turns out to be hard for most systems administrators to defend against such an attack, and relatively easy to launch one. Computers that have been compromised through the spread of computer viruses are available for “rent” to launch such attacks. In the study we are releasing this morning, we found instances where the “rent” of these machines is suggested by the round numbers of attacking machines and the precise durations of the attacks.
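The concept described above can be illustrated with a toy model: a server has finite capacity, and enough simultaneous requests, from however many sources, will exceed it. Every number in this sketch is invented purely for illustration.

```python
# Toy model of a DDoS: overwhelm a site's finite request capacity.
# Every number below is invented purely for illustration.

SERVER_CAPACITY = 1_000  # requests per second the site can handle

def is_overwhelmed(requests_per_second):
    return requests_per_second > SERVER_CAPACITY

normal_traffic = 200    # ordinary visitors per second
botnet_size = 10_000    # compromised machines "rented" for the attack
requests_per_bot = 5    # requests each machine sends per second

attack_load = normal_traffic + botnet_size * requests_per_bot
print(is_overwhelmed(normal_traffic))  # False: the site handles ordinary load
print(is_overwhelmed(attack_load))     # True: the site "crashes"
```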

In the face of these attacks, Wikileaks decided to move its web site to safer ground.  Large-scale web hosts, particularly “cloud computing” service providers, can resist DDoS attacks.  Wikileaks did what one might reasonably suggest to, say, a small human rights organization in an authoritarian regime, where they fear attack from the state or others.  Wikileaks moved to the Amazon.com cloud.  Shortly thereafter, apparently in the face of pressure, Amazon decided to stop serving Wikileaks’ web site, and cut them off.  Wikileaks found a “James Bond-style” bunker in Sweden which agreed to host them — presumably despite pressure to take the site down.

The DDoS story took another major turn in the Wikileaks narrative when Anonymous launched a series of attacks on sites perceived to have been unhelpful to Wikileaks in the post-Cablegate aftermath. These DDoS attacks raised the specter of cyberwarfare, much discussed in policy circles but all of a sudden on the front page of major newspapers. People I’ve talked to, depending on political viewpoint and other factors, seemed to see these retribution DDoS attacks as different in their implications from the initial DDoS attacks on Wikileaks itself.

There have been relatively few studies of DDoS as an empirical or a policy matter. We are releasing a report today (which I’ve co-authored with Hal Roberts, Ethan Zuckerman, Jillian York, and Ryan McGrady) that describes DDoS and makes a series of recommendations in light of what we’ve found. It’s funded by a generous grant from OSI. Regardless of whether you consider DDoS to be criminal behavior, the next wave in cyberwarfare, an acceptable form of protest, or all of the above, we hope you’ll read and give feedback on the report.

Google in China

I’m looking forward to a day of watching the fallout from the Google-China-HK announcement yesterday. I give Google an enormous amount of credit for the approach they are taking; it’s a worthy effort to meet what they consider their human rights obligations while continuing to engage in the Chinese market, both of which are laudable goals. I’ll be surprised, though, if the Chinese government doesn’t decide fairly promptly to block the redirects from Google.cn to the uncensored Hong Kong site. This chess game also demonstrates the importance of (and challenges inherent in) the work of the Global Network Initiative, of which Google is a member, along with Microsoft and Yahoo!

(For more info, see generally the OpenNet Initiative site, blog, research papers, and so forth online. There’s also a chapter on this issue, written by our colleague Colin Maclay, in the forthcoming OpenNet Initiative book Access Controlled, due out within the month from MIT Press, as well as one in our previous book, Access Denied, available online. Here’s a piece in which I make a cameo on CNN on Google and China, one of many video clips on this topic. And Rebecca MacKinnon’s blog is always informative on these topics.)

NYT story on Iran Elections and Technology, with Linkage to Green Dam

The New York Times’ Brian Stelter and Brad Stone have a very thoughtful piece in the paper today about the changing role of censorship in an Internet age, with references to ONI work. The final point, made in the story by Ethan Zuckerman, draws an appropriate connection to the Green Dam story in China from a few weeks ago.

ONI Releases Green Dam Software Analysis

At the OpenNet Initiative, we’ve spent much of this week looking hard at the Chinese Green Dam software that the state is asking all PC manufacturers to ship with their hardware. The analysis highlights — and confirms — a variety of problems with the software.

As we argue in this ONI Bulletin, this announcement is a big deal and augurs poorly for the development of the Internet and its usage in China. “As a policy decision, mandating the installation of a specific software product is both unprecedented and poorly conceived. In this specific instance, the mistake is compounded by requiring the use of a substandard software product that interferes with the performance of personal computers in an unpredictable way, killing browsers and applications without warning while opening up users to numerous serious security vulnerabilities. The level of parental control over the software is poor, such that this software does not well serve parents who wish to limit the exposure of their children to Internet content.”

Spamdog Millionaire: Social Media Spam and Internet Filtering

Our friends at StyleFeeder have offered up some great data about the geographic sources of social media spam on their tech blog. The background: Philip Jacob, the founder of StyleFeeder, is a long-time anti-spam advocate, while also being a careful guy who doesn’t want to ruin the Net in the process of fighting nuisance online. At StyleFeeder, they are seeing a growing number of posts about illegal movie downloads, pharmaceuticals, and the usual spammy subjects. Along with his colleagues, he’s developed a tool called Assassin to identify the source of the posts and get rid of them on the StyleFeeder site. In the process, they’ve noticed that the vast majority of this spam comes from India (with the US next, Pakistan a distant third, and China weighing in over 5% in fourth place).

The rest of the post examines a familiar ONI-style question: wouldn’t it be much easier for a US-based site simply to filter out users from India, Pakistan, and China, for instance?  After all, it’s a for-profit company, with no revenues being generated through these markets.  Much to their credit, Phil and co. are taking a different path.

Phil’s post ends with a great research question: “How widespread is this kind of blocking by startups who are susceptible to the armies of computer-literate Indian social media spammers? I’m wondering what other small companies do when faced with annoying users in countries that aren’t explicitly part of their target markets. If our experience is representative, this challenge may be more widespread than most people realize.” In the ONI world, we study state-mandated Internet filtering. It’s a dream to be able to figure out how frequently corporate actors in one part of the world are filtering content in another on their own, for simple business reasons.

(My disclosures: I hold equity in StyleFeeder and am an unpaid member of its board of advisors.)

Leaked Cisco Document: Chinese Censorship among "Opportunities"

As WIRED is reporting, a leaked Cisco presentation (online here) makes clear that, in 2002, Cisco team members saw censorship in China as an opportunity to sell equipment to the state. The presentation, in slide 57, cites what appears to be a Chinese official saying that one of the goals of Operation Golden Shield (what we call the Great Firewall of China) is to “Combat ‘Falun Gong’ evil religion and other hostiles.”

Cisco has repeatedly said that it has nothing to account for with respect to its sales to China and other places that practice Internet filtering and surveillance. This leaked document (presuming it is not a forgery; Cisco does not seem to be disclaiming it) puts that argument to rest, once and for all.

Cisco has not been involved in the public effort by Microsoft, Yahoo!, Google, and others to come up with a code of conduct for dealing with situations like these. Cisco should be involved. The fact that it is not, and that its engagement with this matter has been nothing but stonewalling, is inexcusable.

I have not been a supporter of passing a law like the Global Online Freedom Act in its current or historic form, because I think it would have too many unintended consequences.  But if Cisco persists in stonewalling on this topic, I think it’s necessary for the government to jump in at some point with respect to sales by US technology firms to foreign governments that practice Internet censorship and surveillance in the absence of a rule of law.

And Cisco should not hide behind the hollow argument that its routers and switches can be used both for good purposes and for ill, when it is now clear that 1) many states around the world are using this type of equipment to violate human rights, and 2) Cisco has not just made sales to such states, but in fact targeted the “opportunities” that derive from online censorship.

Testimony on Internet Filtering and Surveillance

Mister Chairman, distinguished members of the Committee:

I would like to offer my deep appreciation for the Committee’s interest in this important matter. Congressional engagement is an important factor in deepening understanding of the nexus between global Internet freedom and corporate responsibility, and an essential element for ensuring that the Internet continues on its path towards becoming an ever-greater force for democratic participation and human rights advancement worldwide.

My name is John Palfrey. I teach Internet law at Harvard Law School. My primary research interest is in examining issues related to the Internet and democracy. I am also Executive Director of the Berkman Center for Internet and Society. Of relevance to this hearing, I am a Principal Investigator of the OpenNet Initiative (ONI), a project based at the University of Toronto, the University of Cambridge, the Oxford Internet Institute, and Harvard Law School, that has been conducting research and analysis of Internet censorship, filtering, and surveillance practices worldwide. I submit this testimony along with my colleague, Colin Maclay, Managing Director of the Berkman Center. Together with other great colleagues at Berkman, we have spent over two years on a multi-stakeholder effort—involving companies, non-profits, socially responsible investors, and other academics—to develop principles and associated implementation measures for technology companies seeking to protect and advance privacy and free expression worldwide.

The strides made through this initiative—engaging a range of parties, deepening understanding of the complexity of the issues for each stakeholder, and working towards a viable solution—have been encouraging. I would urge you to support the recommendations generated by this process, in lieu of strong legislation at this time. As this testimony will demonstrate, due to the dynamic nature of the ICT sector and the complexities of the existing regulatory environment, legal regimes cannot adequately address the dilemmas posed by the rise of global filtering, censorship, and surveillance practices worldwide, and are unlikely to be capable of doing so in the near term. Furthermore, the proposals currently being considered could be harmful in the long run, by forcing organizations out of foreign countries altogether or by requiring them to break local laws. At this moment of dynamic change, it would be premature to act now with blunt legislation. Rather, there are several activities which the US government could support and contribute to, such as constructive policy engagement, collaborative learning, multi-stakeholder input and commitment, further technological innovation, and user empowerment, that could have immediate impact not only on our understanding of the landscape, but on our ability to positively contribute to protecting the human rights that are at risk. Furthermore, with practical implementation and global acceptance, the principles that arise from this multi-stakeholder initiative may merit codification by Congress in the relatively near future.

Current State of Affairs and Trends

Since I last testified in February 2006 before the House Subcommittee on Africa, Global Human Rights, and International Operations and the Subcommittee on Asia and the Pacific, and the Congressional Human Rights Caucus, the prevalence of Internet censorship has continued to grow in scope and in depth. Our research through the ONI has identified over two dozen states actively filtering Internet content, up from a handful five years ago. As access to information and communications technologies (ICTs) increases further, this trend seems likely to continue.

Technological innovations have fueled the expansion of Internet filtering and censorship, enhancing their sophistication and consequently creating troubling implications for human rights. Recent research suggests that several countries are investing in technologies that increase their capacity to target specific web pages, information sources, and applications. Surveillance technologies are likewise advancing, offering states expanded opportunities to eavesdrop on the communications of their citizens. Meanwhile, systems for storing and analyzing data continue to decline in cost, which allow governments to extract new information from existing data originally collected for other purposes.

A related and significant development is the growth of social media (including video and photo-sharing sites such as YouTube and Flickr among others), which significantly amplifies—and further complicates— unresolved tensions concerning content control. As these platforms are combined with other emerging technologies for content analysis, new censorship and privacy concerns will emerge.

Conflicts between differing expectations of privacy, data retention laws and practices, in addition to divergent approaches to traditional telecommunications and Internet communications regulation, give rise to increasingly hard problems. For example, Internet filtering and surveillance involves hardware providers, software providers, and service providers, and US firms are not the only companies offering these products and services. These factors remind us that issues of Internet freedom are part of a much larger policy and technology ecosystem, and require care accordingly.

The Corporate Dilemma

With over a billion people on the Net and about half the world with a mobile phone, more people than ever are using digital technologies and integrating them deeply into their lives and livelihoods. Governments are ever more cognizant of the double-edged sword that technology represents— as both a tool to foster economic growth and competitiveness, and as a potential threat to government sovereignty and power. As governments seek to control information and online activities, private actors, including ICT-related firms, are increasingly called upon to assist in carrying out those efforts.

In our recent book with our ONI partners, Access Denied: The Practice and Policy of Global Internet Filtering, we proposed a taxonomy that describes various types of companies and their involvement in these practices. We identified ICT firms as hardware providers, software providers, online service providers, online publishers, telecommunications providers, and other content providers. Describing them in terms of function, we characterized their activities as direct sales to governments of software and services for filtering online content and for surveillance; direct sales to governments of dual-use technology for similar purposes; and offering a service that is subject to censorship, that censors publications, or that requires personal information that could be subject to surveillance. Considering these companies functionally is a useful way to examine their activities.

In past hearings, proposed legislation, and the public eye, perhaps the greatest focus has been placed on the activities of the most visible and widely known companies—those in the third category, offering online services. These companies, including Google, Microsoft, and Yahoo!, have shown sustained interest in resisting government demands to assist with censorship and surveillance, and a desire to engage proactively in developing strategies to address the human rights challenges they face. It is important to note that for each of these companies, a core business goal is to provide access to high-quality and secure information and communications services, and that their incentives are thus better aligned with the interests of their users than those of repressive governments.

Within this landscape, it is important not to neglect the companies selling software and hardware directly to governments, as they too form an important layer of the censorship and surveillance ecosystem, and have thus far been relatively silent on these issues. In addition, there are a host of other US businesses that use the Internet to transmit data across borders —from banking and other financial services, technology licensing, news media, and hotel services— each of which may come into contact with government policies on free expression and privacy as they operate in different countries and across jurisdictions. In this testimony, we focus primarily on those who provide online services, because that is where we can lend the greatest insight, precisely because these companies have been willing to jointly explore the obstacles they face.

Conflicting Law and Dual-Purpose Technologies

Mapping digital technologies onto the governance gaps created by globalization—and identified in the fine work of John Ruggie, our colleague at the Harvard Kennedy School and the UN Special Representative on Business and Human Rights— creates multiple conflicting legal and normative regimes for companies to navigate. Governments may regard companies providing online services to their citizens as similar to their own national media and telecommunications companies—and therefore subject to the same expectations—regardless of the law of the company’s country, its market orientation, or its physical presence in the country. They may expect these companies to adhere to laws and social norms about content parameters (ranging from intellectual property to pornography and national security), and to provide personal information about their users when requested for law enforcement purposes. Some governments have also shown a lack of understanding of how the Internet works—and what is realistically under the control of a company, and what, such as user-generated content, is not.

Companies face a huge challenge as they seek to separate legitimate state requests from those that would require them to abridge human rights. For example, they must discern the difference between claims related to ongoing criminal cases, including kidnapping, terrorist threats, or child pornography, and those that seek to limit fundamental rights by stopping the flow of relevant public information or staunching peaceful political opposition. Thus, a priority must be the creation of effective internal systems, to enable thoughtful assessments of these types of requests, and to ensure that their responses are nuanced and appropriate, protective of the rights of specific citizens in addition to the rights to expression and privacy.

Once a company comes to a decision regarding the legitimacy of the request, it must also consider the consequences of complying or not complying. Acquiescence to illegitimate requests may cause them to jeopardize their social and economic values by abridging core human rights. They may also incur risks such as losses in user confidence, brand identity, profit, and employee satisfaction, as well as the threat of legal (including shareholder) action. However, choosing to push back or initiate legal action can also generate risks. In choosing to resist law enforcement demands, companies may endanger operating licenses and institutional relationships, and more importantly, the potential safety of their employees on the ground. In the case of ill-chosen resistance, the risk can be broader, extending to public safety and beyond.

Public Awareness, Pressure, and Understanding

Public awareness of these issues continues to grow. High profile violations of the rights to expression and privacy, shareholder actions, human rights campaigns, academic analysis, and Congressional interest have kept the pressure on. Companies are increasingly aware that the challenges they face are real and lasting and require a concerted and sustained effort in order to confront them effectively. The value of this rising awareness, however, will be greatest if accompanied by a deep understanding of the issues, so as to create robust and lasting solutions.

The cases that attract public attention are often extreme examples of the challenges ICT companies face. For example, China’s censorship, manipulation, and detention practices are a real and immediate danger. However, the associated media coverage does not span the range of issues but instead directs public attention to the problems that are the most straightforward to address. High-profile cases are deeply unsettling at best, but they are closer to the sharp and menacing tip of the iceberg rising above the waterline than to the substantial and complicated dangers lying below it. The threat to digital expression and privacy is global and extends well beyond what is commonly reported, and the practices of any one state should not dominate our understanding of and approach to solutions. We must address the complexities of these issues that lie beyond the public eye, and bring them to light with greater transparency and accurate data. From that understanding, we have a much stronger platform upon which to develop solutions that engage the wide range of stakeholders necessary to effect change.

Constructive Engagement

Despite the substantial human rights challenges that the ICT sector faces, the continued presence and constructive engagement of technology companies in these markets is critical. The tools and services offered by ICT companies bring social, economic, and political value through increased information and communication and through improved business and cross-cultural connections. They also hold great promise for international development. Furthermore, American businesses can influence positively the practice of government and local businesses, bring greater transparency to interactions that are often opaque, and provide a continued platform for informed government-to-government and government-to-individual exchanges. A collaborative approach in which stakeholders create principles for operating in such regimes will, over time, generate opportunities for mutual learning, respectful exchange of views, and more effective solutions.

Conversely, the disengagement of these stakeholders from foreign markets through legislative action would likely not improve the situation. Competitors to US companies are on the rise, and placing limitations on the engagement of US firms in these markets runs a very real risk of simply handing those markets to other companies that may be less open to constructive influence and may have a lower commitment to human rights. Thus, rather than focusing on limiting opportunities for US corporate activities, it is important to address challenges to privacy and free expression so as to have a positive and sustained global impact on the behavior of companies based both in the US and around the world, as well as on the regulatory environment in which these companies operate overseas.

In an industry in which rapid change, innovation, and evolution dictate that these dilemmas will remain a moving target, subject to shifting technologies, business models, regulations, and politics, the creation of an adaptive platform is essential. These multi-faceted scenarios suggest the wisdom of establishing a collaborative forum for multiple stakeholders — including government, nonprofits, academics, and business — to come together for learning, coordinated action, increased transparency, innovation, and enhanced channels of communication, and to promote a nuanced understanding that will benefit all stakeholders. This process has been started, and it would benefit from broad support.

Recommendations on a Starting Point

Over the past two years, in partnership with the Center for Democracy and Technology and Business for Social Responsibility, in addition to other academic institutions, human rights groups, socially-responsible investors, and leading ICT firms, the Berkman Center has been involved in a collaborative initiative designed to identify solutions to the problems related to freedom of expression and privacy online.

As the Committee recognizes, these matters are complex. After two years of deliberation and study, we understand more clearly the nuances and complexities of the issues. However, we are still far from defining solutions to these growing challenges. Furthermore, we believe that legislative action now that would prescribe what US companies can and cannot do overseas would be premature and potentially damaging to the long-term objective of promoting greater freedom online.

This process represents a promising way forward, one that we believe will ultimately inform legislation and serve as a productive means of interaction with government. It calls on companies to develop a dynamic, principles-based approach to ensuring that they operate ethically, consistently, and strategically (for human rights advancement) in these charged contexts, with an emphasis on strong internal rights-focused processes that are supported and informed by group collaboration. While the Principles, Implementation Guidelines, and governance structure are not yet finalized, we expect agreement and the initiation of collaboration to take place in fall 2008.

It is important that any legislation not be drawn so broadly as to attempt to confront every issue and every actor with one set of rules; neither, however, should the law address one set of issues and ignore the others. A better approach is to promote the learning and deeper understanding that would lay the foundation for future legislation, ideally in conjunction with the aforementioned Principles process.

If the Principles that are currently being developed in the context of the multi-stakeholder process are implemented, grow in stability, and gain acceptance, they will be a good basis for future legislation to codify and bolster the norms that emerge.

We offer the following recommendations for your consideration, many of which have emerged from the Principles initiative:

1. Support Research, Learning and Awareness

Contribute knowledge and resources to improve understanding of online censorship, filtering, and surveillance practices. Facilitate the preparation of annual human rights reports that include assessments of the risks to freedom of expression and privacy with respect to ICT. Fund research into relevant legal regimes, events, and trends in Internet freedom, and make the results publicly accessible.

2. Create Alternative Paths

Fund and promote the development and dissemination of innovative technologies that promote Internet freedom. Contribute to education and awareness regarding online security.

Explore options for structured cooperation with foreign law enforcement by creating or adhering to a recognized, standardized and streamlined process for legitimate requests for information from US companies, such that companies have guidance on the appropriate course of action, and pressure on companies to physically locate data in certain jurisdictions is mitigated.

3. Build Partnerships and Enhance Coordination

Create regular opportunities for open exchange between the ICT sector, human rights organizations, academic researchers, and the US government. Consistently and strategically raise concerns about surveillance and censorship in appropriate international bi- and multi-lateral fora.

4. Create Incentives

The current multi-stakeholder initiative is a promising near-term approach to understanding and addressing the challenges faced by US companies providing services internationally via the Internet. The US government can best assist this effort by providing incentives to cooperate with this multi-stakeholder effort, and should avoid legal restrictions or penalties that could discourage cooperation.

Promote the compilation and sharing of information. Facilitate the sharing of information by companies on threats to free expression and privacy. Assist companies in tracking threats to free expression and privacy.

Recognize and reward legal, practical, organizational and technical progress on these issues by countries, companies and other innovators.

5. Lead the Way

The US government can help to facilitate change in policy regimes worldwide by closely examining our own regime and then sharing resources with other countries willing to follow our lead.

Identify and address inconsistencies in US policy including privacy, data retention, surveillance, anonymity and speech, recognizing that a holistic US policy framework informs related approaches in other nations.

Assist countries in clarifying and improving their policy regimes with respect to ICT generally, and privacy and expression specifically.

6. Foster Transparency

In order to address fully the challenges in this sphere, we should encourage companies to be more transparent about the impact of their policies and practices on rights of privacy and freedom of expression. There are a number of ways that these companies can make their actions more transparent to users, more protective of civil liberties, and more accountable to all of us.

Encourage US companies to inform users about content restrictions or threats to privacy in a clear and timely manner, recognizing legal restrictions.

7. Codify the Principles

To the extent that the multi-stakeholder Principles initiative leads to a workable solution, the US Congress should consider legislating this approach over time, much as Congress did with regard to the Sullivan Principles.

Conclusion

The Internet has the capacity to foster active and participatory democracies around the world, and to advance and protect the human rights of expression and privacy. The rise of filtering, censorship, and surveillance practices worldwide has profound implications for the global development, proliferation and health of democratic values—such as privacy, access to information, participation, freedom of expression, and other human rights. Because the Internet is a truly global network that shows no sign of slowing down, the ramifications of restrictions within the online space should be of paramount concern to US policy-makers, and should inform their relationships and negotiations with governments worldwide. We support Congress’ laudable effort to improve understanding of these important and timely issues.

There are significant challenges and complex ethical dilemmas across this landscape for corporations, governments, and users. At this relatively early stage of our understanding, any legislative approach should support adaptive, realistic, and engagement-oriented efforts by companies operating in these contexts. We must buttress this legislative approach with increased knowledge, communication, study, and coordination to help turn back threats to human rights. Ultimately, while the measures we and others have offered will hopefully increase Internet freedom, the only truly reliable way to reduce excessive filtering and inappropriate surveillance is via a change of policy within the countries where this occurs.

Written testimony of John Palfrey with Colin Maclay, May 20, 2008, to the US Congress.

Turkey at the Edge

The people of Turkey are facing a stark choice: will they continue to have a mostly free and open Internet, or will they join the two dozen states around the world that filter the content that their citizens see?

Over the past two days, I’ve been here in Turkey to talk about our new book (written by the whole OpenNet Initiative team), called Access Denied. The book describes the growth of Internet filtering around the world, from only about two states in 2002 to more than two dozen in 2007. I’ve been welcomed by many serious, smart people in Ankara and Istanbul who are grappling with this issue, and to whom I’ve handed over copies of the new book — the first copies I’ve had my hands on.

This question for Turkey runs deep, it seems, from what I’m hearing. As it has been described to me, the state is on the knife’s edge, between one world and another, just as Istanbul sits, on the Bosporus, at the juncture between “East and West.”

Our maps of state-mandated Internet filtering on the ONI site describe Turkey’s situation graphically. The majority of those states that filter the net extensively lie to its east and south; its neighbors in Europe filter the Internet, though much more selectively (Nazi paraphernalia in Germany and France, for example, and child pornography in northern Europe; in the U.S., we certainly filter at the PC level in schools and libraries, though not on a state-mandated basis at the level of publicly accessible ISPs). It’s not that there are no Internet restrictions in the states of Europe and North America, nor that these places necessarily have it completely right (we don’t). What differs is the process for removing harmful material, the technical approach that keeps the content from viewers (or stops publishers from posting it), and the scale of the information blockages. We’ll learn a lot from how things turn out here in Turkey in the months to come.

An open Internet brings with it many wonderful things: access to knowledge, more voices telling more stories from more places, new avenues for free expression and association, global connections between cultures, and massive gains in productivity and innovation. The web 2.0 era, with more people using participatory media, brings with it yet more of these positive things.

Widespread use of the Internet also gives rise to challenging content along with its democratic and economic gains. As Turkey looks ahead toward the day when it joins the European Union once and for all, one of the many policy questions on the national agenda is whether and how to filter the Internet. There is sensitivity around content of various sorts: criticism of the republic’s founder, Mustafa Kemal Atatürk; gambling; and obscenity top the list. The parliament passed a law earlier in 2007 that gives a government authority a broad mandate to filter content of this sort from the Internet. To date, I’m told, about 10 orders have been issued by this authority, and an additional 40 orders by a court to filter content. The process is only a few months old; much remains to be learned about how this law, known as “5651,” will be implemented over time.

The most high-profile filtering has been of the popular video-sharing site, YouTube. Twice in the past few months, the authority has sent word to the 73 or so Turkish ISPs to block access, at the domain level, to all of YouTube. These blocks have been issued in response to complaints about videos posted to YouTube that were held to be derogatory toward the founder, Ataturk. The blocks have lasted about 72 hours.

After learning from the court which videos were at issue, YouTube has apparently removed them, and service has subsequently been restored. YouTube has been perfectly accessible on the connections I’ve had in Istanbul and Ankara in the past few days.

During this trip, I’ve been hosted by the Internet Association here, known as TBD, and others who have helped to set up meetings with many people — in industry, in government, in journalism, and in academia — who are puzzling over this issue. The challenges of this new law, 5651, are plain:

– The law gives very broad authority to filter the net. It places this power in a single authority, as well as in the courts. It is unclear how broadly the law will be implemented. If the authority is well-meaning, as it seems to me to be, the effect of the law may be minimal; if that perspective changes, the effect of the law could be dramatic.

– The blocks are (so far) done at the domain level, it would appear. In other words, instead of blocking a single URL, the blocks affect entire domains. Many other states take this approach, probably for cost or efficiency reasons. Many states in the Middle East/North Africa have blocked entire blogging services at different times, for instance.

– The system in place requires Internet services to register themselves with the Turkish authorities in order to get word of the offending URLs. This requirement is not something that many multinational companies will be able or willing to meet, for cost and jurisdictional reasons. Instead of a notice-and-takedown regime for these out-of-state players, there’s a system of shutting down the service and restoring it only after the offending content has been filtered out.
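The difference between blocking a single URL and blocking an entire domain, as described above, can be sketched in a few lines. This is only an illustration: the site name and blocklist entries below are hypothetical, and real filtering of this kind is carried out in ISP DNS resolvers or proxies, not in application code.

```python
# Illustrative sketch (hypothetical names): URL-level vs domain-level blocking.
from urllib.parse import urlparse

# A URL-level blocklist targets only specific offending pages.
BLOCKED_URLS = {"http://example-video-site.com/watch?v=offending-clip"}

# A domain-level blocklist, like the Turkish orders described above,
# takes down every page on the site at once.
BLOCKED_DOMAINS = {"example-video-site.com"}

def blocked_by_url_filter(url: str) -> bool:
    """Block only the listed URL; the rest of the site stays reachable."""
    return url in BLOCKED_URLS

def blocked_by_domain_filter(url: str) -> bool:
    """Block any URL whose hostname is on the domain blocklist."""
    return urlparse(url).hostname in BLOCKED_DOMAINS

# An unrelated video on the same site:
other = "http://example-video-site.com/watch?v=harmless-clip"
print(blocked_by_url_filter(other))     # False: only the one clip is blocked
print(blocked_by_domain_filter(other))  # True: the whole site disappears
```

The sketch shows why domain-level blocking is cheaper for the censor (one hostname entry instead of a growing list of URLs) but far blunter: every user of the service loses access because of a single piece of content.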

* * *

The Internet – especially in its current phase of development – is making possible innovation and creativity in terms of content. Today, simple technology platforms like weblogs, social networks, and video-sharing sites are enabling individuals to have greater voice in their societies. These technologies are also giving rise to the creation of new art forms, like the remix and the mash-up of code and content. Many of those who are making use of this ability to create and share new digital works are young people – those born in a digital era, with access to high-speed networks and blessed with terrific computing skills, called “digital natives” – but many digital creators are grown-ups, even professionals.

Turkey is not alone in how it is facing this challenge. The threat of “too much” free expression online is leading to more Internet censorship in more places around the world than ever before. When we started studying Internet censorship five years ago, along with our colleagues in the OpenNet Initiative (from the Universities of Toronto, Cambridge, and Oxford, as well as Harvard Law School), there were a few places – like China and Saudi Arabia – where the Internet was censored.

Since then, there’s been a sharp rise in online censorship, and its close cousin, surveillance. About three dozen countries in the world restrict access to Internet content in one way or another. Most famously, in China, the government runs the largest censorship regime in the world, blocking access to political, social, and cultural critique from its citizens. So do Iran, Uzbekistan, and others in their regions. The states that filter the Internet most extensively are primarily in East Asia, the Middle East and North Africa, and Central Asia.

* * *

Turkey’s choice couldn’t be clearer. Does one choose to embrace the innovation and creativity that the Internet brings with it, albeit along with some risk of people doing and saying harmful things? Or does one start down the road of banning entire zones of the Internet, whether online Web sites or new technologies like peer-to-peer services or live videoblogging?

In Turkey, the Internet has to date been largely free from government controls. Free expression and innovation have found homes online, in ways that benefit culture and the economy.

But there are signs that this freedom may be nearing its end in Turkey, through 5651 and how it is implemented. These changes come just as the benefits to be reaped are growing. When the state chooses to ban entire services for the many because of the acts of the few, the threat to innovation and creativity is high. Those states that have erected extensive censorship and surveillance regimes online have found them hard to implement with any degree of accuracy and fairness. And, most costly of all, the chilling effect on citizens who rely on the digital world for their livelihood and key aspects of their culture – in fact, the ability to remake their own cultural objects, the notion of semiotic democracy – is a high price to pay for control.

The impact of the choice Turkey makes in the months to come will be felt over decades and generations. Turkey’s choice also has international ramifications. If Turkey decides to clamp down on Internet activity, it will be lending aid to those who seek to see the Internet chopped into a series of local networks – the China Wide Web, the Iran Wide Web, and so forth – rather than continuing to build a truly World Wide Web.