DDoS Report, in the Wake of Wikileaks, Cablegate, and Anonymous

The Wikileaks/Cablegate story has long-term implications for global society on many levels.  (See JZ’s excellent FAQ on Wikileaks, co-developed with Molly Sauter.)  One is our shared understanding of the Distributed Denial of Service (DDoS) attack phenomenon.  The incidence of DDoS has been growing in recent years, and it links up with important threads that have emerged from our OpenNet Initiative work studying the ways in which states and others exert control over the open Internet.  (Consider, for instance, the ONI reports on Belarusian and Kyrgyz election monitoring, which broke new ground on DDoS a few years ago, led primarily by our ONI partners Rafal Rohozinski, Ron Deibert, and their respective teams.)

We are issuing a new report on DDoS today, which we hope will help to put some of these issues into perspective.  For an excellent blog entry on it, please see my co-author Ethan Zuckerman’s post.

After initial publication of State Department cables, Wikileaks reported that their web site became subject to a series of DDoS attacks that threatened to bring it down.  These attacks are simple in concept: multiple computers from around the world request access to the target website in sufficient numbers to make the site “crash.”  It turns out to be hard for most systems administrators to defend against such an attack.  And it turns out to be relatively easy to launch such an attack.  Computers that have been compromised, through the spread of computer viruses, are available for “rent” in order to launch such attacks. In a study that we are releasing this morning, we found instances where the “rent” of these machines is suggested by the round numbers of attacking machines and the precise durations of the attacks.
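
To make that last point concrete, here is a minimal sketch of the kind of measurement involved: counting the distinct source IPs and the apparent duration of a request flood in an ordinary web server access log.  The log path, the field positions, and the “round number” heuristic noted in the comments are my own illustrative assumptions, not code or data from the report.

```python
# A minimal sketch (not from the report): estimate how many distinct machines
# hit a site during a flood, and for how long, from an Apache-style
# "combined" access log.  The file path and field positions are assumptions.
from collections import Counter
from datetime import datetime

LOG_FILE = "access.log"              # hypothetical log path
TIME_FORMAT = "%d/%b/%Y:%H:%M:%S"    # e.g. 10/Dec/2010:13:55:36

sources = Counter()
timestamps = []

with open(LOG_FILE) as log:
    for line in log:
        parts = line.split()
        if len(parts) < 4:
            continue                            # skip malformed lines
        ip = parts[0]                           # client IP is the first field
        raw_time = parts[3].lstrip("[")         # "[10/Dec/2010:13:55:36" -> timestamp
        try:
            timestamps.append(datetime.strptime(raw_time, TIME_FORMAT))
        except ValueError:
            continue
        sources[ip] += 1

if timestamps:
    print(f"Distinct source IPs: {len(sources)}")
    print(f"Apparent duration:   {max(timestamps) - min(timestamps)}")
    # A suspiciously round count of attacking machines and a precisely bounded
    # duration are the sorts of signals that suggest a rented botnet rather
    # than organic traffic.
```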

In the face of these attacks, Wikileaks decided to move its web site to safer ground.  Large-scale web hosts, particularly “cloud computing” service providers, can resist DDoS attacks.  Wikileaks did what one might reasonably suggest to, say, a small human rights organization in an authoritarian regime that fears attack from the state or others: it moved to the Amazon.com cloud.  Shortly thereafter, apparently in the face of pressure, Amazon decided to stop serving Wikileaks’ web site and cut them off.  Wikileaks then found a “James Bond-style” bunker in Sweden that agreed to host the site — presumably despite pressure to take it down.

The DDoS story took another major turn when Anonymous launched a series of attacks on sites perceived to have been unhelpful to Wikileaks in the post-Cablegate aftermath.  These DDoS attacks raised the specter of cyberwarfare, much discussed in policy circles but all of a sudden on the front page of major newspapers.  Depending on political viewpoint and other factors, the people I’ve talked to seem to see these retaliatory DDoS attacks as different in their implications from the initial DDoS attacks on Wikileaks itself.

There have been relatively few studies of DDoS as an empirical or a policy matter.  We are releasing a report today (which I’ve co-authored with Hal Roberts, Ethan Zuckerman, Jillian York, and Ryan McGrady) that describes DDoS and makes a series of recommendations in light of what we’ve found.  It’s funded by a generous grant from OSI.  Regardless of whether you consider DDoS to be criminal behavior, the next wave in cyberwarfare, an acceptable form of protest, or all of the above, we hope you’ll read the report and give us feedback.

Turkey at the Edge

The people of Turkey are facing a stark choice: will they continue to have a mostly free and open Internet, or will they join the two dozen states around the world that filter the content that their citizens see?

Over the past two days, I’ve been here in Turkey to talk about our new book (written by the whole OpenNet Initiative team), called Access Denied. The book describes the growth of Internet filtering around the world, from only about two states in 2002 to more than two dozen in 2007. I’ve been welcomed by many serious, smart people in Ankara and Istanbul who are grappling with this issue, and to whom I’ve handed over copies of the new book — the first copies I’ve had in my hands.

This question runs deep for Turkey, from what I’m hearing. As it has been described to me, the state is on a knife’s edge, between one world and another, just as Istanbul sits, on the Bosporus, at the juncture between “East and West.”

Our maps of state-mandated Internet filtering on the ONI site describe Turkey’s situation graphically. The majority of the states that filter the net extensively lie to its east and south; its neighbors in Europe also filter the Internet, though much more selectively (Nazi paraphernalia in Germany and France, for example, and child pornography in northern Europe; in the U.S., we certainly filter at the PC level in schools and libraries, though not on a state-mandated basis at the level of publicly accessible ISPs). It’s not that there are no Internet restrictions in Europe and North America, nor that these places necessarily have it completely right (we don’t). It’s the process for removing harmful material, the technical approach that keeps the content from viewers (or stops publishers from posting it), and the scale of the information blockages that differ. We’ll learn a lot from how things turn out here in Turkey in the months to come.

An open Internet brings with it many wonderful things: access to knowledge, more voices telling more stories from more places, new avenues for free expression and association, global connections between cultures, and massive gains in productivity and innovation. The web 2.0 era, with more people using participatory media, brings with it yet more of these positive things.

Widespread use of the Internet also gives rise to challenging content alongside its democratic and economic gains. As Turkey looks ahead toward the day when it joins the European Union once and for all, one of the many policy questions on the national agenda is whether and how to filter the Internet. There is sensitivity around content of various sorts: criticism of the republic’s founder, Mustafa Kemal Atatürk; gambling; and obscenity top the list. The parliament passed a law earlier in 2007 that gives a government authority a broad mandate to filter content of this sort from the Internet. To date, I’m told, about 10 filtering orders have been issued by this authority, and an additional 40 by a court. The process is only a few months old; much remains to be learned about how this law, known as “5651,” will be implemented over time.

The most high-profile filtering has been of the popular video-sharing site YouTube. Twice in the past few months, the authority has sent word to the 73 or so Turkish ISPs to block access, at the domain level, to all of YouTube. These blocks were issued in response to complaints about videos posted to YouTube that were held to be derogatory toward Atatürk. The blocks have lasted about 72 hours.

After learning of the offending videos from the court, YouTube has apparently removed them, and the service has subsequently been restored. YouTube has been perfectly accessible on the connections I’ve had in Istanbul and Ankara over the past few days.

During this trip, I’ve been hosted by the Internet Association here, known as TBD, and others who have helped to set up meetings with many people — in industry, in government, in journalism, and in academia — who are puzzling over this issue. The challenges of this new law, 5651, are plain:

– The law gives very broad authority to filter the net. It places this power in a single authority, as well as in the courts. It is unclear how broadly the law will be implemented. If the authority is well-meaning, as it seems to me to be, the effect of the law may be minimal; if that perspective changes, the effect of the law could be dramatic.

– The blocks are (so far) done at the domain level, it would appear. In other words, instead of blocking a single URL, the blocks affect entire domains (see the sketch after this list). Many other states take this approach, probably for cost or efficiency reasons. Many states in the Middle East/North Africa have blocked entire blogging services at different times, for instance.

– The system in place requires Internet services to register themselves with the Turkish authorities in order to get word of offending URLs. This is not something that many multinational companies are going to be able or willing to do, for cost and jurisdictional reasons. Instead of a notice-and-takedown regime for these out-of-state players, there is a system of shutting down the service and restoring it only after the offending content has been filtered out.
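
To illustrate the difference that domain-level blocking makes in practice, here is a minimal sketch contrasting URL-level and domain-level blocklists. The blocklist entries and helper functions are hypothetical, for illustration only; this is not how the Turkish system is actually implemented.

```python
# Minimal illustration (hypothetical, not Turkey's actual system): blocking a
# single URL versus blocking an entire domain.
from urllib.parse import urlparse

BLOCKED_URLS = {"http://www.youtube.com/watch?v=offending-video"}   # hypothetical entry
BLOCKED_DOMAINS = {"www.youtube.com"}                                # hypothetical entry

def blocked_by_url(url: str) -> bool:
    # Precise: only the specific offending page is unreachable.
    return url in BLOCKED_URLS

def blocked_by_domain(url: str) -> bool:
    # Blunt: every page on the domain becomes unreachable.
    return urlparse(url).hostname in BLOCKED_DOMAINS

unrelated_video = "http://www.youtube.com/watch?v=unrelated-video"
print(blocked_by_url(unrelated_video))     # False: not on the URL blocklist
print(blocked_by_domain(unrelated_video))  # True:  the whole domain is blocked
```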

* * *

The Internet – especially in its current phase of development – is making possible innovation and creativity in content. Today, simple technology platforms like weblogs, social networks, and video-sharing sites are enabling individuals to have a greater voice in their societies. These technologies are also giving rise to new art forms, like the remix and the mash-up of code and content. Many of those who are making use of this ability to create and share new digital works are young people – those born in a digital era, with access to high-speed networks and blessed with terrific computing skills, called “digital natives” – but many digital creators are grown-ups, even professionals.

Turkey is not alone in facing this challenge. The threat of “too much” free expression online is leading to more Internet censorship in more places around the world than ever before. When we started studying Internet censorship five years ago, along with our colleagues in the OpenNet Initiative (from the Universities of Toronto, Cambridge, and Oxford, as well as Harvard Law School), there were a few places – like China and Saudi Arabia – where the Internet was censored.

Since then, there’s been a sharp rise in online censorship, and its close cousin, surveillance. About three dozen countries in the world restrict access to Internet content in one way or another. Most famously, in China, the government runs the largest censorship regime in the world, blocking its citizens’ access to political, social, and cultural critique. So do Iran, Uzbekistan, and others in their regions. The states that filter the Internet most extensively are primarily in East Asia, the Middle East and North Africa, and Central Asia.

* * *

Turkey’s choice couldn’t be clearer. Does one choose to embrace the innovation and creativity that the Internet brings with it, albeit with some risk of people doing and saying harmful things? Or does one start down the road of banning entire zones of the Internet, whether whole Web sites or new technologies like peer-to-peer services and live videoblogging?

In Turkey, the Internet has to date been largely free from government controls. Free expression and innovation have found homes online, in ways that benefit both culture and the economy.

But there are signs that this freedom may be nearing its end in Turkey, through 5651 and how it is implemented. These changes come just as the benefits to be reaped are growing. When a state chooses to ban entire services for the many because of the acts of the few, the threat to innovation and creativity is high. Those states that have erected extensive censorship and surveillance regimes online have found them hard to implement with any degree of accuracy and fairness. And, more costly still, the chilling effect on citizens who rely on the digital world for their livelihood and for key aspects of their culture – in fact, for the ability to remake their own cultural objects, the notion of semiotic democracy – is a high price to pay for control.

The impact of the choice Turkey makes in the months to come will be felt over decades and generations. Turkey’s choice also has international ramifications. If Turkey decides to clamp down on Internet activity, it will be lending aid to those who seek to see the Internet chopped into a series of local networks – the China Wide Web, the Iran Wide Web, and so forth – rather than continuing to build a truly World Wide Web.

OpenNet Initiative on What Really Happened in Burma

Over the last few weeks, we’ve all witnessed the extraordinary bravery of protesters in Burma (or Myanmar, depending on whom you ask) and the great lengths to which the military junta has been willing to go to keep the world from knowing much about what was going on there. Many reported the story of how the junta “shut off” the Internet before carrying out some of the worst acts in suppressing the demonstrations. The ONI is today releasing a careful technical review that describes what in fact the military junta did, set in the context of the demonstrations and the state’s history of Internet filtering. Stephanie Wang led the writing, and Shishir Nagaraja conducted the technical analysis. It’s the first time, with the exception of Nepal in 2005, that a state has sought to shut off access to the Internet altogether. The story of what they did, how, and when makes fascinating, and upsetting, reading for anyone with an interest in the relationship between the Internet & democracy or the burgeoning citizens’ media movement.

Yahoo!, the Shi Tao Case, and the Benefit of the Doubt

Rep. Tom Lantos has called on Yahoo! executives to return to Congress to talk about what they knew, and when, in the Shi Tao case. Rep. Lantos alleges that Yahoo!’s general counsel misled a 2006 hearing (at which I and others submitted testimony, too) by indicating that the company knew less than it actually did about why the Chinese state police were asking for information about Shi, a dissident and journalist. Yahoo! did turn over the information; the Chinese prosecuted Shi; he remains in jail; and the issue continues to point to the single hardest thing about our US tech companies doing business in places that practice online censorship and surveillance. The case has led to Congressional hearings, proposed legislation, shareholder motions, and lawsuits against Yahoo!.

(For much more on the general topic of Internet filtering and surveillance, see the web site of the OpenNet Initiative, a consortium of four universities of which we are a part: Cambridge, Harvard Law School, Oxford, and Toronto.)

The hard problem at the core of this issue is that police come to technology companies every day to ask for information about their users. It is a fair point for technology companies to make that they often cannot know much about the reason for the policeman’s inquiry. It could be completely legitimate: an effort to prevent a crime from happening or to bring a criminal to justice. In the United States, these requests come in the context of the rule of law, including a formal reliance on due process. And every once in a while, a technology company pushes back on requests for data of this sort, publicly or privately. The process is imperfect, if you consider it from a privacy standpoint, but it works — a balance is found between the civil liberties of the individual and the legitimate needs of law enforcement to keep us safe and to uphold the rules to which we all agree as citizens.

This hard problem is much harder in the context of, say, China. It’s not the only example, but it’s the example here with Shi Tao. In Yahoo!’s testimony in 2006, Michael Callahan, the executive vice president and general counsel, said that Yahoo! did not know the reasons for the Chinese state police’s request for information about Shi.

You can read the testimony for yourself here. The relevant statement by Mr. Callahan is:

“The Shi Tao case raises profound and troubling questions about basic human rights. Nevertheless, it is important to lay out the facts. When Yahoo! China in Beijing was required to provide information about the user, who we later learned was Shi Tao, we had no information about the nature of the investigation. Indeed, we were unaware of the particular facts surrounding the case until the news story emerged.” (Emphasis mine.)

The key phrase: “No information about the nature of the investigation.” Not that the information was inconclusive, or vague, or hard to translate, or possibly of concern. “No information.”

Now, we are told, there’s a big disagreement about whether that testimony was accurate.

Rep. Lantos, in a statement yesterday, claims that Callahan misled the committee. Lantos writes: “Our committee has established that Yahoo! provided false information to Congress in early 2006. … We want to clarify how that happened, and to hold the company to account for its actions both before and after its testimony proved untrue. And we want to examine what steps the company has taken since then to protect the privacy rights of its users in China.” Rep. Chris Smith (R-NJ) says it more harshly: “Last year, in sworn testimony before my subcommittee, a Yahoo! official testified that the company knew nothing ‘about the nature of the investigation’ into Shi Tao, a pro-democracy activist who is now serving ten years on trumped up charges. We have now learned there is much more to the story than Yahoo let on, and a Chinese government document that Yahoo had in their possession at the time of the hearing left little doubt of the government’s intentions. … U.S. companies must hold the line and not work hand in glove with the secret police.”

Yahoo! responded with its own statement, pasted here in full:

“Yahoo! Statement on Foreign Relations Committee Hearing Announcement
October 16, 2007

“The House Foreign Affairs Committee’s decision to single out Yahoo! and accuse the company of making misstatements is grossly unfair and mischaracterizes the nature and intent of our past testimony.

“As the Committee well knows from repeated meetings and conversations, Yahoo! representatives were truthful with the Committee. This issue revolves around a genuine disagreement with the Committee over the information provided.”

“We had hoped that we could work with the Committee to have an open and constructive dialogue about the complicated nature of doing business in China.”

“All businesses interacting with China face difficult questions of how to best balance the democratizing forces of open commerce and free expression with the very real challenges of operating in countries that restrict access to information. This challenge is particularly acute for technology and communication companies such as Yahoo!.”

“As we have made clear to Chairman Lantos and the Committee on Foreign Affairs, Yahoo! has treated these issues with the gravity and attention they demand. We are engaged in a multi-stakeholder process with other companies and the human rights community to develop a global code of conduct for operating in countries around the world, including China. We are also actively engaged with the Department of State to assist and encourage the government’s efforts to deal with these issues on a diplomatic level.”

“We believe the answers to these broad and complex questions require a constructive dialogue with all stakeholders engaged in a collaborative manner. It is our hope that the Committee will approach the hearing in that same constructive spirit.”

I can understand why Yahoo! is claiming that it is being treated unfairly. Yahoo! has been the company most tarred, in some ways, for a problem that is industry-wide and that should be resolved on an industry-wide basis (or a broader one, through law or international law). Yahoo! has been a very constructive player in the ongoing effort to come up with a code of conduct for companies in this position (along with Google, Microsoft, and others). And Yahoo! has been working hard to establish internal practices to head off similar situations and to voice its concern about Chinese policies in this arena. Its efforts on this front since the Shi Tao case have been laudable.

But if in fact the company knew more — even a little bit more — about why the Chinese police came knocking for Shi Tao than what Mr. Callahan led all of us to believe (“no information”), then it is a big problem. Unless there are facts that I’m missing, for Congress to call Yahoo! back to Capitol Hill to correct the record, in public, is completely appropriate, if “no information” is not what we were meant to understand. It may well be that what the company knew was in fact so vague, as many legal terms are in China, as to be inconclusive. It may well be that someone in the company knew, but the right people didn’t know — and that an internal process was flawed in this case. But those are very different discussions, ones we should have, from the straightforward claim that the company had no context at all for the request.

Because I respect many of the people working hard on this issue within Yahoo!, and credit that Jerry Yang is very well-meaning on this topic, I’ve been willing to give Yahoo! a big benefit of the doubt. After all, a key part of our own legal system — as part of a rule of law that we’ve come to trust here — calls on us to do so. The big problem here for me is if we’ve in fact been misled, all of us, to believe that it was one problem when it really was quite another. If “no information” proves to be inaccurate, I’m not sure how much longer I can keep extending that benefit of the doubt in this case.

(The Merc’s Frank Davies wrote up the story here, among a few hundred others in the last 24 hours. Rebecca MacKinnon, of course, had the story months before (also here) and already said much of what I’ve said here.)

WaPo on the Myanmar Internet Crackdown

Roby Alampay nails some of the key issues related to Internet governance and international law in an editorial today in the Washington Post. It’s well worth a read, especially if you’ve been following the Myanmar crackdown. Alampay also makes a key link: Internet access should be understood as a human rights issue, and one that those thinking about Internet governance ought to take up.

In relevant part: “States have come far in such discussions and in reaching some levels of consensus. International standards have greater impetus, evidently, when they seek to cap that which they perceive as threatening to the civilized world: child pornography, organized crime, terrorism, and SPAM. This much is understandable.

“What the international community has barely begun to discuss, however, is the other side of the dilemma: What should be the international standard on ensuring Internet accessibility and openness?

“The more compelling Internet story last week took place as far away from Europe as one can get. It was from Burma — via defiant blogs, emails, and phone-cam videos posted online — that the world witnessed the other argument: that when it comes to the Internet (and all forms of media, for that matter) ‘standards’ is a legitimate topic not only with respect to limiting the medium’s (and its users’) potential harm, but more importantly in setting and keeping the medium (and its users) free.”

Internet Filtering Session at the SDP 2007

This morning — at the Summer Doctoral Program in Cambridge, MA — we’re taking up the topic of Internet filtering and the work of the ONI (and what we’ve written about in our forthcoming book from MIT Press, called Access Denied). Some of the questions that students raised about the topic after reading our work on it:

– One student says that her dad read a copy of Dr. Zhivago, censored at the time in his country, where each page was accessible to him only as a photograph. One of her points, I think, is that history repeats itself, and that we should understand how this story is a repeat and where it is new and different from previous stories of censorship. One student suggests, as a follow-up: let’s test the hypothesis that the Internet is revolutionary. A second of her points, I take it, is that people will find clever ways around censorship.

– How do you measure filtering of the Internet and then analyze what you’ve learned in a way that informs decision-making?

– How do you measure the impact of filtering on access to knowledge?

– Do we need ISPs that act like common carriers and never filter?

– What is the role of large countries as neighbors to smaller countries, given the possibility of in-stream filtering?

– What is the role of the commercial filtering providers?

– How can we determine whether the practice of Internet filtering violates a universal right to access information?

– How can we study how copyright and trademark owners carry out filtering?

– Is there legitimate filtering? (A student posits: there is legitimate filtering, including via search engines. This concept invokes what Urs Gasser provocatively blogged about at the ONI conference: “best practices in Internet filtering.”)

– How do we study the circumvention piece and include it in our story? What about developing the tools of circumvention?

– How do you overlay cultural differences on this survey?

– To what extent does control of communications facilitate control of other institutions, tools, or otherwise? To what extent is control of communications a priority for a given authority?

– When does one state have the right and/or ability to influence what another state does in this domain?

See Daithi and Ismael for more, and better, notes than what I’ve posted here.

Berkman Books

The faculty and fellows of the Berkman Center will publish four books this year. Two of them are out already: David Weinberger’s Everything is Miscellaneous and John Clippinger’s A Crowd of One. In celebration of this high-water mark for the team, we’ve put together a new page on the Berkman web site called Berkman Books, which features most of the relevant books written by Berkman faculty and fellows since our founding nearly 10 years ago. We’ll keep it updated as new ones come online, such as the ONI’s Access Denied (on Internet filtering) and Prof. Jonathan Zittrain’s The Future of the Internet — and How to Stop It, both due out later this year.