Quite a Saturday morning at StopBadware

This morning, it seems that many (all?) Google search results led to the warning page normally associated with sites that host malware. We at StopBadware are partners with Google, among others, in the fight against malicious code. Our role, as researchers, is to help set the criteria for what constitutes a badware site; we keep a public, online clearinghouse of sites that may harm one’s computer; and we run a review process to get sites off that list once they are clean. A series of blog posts about this strange, short occurrence this morning has included misinformation about what happened on the Google side.

What happened? Google’s VP Marissa Mayer wrote: “Very simply, human error. Google flags search results with the message ‘This site may harm your computer’ if the site is known to install malicious software in the background or otherwise surreptitiously. We do this to protect our users against visiting sites that could harm their computers. We maintain a list of such sites through both manual and automated methods. We work with a non-profit called StopBadware.org to come up with criteria for maintaining this list, and to provide simple processes for webmasters to remove their site from the list. We periodically update that list and released one such update to the site this morning. Unfortunately (and here’s the human error), the URL of ‘/’ was mistakenly checked in as a value to the file and ‘/’ expands to all URLs. Fortunately, our on-call site reliability team found the problem quickly and reverted the file. Since we push these updates in a staggered and rolling fashion, the errors began appearing between 6:27 a.m. and 6:40 a.m. and began disappearing between 7:10 and 7:25 a.m., so the duration of the problem for any particular user was approximately 40 minutes.”
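The nature of the error is easy to see in miniature. Here is a small illustrative sketch (my own, not Google’s actual implementation) of a blocklist matched by URL-path prefix; an entry of ‘/’ is a prefix of every path, so a single mistaken ‘/’ entry flags every result:

```python
# Illustrative sketch only -- not Google's actual code. A blocklist
# matched by URL-path prefix: an entry of "/" is a prefix of every
# path, so one mistaken "/" entry flags every URL.
from urllib.parse import urlparse

def is_flagged(url, blocklist):
    """Return True if the URL's path starts with any blocklist entry."""
    path = urlparse(url).path or "/"
    return any(path.startswith(entry) for entry in blocklist)

blocklist = {"/known-bad-page"}
print(is_flagged("http://example.com/index.html", blocklist))  # False

blocklist.add("/")  # the mistakenly checked-in entry
print(is_flagged("http://example.com/index.html", blocklist))  # True
```

Reverting the file, as Google’s site reliability team did, amounts to removing that one entry.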

Nothing like this has happened in the StopBadware project’s three years or so of existence. A few minutes after this flood of warnings appeared, the StopBadware server crashed under the load of people looking for more information about what had taken place. Everything seems back to normal now.


Here is the official Google statement about what happened, from which the quote above is pulled.  (Changes from the original post appear in blue in the Google post.)

Apple Gets it Right After StopBadware et al. Send Warning

StopBadware and the rest of the Net community working to keep the online environment clean of bad code scored a good win for the public interest this week. The StopBadware team and others were all over a software update from Apple that operated as badware, offering new software installations disguised as product updates. StopBadware blogged about our review process, saying we were looking into it; prepared a report declaring the update badware; sent the draft report to Apple for review (as we do for all targets before public release); and, lo and behold, Apple fixed the problem and issued an updated version. Well done to Max Weinstein and the whole SBW team and others out there keeping companies honest. If only it ordinarily worked this way…

Sears and Badware

Tonight, we at StopBadware are releasing a report finding that Sears Holdings Corporation’s My SHC Community application is badware. (We also blogged our pending review of the application a few days ago.) Our concerns are these:

1) The software does not fully, accurately, clearly, and conspicuously disclose the principal and significant features and functionality of the application prior to installation.

Prior to installation, the My SHC Community application’s only mention of the software’s functionality outside the privacy policy and user license agreement (ULA) is a sentence in the fourth paragraph of a six-paragraph introduction to the community. It states that “this research software will confidentially track your online browsing.” It does not make clear outside the privacy policy and ULA that this includes sending extensive personal data to Sears (see below) or that the software monitors all internet traffic, not just browsing.

2) Information is collected and transmitted without disclosure in the privacy policy.

There are two privacy policies available to users of My SHC Community and the accompanying software application. All of the behaviors noted in this report are disclosed in one version, which is shown to and accepted by users during installation. However, a user who views the privacy policy on the website, or via the link included in the registration confirmation e-mail, sees a different version, one that says nothing about the software or its behavior, unless the user is currently logged into the My SHC Community site. This means, for example, that a user checking the privacy policy from a different PC may not see the privacy policy that s/he originally agreed to.

3) The software does not clearly identify itself.

While running, the My SHC Community application gives no indication to the user that it is active. It is also difficult to tell that the application is installed, as there are no Start menu or desktop shortcuts or other icons to indicate its presence.

4) The software transmits data to unknown parties.

According to SHC and comScore, the parent company of the software developer, VoiceFive, the My SHC Community application collects and transmits to Sears Holdings’ servers (hosted by comScore) extensive data, including websites visited, e-mails sent and received (headers only, not the text of the messages), items purchased, and other records of one’s internet use. This is not made clear to the user separately from the privacy policy or ULA, as required by StopBadware guidelines. Sears Holdings Corporation commits in its privacy policy “to make commercially viable efforts to automatically filter confidential personally identifiable information,” but is unable to guarantee that none of this information will be sent or stored.

We’ve spent time on the phone with the team at Sears Holdings Corporation (SHC) about their app. SHC has informed StopBadware that it is significantly improving the My SHC Community application’s disclosure and privacy policy language and adding a Start menu icon in an effort to comply with our guidelines and address privacy concerns. SHC expects these changes to be implemented within 48 hours; we at StopBadware have not yet evaluated the planned changes. SHC has also informed us that it has suspended invitations to new users to install the application until the changes are implemented.

Our news release on this report is here.

Cookie Crumbles Contest: Make a Video, Help Consumers, Win Cash

Have fun and help raise awareness about how the Internet really works — and possibly earn a trip to DC and $5,000 if you’re really good at it!

The Berkman Center, StopBadware, Google, Medium, and EDVentures present Cookie Crumbles. It’s a fun contest for people who like to make short, humorous (yet meaningful) videos and post them to YouTube (there’s a Cookie Crumbles group set up for contest purposes). We are looking for short YouTube videos that address these questions as accurately and as creatively as possible:

Most people know cookies as a treat best enjoyed with milk. When it comes to web cookies, however, many users want to know more:

* What is a cookie?
* How do cookies work?
* How can cookies be used?
* How is the data from cookies used with data collected in other ways, including from third parties?
* How can cookies be misused?
* What options does a user have to manage cookies and their use?
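For contestants tackling the first two questions, the basic mechanics are simple: a cookie is a small named value that a server asks the browser to store (via a Set-Cookie response header) and send back on later requests (via a Cookie request header). A minimal sketch using Python’s standard http.cookies module, with invented names and values:

```python
# A toy illustration of the cookie round trip; the names and values
# here are invented for the example.
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie response header.
jar = SimpleCookie()
jar["session_id"] = "abc123"
jar["session_id"]["max-age"] = 3600  # ask the browser to keep it for an hour
print(jar.output())  # Set-Cookie: session_id=abc123; Max-Age=3600

# Browser side: parse the stored cookie, then echo its name=value
# pairs back to the server on the next request.
received = SimpleCookie("session_id=abc123")
cookie_header = "; ".join(f"{k}={v.value}" for k, v in received.items())
print("Cookie:", cookie_header)
```

The later questions, about third-party data sharing, misuse, and user controls, are matters of policy and browser settings rather than mechanics, which is exactly why the contest asks for creative explanations.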

The top few submissions, as determined by a combination of YouTube viewers and Berkman Center staff, will earn their creators a trip to Washington, D.C., where their videos will be aired and discussed at the United States Federal Trade Commission’s November 1-2 Town Hall workshop entitled “Ehavioral Advertising: Tracking, Targeting, and Technology.” Prizes, including one grand prize of $5,000, will be awarded by a panel of judges and discussants moderated by the Berkman Center and including Jeff Chester, Esther Dyson (who blogged the contest here and here), and others. Submission guidelines and more can be found here.

Steve Gibson at the Anti-Spyware Coalition

We have the great honor of hosting the ASC‘s third big public meeting here at the Harvard Law School. We’re grateful to Ari Schwartz and Ross Schulman for bringing the meeting to our campus. We’re proudly a member of ASC through our StopBadware project, which has grown into one of the biggest and most interesting projects at the Berkman Center for Internet & Society.

Steve Gibson, podcaster of Security Now!, InfoWorld columnist, software developer, and many other important things, is giving the keynote right now. Steve is recounting his personal experience of discovering spyware creeping onto the network and onto his PC, which led to his coining the term “spyware.” He says his PC is his temple. He recalls having been “immediately pissed off” when PKZip for Windows brought “the first bit of nastiness” to his PC by trying to phone home. Steve says the current story is the “Tyranny of the Default”: default settings that are still not safe. His stories evoke much the same picture that Jonathan Zittrain paints in his article, The Generative Internet, and his forthcoming book, The Future of the Internet — and How to Stop It.

Three Conversations on Intellectual Property: Fordham, University of St. Gallen, UOC (Catalunya)

Three recent conversations I’ve been part of offered a contrast in styles and views on intellectual property rights across the Atlantic: first, the Fordham International IP Conference, which Prof. Hugh Hansen puts on each year (in New York, NY, USA); second, the terrific classes in Law and Economics of Intellectual Property that Prof. Urs Gasser teaches at our partner institution, the University of St. Gallen (in St. Gallen, Switzerland); and finally, today, the Third Congress on Internet, Law & Politics held by the Open University of Catalonia (in Barcelona, Spain), hosted by Raquel Xalabarder and her colleagues.

* * *

Fordham (1)

At Fordham, Jane Ginsburg of Columbia Law School moderated one of the panels. We were asked to talk about the future of copyright. One of the futures that she posited might come into being — and for which Fred von Lohmann and I were supposed to argue — was an increasingly consumer-oriented copyright regime, perhaps even one that is maximally consumer-focused.

– For starters, I am not sure that “consumer” maximization is the way to think about it. The point is that the group once called consumers are now not just consumers but also creators. What we want is the maximization of the rights of all creators, including re-creators, in addition to consumers (those who benefit, I suppose, from experiencing what is in the “public domain”). This case for a new, digitally-inspired balance has been made best by Prof. Lessig in Free Culture and by many others.

– What are the problems with what one might consider a maximalized consumer focus? The interesting and hardest part has to do with moral rights. Prof. Ginsburg is right: this is a very hard problem. I think that’s where the rub comes.

– The panel agreed on one thing: a fight over compulsory licensing is certainly coming. Most argued that the digital world, particularly a Web 2.0 digital world, will lead us over time toward some form of collective, non-exclusive licensing solution, if not a compulsory licensing scheme.

– “Copyright will be a part of social policy. We will move away from seeing copyright as a form of property,” says Tilman Luder, head of the copyright unit in the European Commission’s Directorate General for the Internal Market. At least, he says, that’s the trend in copyright policy in Europe.

* * *

Fordham (2)

I was also on the panel entitled “Unauthorized Use of Works on the Web: What Can be Done? What Should be Done?”

– The first point is that “unauthorized use of works” doesn’t seem quite the relevant frame. There are lots of unauthorized uses of works on the web that are perfectly lawful and present no issue at all: use of works not subject to copyright, re-use where an exception applies (e.g., fair use, implied license, the TEACH Act), and so forth. These uses are still relevant to the discussion, though: they are the types of uses that are at stake as we consider what can and should be done.

– In the narrower frame of unauthorized uses, I think there are a lot of things that can be done.

– The first and most important is to work toward a more accountable Internet. People who today are violating copyright and undermining the ability of creators to make a living off of their creative works need to change. Some of this might well be done in schools, through copyright-related education. The idea should be to put young people in the position of being a creator, so they can see the tensions involved: being the re-user of some works of others, and being the creator of new works, which others may in turn use.

– A second thing is continued work on licensing schemes. Creative Commons is extraordinary. We should invest more in it, build extensions to it, and support those who are extending it on a global level (including in Catalunya!).

– A third thing, along the lines of what Pat Aufderheide and Peter Jaszi are doing with filmmakers, is to establish best practices for industries that rely on ideas like fair use.

– A fourth thing is to consider giving more definition to rights that remain unarticulated: not the exclusive rights of authors, which we understand well, but the rights of those who would re-use works under exceptions and limitations.

– A fifth area, and likely the discussion that will dominate this panel, is the role of intermediaries. This is a big issue, if not the key issue, in most disputes that crop up across the Internet. Joel Reidenberg of Fordham Law School has written a great deal on this cluster of issues of control, liability, and responsibility. CDA Section 230 raises the issue in the defamation context as well. The question of course arose in the Napster, Aimster, and Grokster contexts. Don Verrilli and Alex Macgillivray argued the topic in the YouTube/Viacom context — the topic on which sparks most dramatically flew. They fought over whether Google was offering the “claim your content” technology to all comers or just to those with whom Google has deals (Verrilli argued the latter, Macgillivray the former), and over whether an intermediary can really know, in many instances, whether a work is subject to copyright without being told by the creators (Verrilli said that wasn’t the issue in this case; Macgillivray said it’s exactly the issue, and that you can’t tell in so many cases that DMCA 512 compliance should be the end of the story).

* * *

St. Gallen

Across the Atlantic, Prof. Dr. Urs Gasser and his teaching and research teams at the University of St. Gallen are having a parallel conversation. Urs is teaching a course on the Law and Economics of Intellectual Property to graduate students in law at St. Gallen. He kindly invited me to come teach with him and his colleague Prof. Dr. Beat Schmid last week.

– The copyright discussion took up many of the same topics that the Fordham panelists and audience members were struggling with. The classroom in Switzerland seemed to split between those who took a straight market-based view of the topics generally and those who came at it from a free culture perspective.

– I took away from this all-day class a sense that Swiss graduate students have quite a different set of experiences, as compared to US graduate students, with user-generated content and the creation of digital identity. The examples I used in a presentation on what Digital Natives mean for copyright looking ahead — Facebook, MySpace, LiveJournal, Flickr, YouTube, and so forth — didn’t particularly resonate. I should have expected this outcome, given that these are not just US-based services but also English-language ones.

– The conversation focused instead on how to address the problem of copyright on the Internet looking forward. The group had read Benkler, Posner, and Shavell in addition to a group of European writers on digital law and culture. One hard problem buried in the conversation: how much can the traditional Law and Economics approach help in analyzing what to do about copyright from a policy perspective? Generally, the group seemed to believe that Law and Economics could help a great deal, on some levels, though 1) the drivers pushing Internet-based creativity other than straight economic gains and 2) the extent to which peer production yields benefits in terms of innovation make it tricky to put together an Excel spreadsheet to analyze the costs and benefits of a given regulation. I left the room thinking that a Word document might be more likely to work, with inputs from the spreadsheet.

* * *


UOC (Catalunya)

The UOC is hosting its third Congres Internet i Politica: Noves Perspectives in Barcelona today. JZ is the keynoter, giving the latest version of The Future of the Internet — and How to Stop It. The speech just keeps getting better and better as the corresponding book nears publication. He’s worked in more from StopBadware and the OpenNet Initiative and a new slide on the pattern of Generativity near the end. If you haven’t heard the presentation in a while, you’ll be wowed anew when you do.

– Jordi Bosch, the Secretary-General of the Information Society of Catalonia, calls for respect for two systems: full copyright and open systems that build upon copyright.

– Prof. Lilian Edwards of the University of Southampton spoke on the ISP liability panel, along with Raquel Xalabarder and Miquel Peguera. Prof. Edwards talked about an empirical research project on the BT project formerly called Cleanfeed. BT implements the IWF’s list of sites to be blocked, in her words a blacklist without a set appeals process. According to Prof. Edwards’ slides, the UK government “have made it plain that if all UK ISPs do not adopt ‘Cleanfeed’ by end 2007 then legislation will mandate it.” (She cites Hansard, June 2006, and the Gowers Report.) She points to the problem that there is no debate about the widespread implementation of this blacklist and no particular accountability for what’s on it and how it is implemented.

– Prof. Edwards’ story has big implications not just for copyright, but also for the StopBadware (regarding block lists and how to run a fair and transparent appeals process) and ONI (regarding Internet filtering and how it works) research projects we’re working on. Prof. Edwards’ conclusion, though, was upbeat: the ISPs she’s interviewed had a clear sense of corporate social responsibility, which might map to helping keep the Internet broadly open.

For much better coverage than mine, including photographs, scoot over to ICTology.

StopBadware, CDT Complaint to US FTC

Today, we at StopBadware, along with our friends at the Center for Democracy and Technology, are filing our first complaint with the FTC about a badware application, called FastMP3Search Plugin.

As Christina Olson put it on the SBW blog, we are highlighting “FastMP3Search.com.ar for distributing badware to unsuspecting Internet users. FastMP3Search.com.ar is a site that offers MP3s for download—however, it requires users to download a plugin in order to download these songs. … This FastMP3Search Plugin (reviewed by StopBadware here) is one of the worst applications that StopBadware has ever seen. Not only does it secretly install additional software, but the software it installs includes adware, Trojan horses, and a browser hijacker—and these applications download even more applications in turn. What’s more, FastMP3Search disables Windows Firewall without the user’s permission, thereby allowing it to download all these malicious applications without Windows alerting the user to their badness. These applications then change the user’s homepage, pop up numerous advertisements (mostly for rogue anti-spyware applications), and hog system resources, which caused our test computer to slow down and randomly freeze.”

The complaint to the FTC is here. The report on FastMP3Search.com.ar is here.

There are two big issues in this case:

1) FastMP3Search.com.ar’s application includes many of the worst attributes of badware, all in one inconvenient bundle. It’s a parade of horribles. Among other things, the application can disable the firewall on your PC without letting you know, in addition to giving you all manner of pop-ups, a Trojan horse, and so forth.

2) This matter highlights the challenge of fighting bad applications that are (presumably) hosted and developed in places far from where their impact is felt. In this instance, we couldn’t find the developers of the application to tell them, as we endeavor to do in advance, that we were issuing a negative report about them. Their site is registered under the Argentinian country code, but there’s no particular reason to believe that the purveyors of the application actually reside there. The impact of the application is felt in many jurisdictions outside of Argentina, or wherever the purveyors’ home may be. The US FTC, and its counterparts around the world, have an extremely tough job when it comes to such an application. The FTC deserves a lot of credit for its work to combat badware, including recent actions to shut down some of the applications that CDT, StopBadware, and others have complained about. The FTC has also done terrific cross-border work in the spam and online fraud contexts.

We hope that by highlighting this application and by bringing this complaint, we can both raise consumer awareness about this bad application and encourage the FTC to take action against those who seek to profit from it. We are particularly grateful to our partners at CDT, including Ari Schwartz and his team, as well as the Berkman Center’s clinical program, led by Phil Malone, which helped in preparing the complaint.

Good companies sometimes release bad applications

A few days ago, at StopBadware.org, we released a report on AOL 9.0, the free software on offer from one of the giants of the Internet industry.

The back-story on this matter is that we wrestled hard with the right way to release this report. We followed our research process rigorously, pursuing tips and leads from dozens of users who submitted reports to us via StopBadware.org about AOL 9.0, and found that the application didn’t meet our guidelines on multiple fronts. (And yes, we have tested the apps of other big, mainstream tech companies; we are not just “picking on” AOL.) We tested AOL 9.0 many, many times; we shared the draft with a number of trusted advisors and with AOL itself; and we are confident that the results of our testing are accurate. But we also didn’t want to mislead users into thinking that AOL is malicious, when we plainly think they are not.

As I’ve said in every interview I’ve done on this topic, AOL does not belong in the company of the most malicious of spyware and malware providers. No question about it: AOL has been a leader for the past several years in working to fight spyware, whether through its involvement in the Anti-Spyware Coalition that Ari Schwartz of CDT runs or any number of other initiatives overseen by Jules Polonetsky. On his blog, AOL Vice-Chairman Ted Leonsis, the senior executive who has been with the company the longest, wrote, “No company on the Internet has done more to protect users from the dangers of spyware and adware.” That strong statement may or may not be true, but it is certainly the case that AOL has been on the side of the angels in this matter in many ways and on many occasions. We’ve tried to capture that nuance by putting this report in a newly-created category of “open inquiries” on our reports page, rather than issuing a final statement, especially while the company is working to improve the application and says it intends to meet the standards set in the guidelines we’ve published. I admire many people who work at AOL, including one of my oldest friends, from high school. And it’s essential that we make clear that AOL has stepped up to the plate to make changes, many of which they say are already in the works, destined for a new release next month.

Even good companies can release bad applications. Our concern related to AOL 9.0 is primarily about disclosure. The report lists our specific concerns, which I won’t repeat here.

Set aside AOL and our “open inquiry” for a moment, and consider the problem in a broader, abstract construct. If an ordinary computer user goes to a website and decides to accept the offer of a free software download:

1) Does the user have a good chance of knowing — more or less — what will happen to her computer when she clicks “I agree”?

2) Will the user know what’s running in the background after that download, and where she got it from?

3) And once the user decides she no longer wishes to have these services running on her computer, will she be able to get them completely off of it?

What I wanted to recount here is not our process before issuing the report, but rather just my personal experience trying this application at home — just one user’s view, setting aside all the guidelines and formality of StopBadware. If you doubt our findings, I urge you to try it.

The day before we issued our report (to be clear, the real testing was done in a pristine testing-lab environment, many times over), I went home and turned on an ordinary computer. It’s a few years old, a Dell, quite nice when I bought it and generally in great shape, but not exactly humming along on the latest dual-core processors. It is on a fast, wired broadband connection from Comcast in the Boston area, and I get good throughput on it.

I went to aol.com and found the free application, available for download from this page. (On the same page, you are also offered a version that comes with access services, for $9.95 per month, which I did not test.) Then you arrive at this page. You are asked to put personal information, nothing too revealing, into a form. But nowhere on this page can you find out what AOL is going to do with your personal information — such as a privacy policy — nor a statement of what will be installed on your computer if you do the download. (Update: if you hit the page from outside, rather than from within the sign-up process, there is now a link to the privacy policy in the footer. The privacy policy link seems curiously still absent if you are within the process — you have to try it, but I have a screen capture, taken after clearing the cache and so forth.)

OK, so, I make the leap of faith and enter my (correct) personal information, including name, address, phone, e-mail, and birthday. I come to another page asking me to choose a screen name. I choose the screen name I had when I got my first private, commercial e-mail address, which was in fact with AOL. It was still available. Then comes another page, asking me to agree to the Terms of Service and, incorporated by reference, to consent to the Privacy Policy. Are you forced to scroll through either of them before you click? Nope. Are you told “look in here to find out exactly what you’re downloading”? Nope.

(Pause here for a few other notes, of interest probably only to lawyers. One line in the relevant AOL privacy policy is the ominous, stand-alone paragraph: “Your AOL Member information may be supplemented with additional information, including publicly-available information and information from other companies.” Good to know, but does this mean Choicepoint, or something else? What will my info be supplemented with? How does that relate to all the mail AOL has sent over the years? One wonders also whether the user has in fact affirmed her consent, as a legal matter, by this means of “agreeing” to the Terms of Service and the Privacy Policy. Consider the line of shrinkwrap, browsewrap, and clickwrap cases, including the venerable ProCD, but also Specht v. Netscape Communications Corp., 150 F. Supp. 2d 585 (S.D.N.Y. 2001) and Rudder v. Microsoft, 1999 Carswell Ont. 3195 (Ont. Super. Ct.). A quick, though a bit dated, overview of the cases appears here. AOL surely knows all about this, given the Williams case (Williams v. America Online, Inc., 2001 Mass. Super. LEXIS 11, 43 U.C.C. Rep. Serv. 2d (Callaghan) 1101 (Mass. Super. Ct. Feb. 8, 2001)), in which a court found that there were issues as to whether users had in fact assented. I’m not positive, but there’s certainly a possibility that another judge might say that the user did not actually assent by virtue of this form of establishing “agreement,” since the user is not required to scroll through, or otherwise clearly presented with, all the relevant terms, other than via multiple hypertext links. In any event, while simple for users, this process of assent is probably not a best practice for ensuring that the user, especially a novice Internet user, knows what she’s getting into. Maybe there’s no issue here, I suppose, but the caselaw doesn’t seem to answer my question fully. I expect AOL has had wonderful counsel on this score, and that it’s been fully vetted, but I guess I’m still not sure from my own analysis and reading of the caselaw. Some clever e-commerce lawyers, like Ronald Mann and Jane Winn, who wrote the casebook on this topic, might well have some insights here.)

So, lawyerly musings about the intricacies of remote contracting aside, I consent by typing in the captcha letters. Then I get to the screen offering the download itself — one big bundle, apparently. The sign-up is super-easy, but I’m none the wiser, unless I follow an intricate series of links and tabs, about what’s about to happen to my computer. Even if you do follow all the threads, as we found, you have to get into the Privacy Policy to find some of the apps in the download — and even then, we couldn’t find a list of everything that we eventually downloaded. (Perhaps AOL is right and users do in fact look to a Privacy Policy to find out what apps are in the bundle, if they look for such information at all; that just doesn’t happen to square with my own instincts, but AOL no doubt has more data on this score than I do.)

I set about downloading the application. It took a while, despite the speed of the connection and the relative power of the computer — perhaps a sign that a lot was going on. During this time, my screen filled with various statements about the security software and so forth that I was getting, noting that for-pay upgrades would be available to make the services better. At no point did I have the chance to see a full list of what was arriving on my computer, or a chance to uncheck boxes so as to say that, no, I didn’t actually need more than one new media player, for instance. The process took maybe 15 minutes. After a reboot, I checked out what had happened.

AOL gave me a lot of stuff. This would not come as a surprise to anyone who has downloaded an application suite from AOL before, I suppose, and no doubt other leading Internet firms do the same thing. Several icons appeared on my desktop and in the tray along the bottom of my Windows 98 (yeah, I know, I said it was old) screen. A new search bar appeared in a second layer of the tray, branded clearly as AOL. As soon as I tried to go online, I found myself back in 1998 — in AOL’s garden. The experience wasn’t terrible, to be sure — nothing malicious that I could find — but not for me. I admit: I’m not likely AOL’s target customer anymore, even if I was in the 1990s. I decided I wanted to uninstall the whole thing.

I go to add/remove programs, because I know to do that. I suppose most users at this point do, thanks to the computing industry’s standardization around this method, at least in the Windows environment. The process of getting rid of the applications, even the ones that do uninstall, was for me exactly as described here. Let’s just say it took forever — far longer than the installation did.

All in all, let’s assume AOL fixed the pop-up that had no “x” to close it (it floated for days on our test machine, vaguely offering some form of upgrade related to connectivity services) and the .exes that didn’t fully uninstall (that seems to have been done; AOL says it has, and that the files were never doing anything bad while they were there) and so forth, as we outline in our report. Let’s assume also that the disclosure is improved.

Would it then add up to Badware, if all of these programs were disclosed and the user could go through and take them all off? Nah. But still pretty annoying? You bet. And is the average user likely to go all the way through this process of informing themselves and then uninstalling all these programs, with loads of reboots, etc.? Honestly, I don’t think so. But let’s be clear: this is not just an AOL problem — it’s an industry issue, one related to the bundling of applications. Do users really want this level of simplicity? Maybe. But maybe users deserve more credit: maybe they want the choice either to take the easy route OR to install only a subset of those applications. Maybe that’s possible within AOL 9.0, but I sure couldn’t find it.

What I’ve been so surprised at, both before and since releasing the report, is what other people have said to us. My e-mail box has filled up with messages from people saying, “I’ve been waiting for someone to say this,” or telling stories about how they’ve had similar experiences and felt powerless to do anything about it. It’s not hard to hear what users are saying about AOL 9.0. Read the comment fields of the many blogs, Slashdot postings, etc. that have covered this story. One user: “What that org. says about AOL is true. AOL 9.0 puts so much extra crap on your computer, doesn’t tell you about it, then tries to say it’s a vital part of the AOL program.” Another user told us, before we released the report: “I re-installed the newest software for AOL and it just keeps coming on and on whether I want it to or not. … I’ll NEVER put AOL on again! Warn people, this is something new.” The user comments, submitted to us directly or posted to the web before and after this report, tell a pretty clear story: at least some meaningful subset of users are not happy with what they’re getting.

Eric von Hippel is here at the Berkman Center today. He’s amazing — a professor at MIT’s Sloan School and champion of Democratizing Innovation. For the past three decades, he’s been talking about user-centric innovation. The Internet community is packed with people seeking to tell their stories back to the companies that offer services online. Sometimes users are cranks, for sure. But sometimes they speak very clearly and loudly, and with their feet — and much of the time, as von Hippel and others have shown, a subset of these users are in fact the innovators. (This is a big Dave Winer theme, too.) One argument goes: AOL users are not the innovators. But I don’t believe this, not for a second. There are almost 20 million users, and no doubt those users have had a lot to say to AOL over time that has made its way into the many fine applications AOL has developed and offered as part of its services. This is an era of user-centered innovation, not just in Web 2.0 but in many, many fields, as von Hippel has shown. Users of AOL 9.0’s free version are doing a whole lot of free reviewing out there and telling the story of their experiences across the web, some of which we’ve echoed on StopBadware.org. Eric von Hippel’s insight strikes me as relevant not just to AOL, but to all those offering bundles of applications as free downloads. Users have a lot to say, and some of it might lead to innovation, if the conversation is kept open. Put another way: instead of trying to make their applications more and more simple but also more and more closed, could AOL and others similarly situated make them more “hackable”?

Latest badware report

Our project that names names of code users should look out for, StopBadware, has released its latest report on an application that falls outside the scope of our guidelines. We think computer users should beware of the Jessica Simpson Screensaver (in case the name didn’t give it away already).

The idea of this project, in my view — yes, I am biased! — is a good one: we ask computer users to tell us about bad code they come across, or at least code they’re worried about. We put the info into a clearinghouse. Then our team of researchers tests the code. Where there’s smoke, we look closely to see if there’s fire. We work with a leading group of advisors and technology working group members to stay on the straight and narrow. And then we periodically publish what we come up with. The model comes from Jonathan Zittrain’s paper, The Generative Internet (required reading for those in our space, if you haven’t read it yet; forthcoming in the Harvard Law Review).

Over time, we hope that computer users will come to check with us before downloading an application, at least to know a bit better what they’re getting into (and to download anyway, if they are not troubled by our findings). And, as in several cases so far in our first few months of operation, we hope that those we write about will adjust their code, on their own, according to our recommendations. We hope the net effect will be a better, safer net, and computer users who come to trust, with reason, the code they decide to run on their PCs.

* * *

Today, back briefly in Cambridge, MA, for a working session on our Digital Learning project, kindly funded by the Mellon Foundation. We’ll have a white paper coming out of this project in the coming months.