Tim Armstrong, former Berkman fellow and now a prof at the U of C, writes: “… the permanence of networked information has costs, too, which (like the benefits) are only beginning to be explored. Members of the generation just behind mine, who have grown up reflexively creating and posting information online, are learning that digital is forever — if you’re a job applicant (or even a camp counselor), anything that has ever been written by (or about) you online is, at least potentially, still there. (Back in my day, we used goofy aliases to hide our online identities; but I gather that practice has been fading.) Once information is online, it turns out, it may become quite hard ever to get it back offline again — the Wayback Machine preserves old web pages; Google Groups archives Usenet posts; and it’s only a matter of time before somebody comes up with the magic bullet that automatically archives IRC and IM conversations and makes them searchable. Even your deleted e-mails aren’t necessarily gone; they may still exist on backup tapes where law enforcement authorities can get them. The durability of digital content raises problems that touch on both informational security and individual privacy.”
A call to action: the security infrastructure for RSS is not yet where it needs to be if this technology is to go mainstream while adequately protecting user privacy.
I was resetting my Bloglines account this morning, adding some new feeds, taking out some that I don’t read, and so forth. I searched on a friend’s web moniker (“Whirlycott”) to find whatever feeds he might be offering. Up popped a feed, related to a web-based invoicing service he uses, entitled “[His Name] Invoices,” to which I could subscribe in Bloglines. I am not sure what it would have rendered — I did not subscribe! — but I thought it worth mentioning to him. It turns out he has been mad about this privacy problem for months. His initial post, worth reading and reviving as an issue of public discussion, is here.
I grant that this may not be (just) a “Bloglines issue” but rather an “RSS industry” issue. But it’s a real problem if we are to continue to express ourselves via these citizen-generated media tools that offer RSS feeds, and more so if we move into the promising realm of using RSS feeds to support other productivity-type tools. The privacy problems that already exist in cyberspace are enough to tackle; we need to get in front of an RSS privacy problem before it grows into yet another widespread issue. After this morning’s experience, it’s clear to me it’s already a problem.
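Part of the problem, as I understand it, is that a “private” feed is often protected only by an unguessable URL; the moment a subscriber pastes that URL into a shared aggregator, the service may store it, index it, and surface it in public search. A minimal sketch (with a hypothetical feed URL, user, and password — none of these come from the Bloglines incident itself) of the standard alternative, HTTP Basic Auth over TLS, where possession of the bare URL is no longer enough to read the feed:

```python
import base64
import urllib.request

# Hypothetical private feed URL. An unguessable "secret" path alone is
# weak protection: any aggregator a subscriber pastes it into may store
# it, index it, and expose it in public search results.
FEED_URL = "https://invoices.example.com/feeds/acme.rss"

def authenticated_feed_request(url, user, password):
    """Build a feed request carrying HTTP Basic Auth credentials,
    so the URL by itself no longer grants access to the feed."""
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

# The aggregator would send this request on the subscriber's behalf;
# anyone who finds only the URL gets a 401 from the server instead.
req = authenticated_feed_request(FEED_URL, "alice", "s3cret")
print(req.get_header("Authorization"))
```

Of course, this just shifts the trust problem: the aggregator now holds your credentials instead of your secret URL, which is exactly why this feels like an industry-wide design question rather than a bug in any one service.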
(Following the thread a bit, there’s another post in the series, including a note from some months ago by someone who appears to be with Bloglines, acknowledging that they know it’s a serious problem. How can we fix it, gang? If it’s not a Bloglines-only issue but a community issue, what has to get done?)
The combination of our conference this week on digital identity, JZ’s paper and forthcoming book on Generativity and his OII inaugural lecture, this morning’s WSJ, and all manner of other things has convinced me that we need a new framework for thinking about privacy and security in the digital world.
On a plane this morning from SFO to PDX, I found (at least) three articles that made this problem plain to me, again. One was the piece on the Consumer Privacy Legislative Forum’s day on the Hill yesterday (see the CDT et al. statement), in the context of which Meg Whitman of eBay and Nicole Wong of Google and others made the case for laying “a foundation for a long-term approach to privacy protection” (Whitman, as quoted in the WSJ). Wong wrote, correctly in my view, that “this matrix of [privacy/security] laws is complex, incomplete and sometimes contradictory.” She went on to say: “On an Internet beset with spyware, malware, phishing, identity theft, and other privacy threats, enforcement of privacy protections has become an industry-wide challenge.” The WSJ story on MySpace and its advertiser relationships — in the wake of a $30 million lawsuit against the company related to the online safety of a user — made the same point, implicitly. A nice Web 2.0 story on Boston-based Tabblo didn’t even have to make the point that anyone can post photos of anyone online, mash them up into a collage, and publish them — to anyone else, and everyone else.
The creative opportunities of the web have never been more wonderful and should be embraced. But the privacy and security stakes are rising as we bring our digital identities online, more and more, and as our digital native children start to experience the good and the bad of this brave new world. What’s the role of schools, and universities, and parents, and kids, and companies, and governments? As the wisdom of the crowd is relied upon to make more and more decisions, what’s the due process when your privacy and security are at stake, if things go wrong? JZ has some good ideas, and so do others. We need to get on with the planning and the building of this foundation, and fast.
(If you’re having trouble grasping the digital ID part of this equation, zip over to ZDNet, where David Berlind does his usual amazingly lucid job of putting it all in context in his review of the Higgins Trust Framework — and n.b. the “spectrum” that he describes, which is right on. Berlind writes: “By the end of the panel, I was visualizing a spectrum of attitudes about technological expression of identity that range from the very negative to the very positive. On one end are the warning signs about what could happen if the right checks, balances, and governance aren’t in place. On the other end is hope. Hope that identity could be tapped in a fashion that serves the greater social good.”)