The recording of Brad Smith, VP and general counsel of Microsoft, at Harvard Law School is posted here. His topic was the intersection of innovation, interoperability, and intellectual property.
Ira Rubinstein is here with us at the Berkman Center today to talk about Microsoft’s corporate policies on privacy. Ira was joined here yesterday by Brad Smith, Microsoft’s General Counsel, who spoke last night on the topic of innovation, interoperability and IP, and by Annmarie Levin, who, like Ira, is an Associate General Counsel and with whom we’ve been working on interop and innovation.
Ira’s lunch talk is on the company’s privacy guidelines, which have been posted online, in a 49-page document, since last October. Ira’s testimony to a US Senate committee on privacy in 2001 is also posted here.
As his slides and the policy document state, the core commitment is that “Microsoft customers will be empowered to control the collection, use, and distribution of their personal information.” This commitment carries through to a set of detailed definitions, and then to guidelines for privacy protections when developing software.
Microsoft has moved to a “layered” approach to privacy statements: a basic document with many links out to privacy statements by type of application or topical area. One discussion topic: can a layered approach give users greater disclosure and clarity?
Microsoft has stated its support for comprehensive privacy legislation in the United States. My comment, not Ira’s: one idea for such legislation would be a format regulation promulgated by the US FTC that ensures consumers know where to look for information about how their personal information is handled.
The kinds of personally identifiable information the policies need to cover are changing as the company continues to grow and add business lines. About six months ago, Microsoft announced a new initiative in the health care domain, covering electronic medical records and the like. All of a sudden, the type of information that Microsoft might collect about you has expanded radically.
Much of the conversation, prompted by JZ and Ben Adida, revolved around a lawyer’s problem: what happens after a subpoena arrives seeking personally identifiable information. Ira: “I agree that data minimization is a desirable goal” from a privacy perspective. The hard question buried here is the role of technology intermediaries in retaining information that might help law enforcement vs. protecting the privacy of customers.
Should Microsoft, and other companies wishing to be leaders in the security space, let people be idiots? With the “Stop Phishing Filter,” Microsoft gives you a series of choices: set the phishing filter to automatic, set it to manual, or “ask me later.” What it doesn’t offer is “no thanks.” Is “no thanks” a choice they should offer, even if it would be a very poor choice for a user to make?
JZ is the semi-formal respondent. He keys in first on the notion of making affirmative choices to design privacy protection into software. JZ wants an interface where a consumer could check in on the conversations going on in the background as clients connect back to servers, or a periodic audit, where you’re prompted to review all the pinging that’s gone back and forth. He also flags the possibilities for government surveillance in a world of software-as-a-service instead of products.
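A minimal sketch of what JZ’s audit idea might look like in code. All names here (the `ConnectionAudit` class, the example hostnames) are my own hypothetical illustrations, not anything Microsoft ships:

```python
from dataclasses import dataclass
from datetime import datetime
from collections import Counter

@dataclass
class Ping:
    """One background connection from a client back to a server."""
    timestamp: datetime
    destination: str   # hypothetical server hostname
    purpose: str       # e.g. "update check", "usage stats"

class ConnectionAudit:
    """Keeps a local record of background pings so a user can
    periodically review what the software has been saying, and to whom."""

    def __init__(self) -> None:
        self.log: list[Ping] = []

    def record(self, destination: str, purpose: str) -> None:
        """Log one outbound connection as it happens."""
        self.log.append(Ping(datetime.now(), destination, purpose))

    def summary(self) -> dict[str, int]:
        """Counts of pings per destination: the view a periodic
        audit prompt might surface to the user."""
        return dict(Counter(p.destination for p in self.log))

audit = ConnectionAudit()
audit.record("update.example.com", "update check")
audit.record("telemetry.example.com", "usage stats")
audit.record("update.example.com", "update check")
print(audit.summary())
```

The point of the sketch is that the raw log stays on the client; the audit interface only summarizes it for the user, which is exactly the inversion JZ is after.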