We’re living in “a very interesting data-driven time,” according to Microsoft General Manager for Trustworthy Computing Governance Peter Cullen.

Microsoft played host in Washington, DC, on March 5 to a discussion of privacy models and governing frameworks, featuring Cullen, FTC Commissioner Julie Brill, American Civil Liberties Union (ACLU) Counsel Chris Calabrese, and Professor Fred Cate from the Indiana University Maurer School of Law.

When the Organization for Economic Co-operation and Development (OECD) first developed its privacy principles, email addresses did not yet exist. Cullen said that, in a world where more data will be created than collected, data protection will see an “evolvement,” but he warned against letting the perfect be the enemy of the good in launching new privacy frameworks, including comprehensive legislation.

Cate pointed out that the OECD principles focused on “something of a legal fiction”: a data transaction that never existed in reality and still doesn’t. Now data is collected by real-world sensors and created by individuals and disparate collectives on a whim. And while most people may acknowledge that the notice-and-choice privacy model is not sufficient to protect consumer privacy, and that privacy policies by themselves cannot protect it, agreeing on the norms we need is no easy task. Cate denigrated the “notice system” as a bureaucratic approach to privacy, calling the HIPAA patient privacy notices a “pass through where we claim that these are being read, but we know they’re not.” It is “time to stop saying gotcha to the individual,” Cate declared; instead, “as a principle of accountability and stewardship,” we should retain the structure of the OECD privacy principles but “move away from the fictional transaction” toward a focus on the use of data.

Cate sketched out three categories of data use. The first is the obvious, expected or assumed use of data, where notice is a waste of time, such as the need for credit card data to make a purchase. The second is where we agree as a society that certain uses are harmful, threatening or creepy, and we “take them off the table.” The third involves a cost-benefit analysis, taking that analysis into account during the design process (an approach known as “privacy by design”); this may require consent in some cases, but not in others. The biggest hurdle facing this categorization is how to assess intangible harms and how to develop a process for identifying them.

Brill observed that “Big Data analytics challenges the OECD privacy framework in a fundamental way.” She emphasized the need to maintain privacy and consumer trust while pursuing the benefits of Big Data. “Risk mitigation” is an essential approach, she said, and the FTC hopes to help raise its importance within companies, such as by encouraging the appointment and empowerment of Chief Privacy Officers. The FTC, she said, already focuses on privacy harms, but defining harm “is a critical question.” Brill thinks of “deception as a subset of harm.” A thornier question, however, is “who decides the risk” and “whether we’re involved with risky use.”

She referenced as “an incredibly interesting idea” a proposal for every organization to have an internal privacy advisory board, modeled after the Institutional Review Boards that federal human subjects rules require of nearly any entity receiving federal funding. Referencing an IRB-approved study in North Carolina involving the sharing of women’s mammography data, which caused a huge uproar among the patients involved, Brill warned that she would still be concerned about “leaving the consumer out of that equation,” since “the consumer needs to be a part of how that risk gets assessed.” Using data for purposes “out of context” has become a major focus of the FTC, she said, pointing to issues such as retail shopper location tracking. “I think we need a lot more transparency,” Brill commented. “Consumers need to know how their data is being used” and have tools that let them “see what is going on and give them choices.”

“That kind of sunshine will be a good disinfectant,” she concluded.

Brill was recently featured in MRA’s top 10 list of government players in consumer data privacy.

Reacting to the suggestion that moving beyond a notice-and-choice privacy model would mean privacy notices go away, Cullen exclaimed that “there is no notion… that the concept of a fully transparent privacy notice goes away.” Notices “longer than Hamlet and Macbeth” will still be around, and there remain scenarios where a consumer should have to be actively engaged and consenting.

However, said Brill, dumbing down data transparency is not the way to go. She called upon companies to sell privacy: “Sell it. You have advertisers, you have marketers, you have ways to convince consumers of the benefits, explain how to handle the harms.”

Calabrese stressed the utility of privacy policies, even though everyone hates them, as “a defense of consent.” He worried about taking notice and consent out of the equation, “since consumers, for good or ill, actually make a decision.” Once that decision point passes and the data gets used in a different way, as in Big Data, consumers don’t usually revisit their decision. “Automated decision-making,” which is “at the heart of Big Data,” poses a huge threat to consumer choice.

“A standardized way that you can opt out or interface with data collection and figure out what is being collected is an incredibly useful idea,” Calabrese said. Consumer technology advances may hold the key, he suggested, since the mobile devices we all carry should be able to automatically tell real-world sensors what kinds of privacy choices consumers want to make.

Perhaps the biggest area of potential disagreement in the discussion was how to determine what information is particularly sensitive. “If I’m a domestic violence victim, worried about stalking, the mailing address I give for the delivery of a new book is sensitive information,” Calabrese offered. But ultimately, he reverted to a more dogmatically negative view of the data business: “the more I know about you the more I can control your behaviors,” and that kind of anti-consumer coercive control is “enabled by data collection” in the U.S.

Cate took the opportunity to lambaste what he considers the misguided way we approach notice in the U.S. Mailing notices to each consumer about data breaches, he suggested, is much less effective than a central website with timely notification from the breached companies, since companies “currently use notice to shift responsibility.” Cate joked that notice and choice on his mobile phone for system updates amounts to: “Do you agree, or do you want this phone to be a brick? I agree, of course I agree.”

Assigning liability, he suggested, could be an important measure for privacy protection in the U.S. If the data collected by a private entity causes harm, that entity should be liable for damages. This would help data users, Cate said, internalize how the data will be used and what dangers may lie ahead. The case for a more traditional tort-law approach to privacy, frowned upon by many foreign countries, is exemplified by the evolution in data collection and use. “15 years ago,” Cate said, “the big news was that the average consumer’s credit report was being updated 4 times a day,” which would have made consent cumbersome to achieve; that was why Congress passed the Fair Credit Reporting Act (FCRA). Now, “we’re closer to 4 times a nanosecond,” and “we’re mis-focusing consumer concerns” when there are “a thousand ways privacy can be compromised” beyond the laws currently being debated. Cate warned against inadvertently turning consumer privacy into a minor “speed bump.”

“Data minimization,” Brill stressed in return, will be “where the rubber hits the road” in the differences between the ideal frameworks and “those of us who deal with the data breaches and the vast collection of information.” Some people may want to move away from collection and focus on use, but she continued “to believe that collection is a very important issue.” Minimizing data “up front” allows companies to “minimize harm that can come from data breaches.” Brill felt that there is “too much information flowing.”

Calabrese applauded Brill’s focus, calling data minimization “one of the biggest protections” against government privacy violations: “If they don’t have the data, they can’t abuse it.”

In contrast, Cate suggested that data minimization is “perhaps desirable, but unachievable as a legal matter.” Consumers are willingly parting with tons of their personal information, and “we’ve been useless in stopping it.” If you “dangle 10 cents off a coffee,” consumers will go along with it. On the other hand, a company may make a logical, market-driven choice to minimize its data collection and retention. However, “we see new uses for data every day,” so it may take a while for companies to get on board.