Yesterday we posted part one of an interview with Torin Monahan, co-author of SuperVision: An Introduction to the Surveillance Society, on the NSA data mining scandal. Monahan described some of his research into data sharing among intelligence and law enforcement agencies, and argued that we are seeing not only a weakening of the laws designed to prevent abuse of private data, but also a cultural shift in favor of surveillance.
In part two, Monahan, a professor of communications at the University of North Carolina-Chapel Hill, speaks to the private sector side of the scandal. How is it that some information stays private and some doesn’t? Why?
This interview has been slightly condensed for length and edited for clarity.
I’d like to ask about the Silicon Valley companies named in the scandal. Lots of people have enormous incentives to dig digital dirt. Yet we don’t generally see digital profiles becoming public, though the companies implicated in the NSA scandal have that information. You don’t hear that some public figure just downloaded Fifty Shades of Grey, or has an Internet porn habit or a bad credit rating.
I don’t know that that’s quite true. I think the kinds of scandals the private sector has been involved in have to do with breaches of their own data security: releasing all kinds of confidential customer information because the data were not properly secured.
They get hacked.
Well, sometimes they get hacked. Sometimes they leave laptops in the wrong place. And there has also been a lot of corporate espionage, and corporations spying on their own people. I’m thinking about the Hewlett-Packard case from a few years back: hiring private investigators to spy on board members and digging through phone records. And think about the phone-hacking scandal in the U.K., too. There have been a lot of privacy breaches that have made news, but maybe not of the dirty-laundry sort one might expect from political campaigns.
Still, think of a Lee Atwater type. I’m bringing this up because it suggests the private companies are hewing to some sort of privacy standard. They have a ton of dirt on a lot of people.
My read on that would be that the motivations of those private companies are often different, and possibly at odds with the government security apparatus. If Instagram or Facebook or these other sites were openly sharing your information without any legal mandate to do so, then that could negatively affect their customer base and their brand image and other things that companies care deeply about, because profits are tied to it.
They’re perfectly OK with sharing information, and they do so constantly. But they need some kind of alibi to do so. They need a scapegoat like “the government made us do it” or “we did our best to anonymize your data and someone hacked us, and it wasn’t quite as anonymous as we thought it to be.”
So it could just be business models and the different operating norms of those organizations.
Whether it’s private or public data, the NSA scandal seems likely to provoke a regulatory battle. To alter the program, you’d need Congress to recognize the growing impossibility of remaining anonymous, and regulate digital data within an inch of its life.
There are a couple of issues here. One has to do with the degrees of transparency we have in society. For the most part we have asymmetrical transparency, where the major organizations, whether government or industry, and their practices are relatively opaque, and therefore not very accountable. And that’s true with these NSA programs too. You discover this has been going on for some time and we didn’t know about it. It took a leak, because we don’t have transparency into how these organizations that govern our lives are behaving.
On the other hand, we have almost total transparency when it comes to individual behaviors, actions, and even beliefs. So that’s the issue here. It’s not simply “technology is advancing too quickly and we can’t do anything about it,” but “what kinds of arrangements do we want to have in place, in which we can have some invisibility?” And, maybe, organizations have to be more transparent and accountable, regardless of what their practices are.
How does that transparency work in practice?
If you look at other countries, you see that they do things differently than the United States. You can have programs where you have to opt in before your information is shared, instead of having sharing be the default. It’s not destroying business. It’s not destroying companies. They find innovative ways to comply with those regulations. So we can do the same thing here, and I think those are the kinds of questions we need to be asking.
Is there a technical solution, rather than a legal or political one, for enhancing privacy?
Technically it is possible to develop systems—and there are a number of good ones out there—that embody privacy-enhancing technologies.
But for me the interesting part of the articulation is the legitimacy: “If they have a legitimate reason for listening in, they could.” Well, that is open to interpretation. What is legitimate? How are they going to know something is legitimate when they hear it, when they see it? How are they going to prove it or substantiate it in court?
So it’s the issue of legitimacy that is at the heart of the current debate here. Do you have a legitimate reason to listen in to people’s phone calls or look into the patterns of their travel? I suspect that most people are feeling that those activities are illegitimate. And that’s why there’s so much controversy right now over the NSA programs.
So tools like Tor, for example, aren’t the solution?
The technological community has been pretty good about developing applications to try to mask some of the personal or private activities of users. But it is a spy versus spy dynamic, where as soon as one application is developed, something else is invented to circumvent it. That postpones the conversation about what we want our information systems to look like and what kind of governance we want to have.
If someone were recording this conversation, and a system was in place that flagged it for review, sent it up a chain of command to a court or an officer who evaluated it and pressed “delete,” what is the problem there? At some point we either need to trust a national security system, or get rid of it, right?
A few responses. Because of the relative opacity, we may never know what consequences we are being subjected to. Maybe I’m being singled out for enhanced screening much more than the next person. In the absence of any evidence, I don’t know whether it’s because of this conversation or because of something else.
The second issue has to do with how all these data are converging and how they can be fused together, not just by government systems but by private ones. You can imagine the argument “I have nothing to hide if I’m not doing anything wrong.” But we’re all doing something wrong at some point in our lives, from someone’s perspective. That could be your employer, who doesn’t like your political leanings or your lifestyle. That could be your insurance company, which feels you live in an environmentally dangerous neighborhood.
But in theory they don’t have access to this.
What I think is revealed by this NSA program is the easy translation and exchange of information across domains. Homeland Security fusion centers are collecting information from data aggregators who are tracking our credit card purchases and our driving records and everything else, and that exchange is becoming a two-way street: you can have companies like Intel or Boeing, or even hotels, or the owners of utilities, claiming that theirs are critical infrastructures. So now you have data flowing from the law enforcement community to the private sector, because it’s deemed pertinent information for covering their own risks.
We saw this with Occupy Wall Street, where there was a real synergy between the banking industry and the police. They were exchanging information, and it went in both directions. Those precedents have been established. These programs aren’t going to remain siloed. Instead, they represent a very easy flow of personal data across those silos.
What’s the solution to this? To create obstacles to the information flow, or to create reliable oversight of the flows?
There are a number of solutions. One would entail what some of my colleagues have called maintaining the “contextual integrity” of our data: data collected in one place for one purpose shouldn’t be transportable to other places for other purposes without the consent of the person involved. There are schemes to do this. It could be a data-exchange scheme where you could sell your data, or give other people rights to your data, for marketing or other purposes.
Other possibilities are to obfuscate our data. Make it less specific, more crude, so that it’s less revealing of all our activities. We can dumb it down.
What’s an example of that?
One example could be marketing. Instead of linking your frequent-shopper card to your credit card to your online searches and bundling all those together to create a profile of you as a specific individual, what if some of those were de-linked, or they couldn’t identify your demographics specifically? It could be “someone who got a high school or a college education,” but it wouldn’t be “you attended this college and you got these grades and here was your major.” The idea is that it’s OK to traffic in crude categories, but not fine-grained data, unless you have public-safety reasons to do so.
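A minimal sketch, in Python, of what these two ideas might look like in practice. It is purely illustrative: it is not drawn from the interview or from any real company’s system, and every name, field, and category in it is invented. The consent check stands in for “contextual integrity,” and the coarsening step stands in for the “dumbing down” of data Monahan describes.

```python
# Hypothetical sketch only: not any real system's API.
# (1) Contextual integrity: a record carries the purpose it was collected for
#     and can't be released for a different purpose without explicit consent.
# (2) Obfuscation: what does get released is coarsened into crude categories
#     ("college" rather than which college) instead of fine-grained detail.

from dataclasses import dataclass, field


@dataclass
class Record:
    name: str
    age: int
    college: str                 # fine-grained attribute
    zip_code: str                # fine-grained attribute
    collected_for: str           # the purpose the data was gathered for
    consents: set = field(default_factory=set)  # purposes the person opted in to


def release(record: Record, purpose: str) -> dict:
    """Refuse to repurpose data without consent; release only a coarse view."""
    if purpose != record.collected_for and purpose not in record.consents:
        raise PermissionError(
            f"Data collected for {record.collected_for!r} can't be reused "
            f"for {purpose!r} without consent."
        )
    # "Dumb it down": share broad categories, not the specifics.
    return {
        "age_band": "under 35" if record.age < 35 else "35 and over",
        "education": "college" if record.college else "no college",
        "region": record.zip_code[:3] + "xx",   # truncate to a broad area
    }


if __name__ == "__main__":
    alice = Record(
        name="Alice", age=29, college="State U", zip_code="27514",
        collected_for="order_fulfillment", consents={"order_fulfillment"},
    )
    print(release(alice, "order_fulfillment"))   # coarse profile only
    try:
        release(alice, "marketing")              # no marketing consent given
    except PermissionError as err:
        print(err)
```

The point is the design choice rather than the particular code: reuse of the data for a new purpose fails by default, and even a permitted release exposes only crude categories rather than a fine-grained profile.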
Wouldn’t that just be defeated by the tendency among consumers to share everything? Facebook asks me where I went to college every few days, and most people tell them. Amazon’s business model is based on granular data, and they actually do recommend books I end up liking, so I’m inclined to let them learn more about me.
Absolutely. This is one reason I don’t put much faith in the various technological solutions. If you’re given the choice between convenience and the promise of some kind of anonymity that is vague and uncertain, then most people are going to choose convenience.
But the other thing is that people are often coerced into disclosure. Take Facebook. The social cost of not being a part of social media could actually be so detrimental that it’s no longer viable to stay unplugged, or to go without a smartphone or any of the other gadgets in our lives. So there’s a coercive element. And that’s completely intentional, of course, because it’s great for these platforms.
You won’t get a job if you’re not on LinkedIn.
I wouldn’t say that, as someone who has a job and is not on LinkedIn.
When workplace surveillance happens, when people’s keystrokes are being monitored or their telephone calls are being listened to, if they’re unaware that that’s happening and then find out, they experience a great sense of violation. They’re angry about it. If there’s full disclosure, “we are monitoring your phone calls and we’re tracking your keystrokes, and you should just know that this is happening,” people tend to be OK with it, because they can self-regulate and they don’t feel they’re being trapped.
So there’s a psychological aspect to this, and we see it in the NSA scandal as well: this was completely hidden. It was secretive. It was without public approval.
Isn’t a certain amount of secrecy necessary in the national security realm? If people know they’re being monitored, and they’re planning something nefarious, they change their behavior.
I’ve heard that argument, and, in this case, I don’t think it holds much water. What are the alternative forms of communication that people are going to use? A carrier pigeon? You’re going to not use a phone, not use the Internet?
Meet in a café.
Sure. And there are clever ways to try to evade scrutiny. People communicating through online games is one method the intelligence community is really worried about. It looks like you’re playing a game, but really you’re using it as a communications platform.
On the other hand, this is a wholesale collection of all of our data that can be analyzed at a later point for any kind of connection one is curious about. And that’s a different matter entirely from a targeted investigation, where there’s reasonable suspicion that someone is involved in criminal or terrorist activity. That’s the legal threshold. We have the federal regulations to back that up.
What these programs are doing is effectively circumventing that legal restriction, and then massaging it after the fact to say “well, we collected the data but we’re not really going to act on it unless we think we have to.”
I don’t think that’s legal.