Was Apple's Failure to Promptly Alert Its Customers About the FaceTime Bug Illegal?

A law professor explains New York's probe into Apple's response to the bug and the difference between privacy and security issues under consumer-protection regulations.
The Apple store in Beijing, China. In response to a recent bug with Apple's FaceTime application, New York is opening an investigation into the company.

New York State is investigating Apple's failure to promptly warn its customers about a FaceTime bug that allowed callers to eavesdrop on those they dialed, even if the call wasn't answered. On Wednesday, New York Governor Andrew Cuomo and state Attorney General Letitia James announced the probe, which will look into whether Apple's seemingly laggard response to the security issue violated consumer-protection laws.

The FaceTime security issue—already nicknamed FacePalm by security experts—was discovered last week by a mother and son in Arizona. If a FaceTime call was initiated and the caller added themselves to a Group FaceTime conversation, the initial call recipient's phone would immediately begin transmitting audio, even if they never picked up. According to the New York Times, the mother and son in Arizona spent a week trying in vain to alert Apple to the issue, before 9to5Mac, an Apple news site, reported on the bug on Monday, prompting the company to disable the Group FaceTime feature.

"We need a full accounting of the facts to confirm businesses are abiding by New York consumer protection laws and to help make sure this type of privacy breach does not happen again," Cuomo wrote in a statement.

To unpack what the investigation might mean for Apple and its users, as well as the legal context for the security and privacy issues at hand, Pacific Standard spoke to Justin Hurwitz, a law professor and co-director of the Space, Cyber, and Telecom Law Program at the University of Nebraska.


What laws govern consumer expectations around security, privacy, and a product not working as expected?

It's really important to start by emphasizing that this is a bug that led to a security problem. This is not—I cannot emphasize enough—a privacy violation. Unfortunately, the New York attorney general characterized this as being about privacy rights, and Senator Amy Klobuchar had a tweet to similar effect, conflating this situation with the ongoing discussions and debates about privacy and saying it demonstrates the need for federal privacy legislation. These sentiments are really dangerous and demonstrate a fundamental lack of understanding among individuals in government that makes it very difficult to have a meaningful conversation about privacy and security issues.

Can you break down the distinction between security issues and privacy issues?

Privacy is about how someone you've given access to your information handles that information. So, for example, you give your information to Facebook, and they are then authorized to have that information: Do they misuse it?

Security is about how someone who is not authorized to have your information obtains it. Did that person break into the system, or does the system not operate as it was designed to operate?

With software, privacy is about our expectations for people who handle data. Security is a more objective conversation about whether a system is designed to keep bad actors out, and whether bad actors are kept out.

Isn't there some crossover between these categories?

There are definitely overlaps. If you have a Venn diagram, there's an area where security and privacy absolutely do overlap.

So is the FaceTime issue a situation where a security issue is also a privacy issue?

There is no law governing software design and implementation that makes it illegal to make software that has bugs in it. This is an area where the law has struggled over the last 30 years or so. Ordinary products are governed by product-liability law, but the software running on devices and computers has generally not been considered a product for the purposes of the law. So software is governed by contract law instead. The terms of service that you sign or agree to when you start using any application are all going to include language that says there might be bugs in here, so use at your own risk.

Does the law have anything to say about bad software, like the FaceTime bug?

The federal and state wiretap acts are really important here. These don't govern how Apple designs its software per se, but they do govern how, if there are bugs, people take advantage of those bugs. If I were to use FaceTime to effectively eavesdrop or spy on you, that would be governed by state and possibly federal wiretap statutes. So, the conduct that this FaceTime bug allows is illegal, full stop. It's not illegal for Apple to have this bug, but it is illegal for someone to take advantage of this bug to spy on folks.

What about consumer-protection law, which seems to be the focus of New York's investigation?

If Apple makes material assertions about the security of its products and it turns out those products are insecure, then Apple could be engaging in deceptive activity and violating consumer-protection law that makes it illegal to make deceptive claims about products.

But there's no such thing as software that doesn't have bugs: Any software is going to have bugs once it's above a trivial level of complexity. So then the question is going to be, was Apple's response to the problem sufficient and quick enough?

And was it?

There is some legitimate concern that Apple initially ignored this issue, but its ultimate response, I think, took it very seriously.

It seems like they weren't responsive, at least publicly, for a week. But we are living in this bizarre Twitter world where a major response to every incident is expected within five minutes. When it comes to security, we want companies to be able to be public about their problems—both from a research perspective and from a customer-service perspective. If they know that the government is going to come down hard on them when any flaw is announced if they don't have a comprehensive solution within 24 hours, then companies are going to be secretive about problems, which is the exact opposite of what we want. So, the New York attorney general's response to the situation is a really bad one in terms of improving the security of consumer devices.

So what should New York's response have been?

If the New York attorney general was serious about this and this wasn't a PR-politicking stunt on the state's part, there did not need to be a Twitter announcement about it. Ideally, you'd have a couple of attorneys in the office get on the phone with Apple, maybe send a brief subpoena to get documents looking at the internal response, and then find out if the Apple security team looked at the bug and responded appropriately.

This interview has been edited for length and clarity.
