Whether your electronic life is yours alone to share fuels the debate over the propriety of the U.S. government's trawling of phone and Internet sources. But the face you present to the world, literally your face, raises new questions about privacy as facial-recognition software is run against the various databases local governments have created.
That’s the takeaway message of a piece in today’s Washington Post, which reviews the legal and legislative terrain of law enforcement using all those driver’s license photos to track down (for now, at any rate) bad guys.
With a whiff of the heavy breathing de rigueur in these pieces, the authors write:
But law enforcement use of such facial searches is blurring the traditional boundaries between criminal and non-criminal databases, putting images of people never arrested in what amount to perpetual digital lineups. The most advanced systems allow police to run searches from laptop computers in their patrol cars and offer access to the FBI and other federal authorities.
The Post reports that 37 states use facial recognition on their driver's license and public-ID databases to prevent fraud, and of those states at least 26 allow law enforcement to search the photo troves. A handful of big-population states, among them California, Ohio, Michigan, and Virginia, are not on any such system at present.
Add in other sources of mugshots, whether public (arrest records, passports) or private (Facebook and other social media, including dating sites), and there's a pretty comprehensive collection of citizens' faces, tagged with their identities, for the authorities to peruse.
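At the core of the database searches described above is a simple operation: comparing a probe face against a gallery of identity-tagged faces and returning the closest match. Here is a minimal sketch of that matching step, assuming faces have already been reduced to numeric embedding vectors by some recognition model; the toy three-dimensional vectors and names below are invented for illustration only (real systems use high-dimensional embeddings from trained networks):

```python
import math

# Hypothetical toy gallery: identity -> face embedding vector.
# Real embeddings have hundreds of dimensions; these are stand-ins.
GALLERY = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.3],
    "carol": [0.2, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(probe, threshold=0.95):
    """Return the best-matching gallery identity, or None if no match clears
    the similarity threshold (i.e., the probe stays unidentified)."""
    best_name, best_score = None, -1.0
    for name, embedding in GALLERY.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# An unknown face whose embedding closely resembles "alice":
print(identify([0.88, 0.12, 0.22]))  # -> alice
```

The threshold is the policy-relevant knob: set it low and the system names nearly everyone, including people it shouldn't; set it high and it returns matches only when confidence is strong. The "perpetual digital lineup" concern is precisely that the gallery here is not a mugshot file but the driver's license rolls.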
These privacy concerns didn’t arise because some reporters started asking questions. They were front and center two years ago in at least one set of experiments, sponsored by the National Science Foundation and the U.S. military, that used publicly available pictures, cloud computing, and off-the-shelf software to identify people on the fly and then predict sensitive information about them, all via a smartphone.
As that study’s authors, led by Carnegie Mellon University’s Alessandro Acquisti (who made waves four years ago with his work predicting people’s Social Security numbers), wrote in presenting their work:
Our study is less about face recognition and more about privacy concerns raised by the convergence of various technologies. There is no obvious answer and solution to the privacy concerns raised by widely available face recognition and identified (or identifiable) facial images. Google's Eric Schmidt observed that, in the future, young individuals may be entitled to change their names to disown youthful improprieties. It is much harder, however, to change someone's face. Other than adapting to a world where every stranger in the street could predict quite accurately sensitive information about you (such as your SSN, but also your credit score, or sexual orientation), we need to think about policy solutions that can balance the benefits and risks of peer-based face recognition. Self-regulation, or opt-in mechanisms, are not going to work, since the results we presented are based on publicly available information.
One of the big names in facial recognition, PittPatt (short for Pittsburgh Pattern Recognition), was developed at CMU, spun out as a start-up by roboticist Henry Schneiderman, and then gobbled up by the Googleplex two years ago. Google has said it doesn’t plan to do anything skeevy with facial recognition (unlike Buzz, of course), and has asked others not to either, but once Pandora’s box is open....
Leaving civil liberties aside (the slippery-slope argument here is genuinely chilling), the simultaneous rise of your mug out there in electronic form and of machines that can recognize it offers a benefit in at least one arena: identifying the dead.
That’s one of the hopes of Acquisti’s peers at Carnegie Mellon University’s Center for Human Rights Science, who would love to apply their school’s proven expertise in facial recognition to identifying those killed in Syria’s civil war. I wrote last week about CMU teaming up with the Human Rights Data Analysis Group to give a better accounting of the dead, in Syria and in future war zones, by using advanced algorithms on data collected on the ground.
Statistician Stephen Fienberg expressed hope that faces flickering in cyberspace, and he was thinking specifically of social media, which already uses much of the relevant technology, might be used to identify, or to confirm the identities of, bodies found in conflict areas, whether on the mean streets of Aleppo or of Racine.
“All of that technology is potentially of use here,” Fienberg said. “While our first pass at things will be to only work with quantitative data in the HRDAG files, I’m actually hoping that once we’ve gone through a couple of rounds with them and focused on some specific issues that they’re very concerned with, that we might actually, moving forward, try to exploit the images.”
HRDAG’s director of research, Megan Price, says it’s not in her group’s wheelhouse, at least not yet, “but as a scientist and a researcher it’s something I get excited about too.”