Brain-Scan Lie Detectors Just Don’t Work

by Lauren Kirchner

It sounds just like something out of a sci-fi police procedural show — and not necessarily a good one.

In a darkened room, a scientist in a white lab coat attaches a web of suction cups, wires, and electrodes to a crime suspect’s head. The suspect doesn’t blink as he tells the detectives interrogating him, “I didn’t do it.”

The grizzled head detective bangs his fist on the table. “We know you did!” he yells.

The scientist checks his machine. “Either he’s telling the truth … or he’s actively suppressing his memories of the crime,” he says.

“Dammit,” says the detective, shaking his head, “this one’s good.”

But it isn’t fiction. Some law enforcement agencies really are using brain-scan lie detectors, and it really is possible to beat them, new research shows.

The polygraph, the more familiar lie detection method, works by “simultaneously recording changes in several physiological variables such as blood pressure, pulse rate, respiration, electrodermal activity,” according to a very intriguing group called the International League of Polygraph Examiners. Despite what the League (and television) might have you believe, polygraph results are generally believed to be unreliable, and are only admitted as evidence in U.S. courts in very specific circumstances.

The brain-scan “guilt detection test” is a newer technology that supposedly measures electrical activity in the brain, which would be triggered by specific memories during an interrogation. “When presented with reminders of their crime, it was previously assumed that their brain would automatically and uncontrollably recognize these details,” explains a new study published last week by psychologists at the University of Cambridge. “Using scans of the brain’s electrical activity, this recognition would be observable, recording a ‘guilty’ response.”
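
To make the mechanism concrete, here is a deliberately toy sketch of the comparison such a test relies on: measure the brain’s response to crime-specific “probe” details against a baseline of irrelevant details, and flag “recognition” when the probe response stands out. Every number, threshold, and name below is invented for illustration; this is not the Cambridge team’s actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated brain-response amplitudes (arbitrary units) to items shown
# during questioning. "Probe" items are crime details only the perpetrator
# would recognize; "irrelevant" items are plausible but unrelated.
irrelevant = rng.normal(loc=2.0, scale=1.0, size=40)  # baseline responses
probes = rng.normal(loc=4.5, scale=1.0, size=10)      # recognition bump

def flags_recognition(probes, irrelevant, z_threshold=2.0):
    """Flag 'recognition' when the mean probe response sits well above
    the irrelevant-item baseline, measured in standard deviations."""
    z = (probes.mean() - irrelevant.mean()) / irrelevant.std(ddof=1)
    return z > z_threshold, z

flagged, z = flags_recognition(probes, irrelevant)
print(f"z = {z:.2f}, recognition flagged: {flagged}")

# A subject who suppresses the memory pushes probe responses back toward
# baseline -- and the flag quietly disappears, which is the study's point.
suppressed = rng.normal(loc=2.3, scale=1.0, size=10)
flagged, z = flags_recognition(suppressed, irrelevant)
print(f"z = {z:.2f}, recognition flagged: {flagged}")
```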

Law enforcement agencies in Japan and India have started to use this tool to solve crimes, and even to try suspects in court. These types of tests have not caught on with law enforcement in the U.S., though they are commercially available here. That’s probably a good thing; the researchers of this study found that “some people can intentionally and voluntarily suppress unwanted memories.”

The experiment was pretty straightforward, and the participants were no criminal masterminds. Ordinary people staged mock crimes and then were asked to “suppress” their “crime memories,” all while having their brains scanned for electrical activity. Most people could do it, the researchers found: “a significant proportion of people managed to reduce their brain’s recognition response and appear innocent.”

Not everyone could, though. “Interestingly, not everyone was able to suppress their memories of the crime well enough to beat the system,” said Dr. Michael Anderson, of the Medical Research Council Cognition and Brain Sciences Unit in Cambridge. “Clearly, more research is needed to identify why some people were much more effective than others.”

Separate studies on guilt-detection scans, conducted by cognitive neuroscientists at Stanford University, had similar findings. Anthony Wagner at Stanford’s Memory Lab had study participants take thousands of digital photos of their daily activities for several weeks. Wagner and his colleagues then showed sequences of photos to the participants, and measured their brain activity while the participants saw both familiar and unfamiliar photos.

The researchers could identify which photos were familiar to the participants and which ones were not, with 91 percent accuracy, Wagner said. However, when the researchers told the participants to try to actively suppress their recognition of the photos that were theirs — to “try to beat the system” — the researchers had much less success.

Scientists still don’t know how this “suppression” actually works; like so many questions about the inner workings of the human brain, it remains a mystery. But the fact that so many test subjects could, somehow, do it on command led the authors of both the Cambridge and Stanford studies to the same conclusion.

In short, brain-scan guilt-detection type tests are beatable, their results are unreliable, and they shouldn’t be used as evidence in court. Except on television.

Where’s the Evidence That Mass Surveillance Actually Works?

by Lauren Kirchner

CIA Director John Brennan answers questions after delivering remarks at the Center for Strategic and International Studies, November 16, 2015. (Photo: Win McNamee/Getty Images)

Current and former government officials have been pointing to the terror attacks in Paris as justification for mass surveillance programs. Central Intelligence Agency Director John Brennan accused privacy advocates of “hand-wringing” that has made “our ability collectively internationally to find these terrorists much more challenging.” Former National Security Agency and CIA director Michael Hayden said, “In the wake of Paris, a big stack of metadata doesn’t seem to be the scariest thing in the room.”

Ultimately, it’s impossible to know just how successful sweeping surveillance has been, since much of the work is secret. But what has been disclosed so far suggests the programs have been of limited value. Here’s a round-up of what we know.

An internal review of the Bush administration’s warrantless surveillance program — code-named Stellarwind — found it resulted in few useful leads from 2001–04, and none after that. New York Times reporter Charlie Savage obtained the findings through a Freedom of Information Act lawsuit and published them in his new book, Power Wars: Inside Obama’s Post–9/11 Presidency:

[The FBI general counsel] defined as useful those [leads] that made a substantive contribution to identifying a terrorist, or identifying a potential confidential informant. Just 1.2 percent of them fit that category. In 2006, she conducted a comprehensive study of all the leads generated from the content basket of Stellarwind between March 2004 and January 2006 and discovered that zero of those had been useful.

In an endnote, Savage then added:

The program was generating numerous tips to the FBI about suspicious phone numbers and e-mail addresses, and it was the job of the FBI field offices to pursue those leads and scrutinize the people behind them. (The tips were so frequent and such a waste of time that the field offices reported back, in frustration, “You’re sending us garbage.”)

In 2013, the President’s Review Group on Intelligence and Communications Technologies analyzed terrorism cases from 2001 on, and determined that the NSA’s bulk collection of phone records “was not essential to preventing attacks.” According to the group’s report:

In at least 48 instances, traditional surveillance warrants obtained from the Foreign Intelligence Surveillance Court were used to obtain evidence through intercepts of phone calls and e-mails, said the researchers, whose results are in an online database.
More than half of the cases were initiated as a result of traditional investigative tools. The most common was a community or family tip to the authorities. Other methods included the use of informants, a suspicious-activity report filed by a business or community member to the FBI, or information turned up in investigations of non-terrorism cases.

A 2014 report by the non-profit New America Foundation echoed those conclusions. It described the government claims about the success of surveillance programs in the wake of the 9/11 attacks as “overblown and even misleading.”

An in-depth analysis of 225 individuals recruited by al-Qaeda or a like-minded group or inspired by al-Qaeda’s ideology, and charged in the United States with an act of terrorism since 9/11, demonstrates that traditional investigative methods, such as the use of informants, tips from local communities, and targeted intelligence operations, provided the initial impetus for investigations in the majority of cases, while the contribution of NSA’s bulk surveillance programs to these cases was minimal.

Edward Snowden’s leaks about the scope of the NSA’s surveillance system in the summer of 2013 put government officials on the defensive. Many politicians and media outlets echoed the agency’s claim that it had successfully thwarted more than 50 terror attacks. ProPublica examined the claim and found “no evidence that the oft-cited figure is accurate.”

It’s impossible to assess the role NSA surveillance played in the 54 cases because, while the agency has provided a full list to Congress, it remains classified.

The NSA has publicly discussed four cases, and just one in which surveillance made a significant difference. That case involved a San Diego taxi driver named Basaaly Moalin, who sent $8,500 to the Somali terrorist group al-Shabab. But even the details of that case are murky. From the Washington Post:

In 2009, an FBI field intelligence group assessed that Moalin’s support for al-Shabab was not ideological. Rather, according to an FBI document provided to his defense team, Moalin probably sent money to an al-Shabab leader out of “tribal affiliation” and to “promote his own status” with tribal elders.

Also in the months after the Snowden revelations, the Justice Department said publicly that it had used warrantless wiretapping to gather evidence in a criminal case against another terrorist sympathizer, which fueled ongoing debates over the constitutionality of those methods. From the New York Times:

Prosecutors filed such a notice late Friday in the case of Jamshid Muhtorov, who was charged in Colorado in January 2012 with providing material support to the Islamic Jihad Union, a designated terrorist organization based in Uzbekistan.
Mr. Muhtorov is accused of planning to travel abroad to join the militants and has pleaded not guilty. A criminal complaint against him showed that much of the government’s case was based on intercepted e-mails and phone calls.

Local police departments have also acknowledged the limitations of mass surveillance, as Boston Police Commissioner Ed Davis did after the Boston Marathon bombings in 2013. Federal authorities had received Russian intelligence reports about bomber Tamerlan Tsarnaev, but had not shared this information with authorities in Massachusetts or Boston. During a House Homeland Security Committee hearing, Davis said:

There’s no computer that’s going to spit out a terrorist’s name. It’s the community being involved in the conversation and being appropriately open to communicating with law enforcement when something awry is identified. That really needs to happen and should be our first step.

This story originally appeared on ProPublica as “What’s the Evidence That Mass Surveillance Works? Not Much” and is re-published here under a Creative Commons license.

What We Know About the Computer Formulas Making Decisions in Your Life

by Lauren Kirchner

(Photo: McIek/Shutterstock)

We recently reported on a study of Uber’s dynamic pricing scheme that investigated the company’s surge-pricing patterns in Manhattan and San Francisco and showed riders how they could potentially avoid higher prices. The study’s authors shed some light on Uber’s “black box,” the algorithm that automatically sets prices but that is inaccessible to both drivers and riders.

That’s just one of the nearly endless number of algorithms we use every day. These formulas influence far more than your Google search results or your Facebook News Feed. Sophisticated algorithms are now being used to make decisions in everything from criminal justice to education.

But when big data uses bad data, discrimination can result. Federal Trade Commission chairwoman Edith Ramirez recently called for “algorithmic transparency,” since algorithms can contain “embedded assumptions that lead to adverse impacts that reinforce inequality.”

Here are a few good stories that have contributed to our understanding of this relatively new field.

“Websites Vary Prices, Deals Based on Users’ Information”
Wall Street Journal
December 24, 2012

The Journal staff (including Julia Angwin, now a reporter at ProPublica) showed that Staples was giving online customers different prices for the same products depending on how close those customers were to competitors’ stores. Offering different prices to different customers is not illegal, the article points out. “But using geography as a pricing tool can also reinforce patterns that e-commerce had promised to erase: prices that are higher in areas with less competition, including rural or poor areas. It diminishes the Internet’s role as an equalizer.”
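
For illustration only, here is a minimal sketch of the kind of geography-based pricing rule the Journal’s testing implied: quote a discount only when a rival store is nearby. The discount, radius, prices, and coordinates are all invented.

```python
from math import asin, cos, radians, sin, sqrt

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))

def quoted_price(list_price, customer, rival_stores, radius_miles=20):
    """Show the discounted price only if a rival store is within range."""
    near_rival = any(
        miles_between(*customer, *store) <= radius_miles
        for store in rival_stores
    )
    return round(list_price * 0.9, 2) if near_rival else list_price

rivals = [(40.7128, -74.0060)]  # an invented rival location in Manhattan
print(quoted_price(15.79, (40.73, -74.00), rivals))   # city shopper: 14.21
print(quoted_price(15.79, (44.26, -72.58), rivals))   # rural shopper: 15.79
```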

“Chicago Police Use ‘Heat List’ as Strategy to Prevent Violence”
Chicago Tribune
August 21, 2013

Chicago’s police department is at the forefront of “predictive policing” — the idea that police can prevent crimes using a combination of mathematical analysis and careful interventions. Chicago’s “heat list” analyzes residents’ social networks and criminal records to identify people who are most at risk of either perpetrating or falling victim to future violence. (A TechCrunch piece this year discussed some of the thorny problems of bias that this raises.)

“When Algorithms Discriminate”
New York Times
July 9, 2015

A recent Carnegie Mellon study found that Google was showing ads for high-paying jobs to more men than women. Another study, from Harvard, showed that Google searches for “black-sounding” names yielded suggestions for arrest-record sites more often than other types of names. Algorithms are often described as “neutral” and “mathematical,” but as these experiments suggest, they can also reproduce and even reinforce bias.

“How Tech’s Lack of Diversity Leads to Racist Software”
San Francisco Chronicle
July 22, 2015

The Internet erupted in anger after images of African Americans on Google Photos and Flickr were automatically tagged as “gorillas.” The Chronicle found two underlying issues: the data that programmers use to “teach” algorithmic software matters, and so does the diversity of the Silicon Valley companies that do the teaching. “Not enough photos of African Americans were fed into the program that it could recognize a black person. And there probably weren’t enough black people involved in testing the program to flag the issue before it was released.”

“The New Science of Sentencing”
Marshall Project and FiveThirtyEight
August 4, 2015

“Risk assessment” scores are being used at different stages of the criminal justice system, to help evaluate whether defendants and inmates will commit crimes in the future. The formulas include things like a person’s age, employment history, and even the criminal records of family members. But is it fair to score people based on not only their own past criminal behavior, but on statistics about other people who fit the same profile? And should these scores be used to help determine their sentences?
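
The mechanics are usually a weighted checklist. Below is a toy version with invented weights, just to show how two people with identical criminal histories can score differently on factors outside their control; it is not any actual vendor’s formula.

```python
# Invented weights for the kinds of factors named above.
def risk_score(person):
    score = 0
    score += 2 if person["age"] < 25 else 0            # youth
    score += 1 if not person["employed"] else 0        # employment history
    score += 3 * person["prior_convictions"]           # own record
    score += 1 if person["family_has_record"] else 0   # other people's record
    return score

# Identical criminal histories, different scores -- driven entirely by
# age, employment, and relatives.
a = {"age": 22, "employed": False, "prior_convictions": 1, "family_has_record": True}
b = {"age": 40, "employed": True,  "prior_convictions": 1, "family_has_record": False}
print(risk_score(a), risk_score(b))  # 7 vs. 3
```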

“Complex Car Software Becomes the Weak Spot Under the Hood”
New York Times
September 26, 2015

Volkswagen recently admitted to rigging the software in millions of its diesel cars to cheat on emissions tests. The Times points out that some new cars now contain computer software that’s more complex than the Large Hadron Collider. Along with increased convenience and safety, the endless lines of code also make it hard for regulators to keep up.

“New Tool Can Identify Soldiers Most Likely to Commit Violent Crimes, Study Shows”
Los Angeles Times
October 6, 2015

Researchers analyzed hundreds of thousands of military records to create an algorithm that they say the United States Army can use to find the soldiers who are at the greatest risk of committing violent crimes. “For men, who accounted for the vast majority of both soldiers and offenders, 24 factors were found to be at play. Those most at risk were young, poor, ethnic minorities with low ranks, disciplinary trouble, a suicide attempt and a recent demotion.”

“Can Analytics Help DCF?”
Boston Globe
October 7, 2015

Massachusetts’ child welfare system is considering adopting “predictive analytics” software to help caseworkers identify the children and families who are at the greatest risk of abuse. Higher “risk scores” are assigned to people with more extensive criminal records, previous drug addictions, previous mental health problems, and other factors. Critics of the plan, like the ACLU’s Kade Crockford, argue that this technology risks “disproportionately ensnaring the poor and parents of color.”

“Be Suspicious of Online Movie Ratings, Especially Fandango’s”
FiveThirtyEight
October 15, 2015

In a notable example of reporters keeping algorithms accountable, a FiveThirtyEight analysis found that Fandango was skewing movie ratings upward. The site, which sells movie tickets, “uses a five-star rating system in which almost no movie gets fewer than three stars.” Confronted with these results, Fandango said that this was due to an error in its “rounding algorithm,” and promised to fix it.
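
FiveThirtyEight’s core finding was that the site’s displayed ratings rounded up to the next half star rather than to the nearest one. A minimal sketch of the difference:

```python
import math

def nearest_half(rating):
    """Conventional rounding: 4.1 stars displays as 4.0."""
    return round(rating * 2) / 2

def round_up_half(rating):
    """Always round up to the next half star: 4.1 displays as 4.5."""
    return math.ceil(rating * 2) / 2

for r in (3.6, 4.1, 4.5):
    print(f"{r} -> nearest: {nearest_half(r)}, rounded up: {round_up_half(r)}")
```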

This story originally appeared on ProPublica as “What We Know About the Computer Formulas Making Decisions in Your Life” and is re-published here under a Creative Commons license.

Your Smart Home Probably Knows More About You Than You Think It Does

by Lauren Kirchner

(Photo: Alexander Kirch/Shutterstock)

How much does your smart home know about you? That was the question that Charles Givre, a data scientist at Booz Allen Hamilton, set out to answer in a recent experiment. Givre has an account on Wink, a platform designed to control, from a single screen, his Internet-connected home devices, such as door locks, window shades, and LED lights. He wanted to find out just how much could be learned from his usage data. The answer: a little too much.

Recently, at a big data conference in New York, Givre presented his results. By accessing his Wink account, he (or anyone with his login information) could identify his social media accounts, the names of his devices (like “Charles’s iPad”), and his network information. An app that monitors his grill’s propane tank recorded the tank’s latitude and longitude, thus revealing the exact location of his house. From his Nest thermostat, he could figure out when his house was occupied and when it was not.

The goal of his experiment, Givre said, was not to demonstrate security flaws in his devices, but to document the wealth of information that they amass through everyday use. To access his usage history, some accounts required verification keys; others only asked for Givre’s email address and password. He wrote programs to ping his devices to gather new information about what was going on in his home in real time, and to find patterns there. He noted that his smart devices seemed to transmit information securely on its way to the companies’ servers, “but most of the interesting stuff was in the cloud anyway.”
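
Here is a generic sketch of what such a polling script can look like, with a hypothetical endpoint, credential, and field names. This is not Wink’s real API, just the shape of the technique the article describes.

```python
import time
import requests

API_BASE = "https://api.example-smarthome.com"  # hypothetical endpoint
TOKEN = "oauth-token-obtained-at-login"         # hypothetical credential

def fetch_device_states():
    """Pull the current state of every device on the account."""
    resp = requests.get(
        f"{API_BASE}/devices",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Poll on a schedule and keep a timestamped log; run long enough, the log
# becomes a record of when the house is occupied.
log = []
for _ in range(3):  # Givre ran the equivalent for weeks, not minutes
    for device in fetch_device_states():
        log.append({
            "polled_at": time.time(),
            "device": device.get("name"),
            "state": device.get("state"),
        })
    time.sleep(60)
```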

As the trend toward networked smart homes and connected cars continues, security precautions are more important than ever. The Federal Trade Commission put out a report this year with best practices about how companies should notify their customers about data retention. Device makers say that customers can opt in or out of sharing their personal information with developers and third-party apps. But customers may not always be aware of just how much information their devices are collecting about them in the first place.

The account for Givre’s “Automatic” device, which plugs into his car and tracks its trips and performance, included his car’s vehicle identification number, which makes accident and ownership history easy to look up. He had also hooked his Automatic account to the Web-based service IFTTT (“If This Then That”), which connects smart devices with shortcuts and triggers like “when the ‘Automatic’ device senses my car is home, turn on the lights.”

Interconnectedness, while convenient, is a trade-off. This portion of the experiment demonstrated how someone could leapfrog from one less-secure account to other accounts with more sensitive information. IFTTT collected his individual car trips in spreadsheets — including times, locations, and even the exact routes he had taken — and protected this information only with an email address and password.

“If you were to start aggregating this over time, you could get a frighteningly accurate picture of pretty much where I am at any given time of day,” Givre said.

In fact, this data could also help build a character profile of someone. At the conference, Givre showed a graph of his car-trip frequencies by day of the week; there was a noticeable lack of activity on Saturdays. Why could that be? “I don’t roll on Shabbos,” Givre said, quoting The Big Lebowski.
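
The aggregation itself is trivial, which is part of the point. Assuming a trips.csv export with one row per trip and a start_time column (a hypothetical schema, standing in for the spreadsheets IFTTT produced), a few lines of pandas reproduce Givre’s weekday chart:

```python
import pandas as pd

# Hypothetical export: one row per car trip, with a start_time column.
trips = pd.read_csv("trips.csv", parse_dates=["start_time"])

weekdays = ["Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday", "Sunday"]
by_day = (
    trips["start_time"]
    .dt.day_name()        # timestamp -> weekday name
    .value_counts()       # trips per weekday
    .reindex(weekdays, fill_value=0)
)
print(by_day)  # a near-empty Saturday row gives away a weekly observance
```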

When asked about Givre’s findings this week, a spokesperson from Wink emphasized that each customer can only access his or her own account information. “Users should not share their passwords with others or grant access to untrusted applications,” he wrote. A spokesperson from Nest wrote, “Customers have complete control” over what types of information developers would have access to, “and can stop sharing at any time.”

Buckley Slender-White, a spokesperson from Automatic, said Givre’s car’s VIN was only accessible to the app because Givre had opted to share it. As to Automatic’s sending his car trip information to IFTTT, Slender-White said, “importantly — that data is only accessible to the user and any app that they explicitly grant permission to.” Wink, Nest, and Automatic address security and privacy concerns on their websites and suggest best practices to keep account information safe. (Attempts to reach the grilling app and IFTTT were unsuccessful.)

Smart home devices are part of an industry called the Internet of Things, which attaches data-collecting sensors to objects in order to track, measure, or remote-control them. While the technology involved is not new, the industry is still young. Last summer, Ben Kaufman, the founder of Wink’s former parent company Quirky, told the New York Times that the Internet of Things is “still for hackers, early adopters and rich people.” But the industry continues to grow. “I think consumers need to understand that their relationship with their devices is fundamentally going to change,” Givre said.

This post originally appeared on ProPublica as “Your Smart Home Knows a Lot About You” and is re-published here under a Creative Commons license.

The Best Defense Is a Good Offense: The Department of Justice Is Challenging Local Public Defense Programs

by Lauren Kirchner

Department of Justice headquarters in Washington, D.C. (Photo: blvdone/Shutterstock)

Shortly before Attorney General Eric Holder announced his resignation last September, he told an interviewer: “Any attorney general who is not an activist is not doing his or her job.” One of Holder’s more activist initiatives received attention recently when the New York Times highlighted how Holder’s Justice Department began the novel practice of filing arguments in state and county courts.

“[N]either career Justice Department officials nor longtime advocates can recall such a concerted effort to insert the federal government into local civil rights cases,” Matt Apuzzo wrote for the Times.

The agency has used so-called “statements of interest” to file arguments in existing court cases — sometimes cases brought by the ACLU, Equal Justice Under Law, or other advocacy groups. One issue that’s garnered particular attention from Justice Department lawyers is fair access to legal defense, a right guaranteed by the Sixth and Fourteenth Amendments. The DOJ’s Civil Rights Division has filed four such statements in the past two years, a time in which bipartisan support has emerged for a renewed examination of how local and state governments are providing legal representation to the poor. The department maintains that it does not take a position on the facts of any given case; instead, it argues larger points about civil rights issues with national implications.

“It’s very much like having an amicus brief, but it’s an amicus brief by the United States Department of Justice,” said Norman Reimer, executive director of the National Association of Criminal Defense Lawyers. “That carries a lot of weight. No municipality or state wants to be found to be violating Constitutional rights in the eyes of the Justice Department.”

As the Times story shows, local prosecutors and defense attorneys for the cities and states that suddenly come under this national microscope may not appreciate the attention, however. Nor do they necessarily agree with the Justice Department’s premise that it is not taking sides in the cases at hand. Scott G. Thomas, the attorney who defended Burlington, Washington, in a suit challenging the city’s indigent defense program, objected to the way the case turned Burlington into a political symbol, telling Apuzzo, “it’s the Department of Justice putting their finger on the scale.”

Joshua Marquis, the elected district attorney in Clatsop County, Oregon, who also serves on the executive committee of the board of directors of the National District Attorneys Association, considers problematic indigent defense systems more episodic than epidemic. “The idea that this is somehow symptomatic of some sort of major civil rights emergency in America is just plain crazy,” he said. Where smaller jurisdictions lack funding for indigent defense, it follows that the prosecutors in those same jurisdictions lack funding too. “To me, that’s just as dire a problem,” said Marquis, “and since, frankly, most victims are poor people and people of color, I would be really impressed to see the United States Justice Department pick that up.”

The Supreme Court ruled in the 1963 case Gideon v. Wainwright that each state had to establish means of representation for defendants who couldn’t afford it themselves. But the federal government only provides best practices, grants, and training; it’s left to the states to decide how to interpret Gideon’s mandate and how much money to allocate to it. Some states leave the decisions about indigent defense and funding for it entirely to counties. As a result, the quality of one’s counsel heavily depends on the location of the alleged crime.

“It’s very difficult to explain the patchwork quilt that is the right to counsel in America,” said David Carroll, executive director of the Sixth Amendment Center, an advocacy group for indigent defense. “People watch TV cop dramas, where everyone asks for a lawyer in police lockup, and they come back from commercial break, and there’s the lawyer…. The difference between what they believe and what’s actually happening is very broad.”

The gap between what many Americans consider adequate defense and the reality on the ground in local courts is what advocates say these lawsuits seek to close. The potential remains for many more investigations and filings as well. “The DOJ could almost take a dart, and throw it at a map, and there would be a problem with indigent defense in that particular place,” said Ernie Lewis, executive director of the National Association for Public Defense. “And I don’t think I’m exaggerating.”

Here are the jurisdictions where DOJ lawyers have filed statements of interest in cases addressing indigent defense:

WASHINGTON (CITIES OF MOUNT VERNON AND BURLINGTON)

In an August 2013 statement of interest in Wilbur v. City of Mount Vernon, the Justice Department asked a federal court in Washington to appoint an “independent monitor” to oversee new reforms to the indigent defense system there. This was the first statement of interest of this kind, and advocates say it had a huge impact — in signaling that the Justice Department was going to enforce this issue in a new way, and in tangible changes to the Washington system as well. The judge in the case “took it and really ran with it, and there’s big changes now happening all across Washington,” said the Sixth Amendment Center’s Carroll.

In the conclusion of his decision, which refers to the 1963 ruling in Gideon, U.S. District Judge Robert S. Lasnik wrote: “The notes of freedom and liberty that emerged from Gideon’s trumpet a half a century ago cannot survive if that trumpet is muted and dented by harsh fiscal measures that reduce the promise to a hollow shell of a hallowed right.”

NEW YORK

Back in 2007, the New York Civil Liberties Union filed a suit on behalf of 20 defendants against the state of New York, arguing that five counties were denying effective counsel to indigent defendants. Ontario, Onondaga, Schuyler, Suffolk, and Washington counties did not have a public defense system or standards in place at the time; they had just contracted with private attorneys on an ad-hoc (and apparently inadequate) basis. The Justice Department joined the suit with a statement of interest in September 2014. A settlement followed within weeks, mandating the creation of a new public defense office, standards for defendant eligibility, and more state funding for the attorneys.

ALABAMA (CITY OF CLANTON)

With its statement of interest in February of this year, the Justice Department joined a lawsuit against the city of Clanton for its practice of setting bail without regard for a defendant’s flight risk or ability to pay. Christy Dawn Varden, a plaintiff in the case, was arrested for shoplifting at Walmart, and a judge assigned her a $2,000 bond — $500 for each of Varden’s four misdemeanor charges. Living on $200 a month in food stamps, Varden could not pay the bond, and so stayed in jail. “By taking action in this case, the Justice Department is sending a clear message: that we will not accept criminal justice procedures that have discriminatory effects,” said Holder in a statement. “We will not hesitate to fight institutionalized injustice wherever it is found.” As a result of the case, the city agreed to reform the way it assigns bail.

GEORGIA

In March, the Justice Department filed a statement of interest addressing the rights of juveniles accused of delinquency in Georgia. The complaint alleged that officials were denying the juvenile defendants’ right to counsel by encouraging the children to waive a right they didn’t really understand they had. It argued that these young defendants were subject to “assembly line justice”; acting Assistant Attorney General for the Civil Rights Division Vanita Gupta said, “The systemic deprivation of counsel for children cannot be tolerated.”

This post originally appeared on ProPublica as “The Best Defense Is Good Offense: DOJ Challenges Local Public Defense Programs” and is re-published here under a Creative Commons license.
