While the law enforcement community widely views American jurisprudence as rich with built-in safeguards, from the right to counsel to the right not to be physically abused by police officers, citizens’ protections aren’t always up to the task. People are sometimes convicted of crimes they didn’t commit.
Righting What’s Wrong in Criminal Justice
Wrongful convictions stem from the belated entrance of scientific rigor into the field of forensics, systemic problems, and the ubiquitous ‘human factor.’ In the coming weeks, a series of stories by crime author Sue Russell looks at why convictions go wrong, at the common reluctance to rectify error, and at innovations to better safeguard justice.
When the errors that led to wrongful convictions are analyzed, recurring themes emerge. Steven Drizin, clinical professor at the Northwestern University School of Law and cofounder of its Center on Wrongful Convictions of Youth, and social psychologist Richard Leo posit that the errors are sequential. And as they stack up, says Drizin, they “develop a momentum that is very difficult to stop.” Safeguards in the system become “like speed bumps at best. They don’t do anything to really slow down that momentum towards a wrongful conviction.”
As they describe in a 2010 paper, in the initial misclassification—“both the first and the most consequential error police will make”—law enforcement decides that an innocent person is guilty. “Coercion error” builds on this as investigators carry out “a guilt-presumptive, accusatory interrogation that invariably involves lies about evidence and often the repeated use of implicit and/or explicit promises and threats …"
The third error, “contamination,” takes place when investigators feed a suspect “misleading specialized knowledge”: non-public details of a crime. A suspect’s or defendant’s mere possession of such “insider” information can later be misread as highly incriminating.
Presuming the innocent guilty, says Drizin, often stems from flawed interrogation training. Much of law enforcement personnel’s training convinces them that they are tantamount to human lie detectors (see more on this in this series' next installment) with superior abilities to “read” guilt or innocence from a suspect’s emotional affect or body language. Deception research by social scientists like Bella DePaulo, however, shows otherwise.
If detectives lock in on a suspect too early, cautions Itiel Dror of the University College London Institute of Cognitive Neuroscience, tunnel vision kicks in along with “escalation of commitment” to their conclusions. And through confirmation bias, the brain seeks facts that confirm existing beliefs while it discounts or disregards information that conflicts.
Meanwhile, many errors in an investigation are effectively buried before a case goes to trial, says Drizin. They’re simply invisible to the types and level of scrutiny a case typically receives as it works its way through the system.
“Trial prosecutors,” he says, “who are often different than the ones who screen the cases, believe that somebody would not be innocent if they had gotten this far in the system.”
And once a suspect falsely confesses after a coercive interrogation, they’re in deep trouble: post-confession, any presumption of innocence simply dies.
When studying DNA exonerations involving false confessions, University of Virginia School of Law professor Brandon L. Garrett, author of Convicting the Innocent: Where Criminal Prosecutions Go Wrong, looked at 32 cases that went to trial and found that “misleading specialized knowledge” was used to help convict innocent defendants in 31 of them.
Drizin says he and Leo would like to demythologize confession, “to take it from its undeserved position as the queen of proofs, and get the system to recognize that it’s just another piece of evidence. And like any kind of evidence, whether it’s eyewitness evidence or physical evidence or forensic evidence, it can be contaminated.”
The danger, he and Leo have written, is that prosecutors, defense attorneys and judges alike can acquire “a tainted perception of all exculpatory evidence, and simply ignore the possibility that a confession may be false.” And they’ve advocated for pre-trial reliability hearings during which a confession’s integrity and trustworthiness are carefully scrutinized by a judge before jurors are impaneled.
Pre-trial hearings are already used to weigh other kinds of evidence. Federal courts and some state courts, for example, vet expert witnesses’ credentials and the credibility, reliability and reasonableness of their proposed testimony before they are allowed to testify.
All these errors—and more besides—bolster the increasingly loud calls for the mandatory video or audio taping of interrogations in their entirety. Drizin believes Garrett’s recent research on contamination is likely to be “the straw that breaks the camel’s back for law enforcement” who resist recording.
However, once in the courtroom, much can still go wrong for an innocent defendant.
Lawyers and expert witnesses often overstate evidence and certainty, perhaps calling hair or tire tracks gathered from a suspect and a crime scene a “match” when they would more accurately be described as merely consistent with one another. Dror finds that expert witnesses are vulnerable to overconfidence, and that jurors are impressed by experts who draw firm conclusions.
“So,” says Dror, “if I am willing to say, ‘Yes, that is 100 percent match on the fingerprint’…my conviction in that statement means more to the jury than my resume, which actually would tell them I’m not even a fingerprint examiner.” Jurors need to make a definite decision to render a verdict, “And an authoritative-sounding expert helps them. That is very convincing psychologically. It gives them what they need.”
Law professor David Faigman, director of the University of California San Francisco/Hastings Consortium on Law, Science and Health Policy, contends that another key issue that must be addressed is the need for today’s criminal lawyers to understand science to do their jobs properly. He co-teaches a class called Scientific Methods for Lawyers.
Faigman says that prior to 2009’s Strengthening Forensic Science in the United States: A Path Forward, a National Academy of Sciences/National Research Council report which spotlighted problems with forensic science, he and Michael J. Saks of the Arizona State University College of Law were part of a small population to have given forensic science the kind of serious scrutiny that had been leveled at polygraph examinations for more than 20 years.
Studies in the mid-1980s and again in 2002 highlighted polygraphs’ weaknesses. And while they’re a common investigative tool, they are kept out of courtrooms because they don’t meet the standard for scientifically acceptable evidence. The U.S. Supreme Court allowed them to be banned from courtroom use in 1998, ruling that it was reasonable to view the tests as insufficiently accurate.
Faigman vividly recalls Justice John Paul Stevens’ response: “Well, polygraphs may not be perfect, but they’re at least as good, if not a whole lot better, than a bunch of the forensic sciences that get in as a matter of course.”
Now that it is forensic science’s turn under the magnifying glass, the question arises: Why does bad forensic science get in while polygraphs did not? Faigman believes it is because forensic evidence is predominantly offered by prosecutors, and most judges are either former prosecutors or certainly sympathetic to their needs. There is also a perception that many prosecutions would founder without it.
“That doesn’t mean you need to allow it at the level that it's allowed,” he says. “But they're probably correct that a lot of prosecutions would collapse. In fact, I had a judge once yell at me because I suggested that arson investigation might not meet the Daubert [admissibility] standard. He got up and he said, 'Well, if we exclude all the arson evidence, then none of these prosecutions will be able to go forward.' And of course, my response was, 'Well, if it's junk, then maybe they should not go forward.' And he didn't think much of me, I don't think.”
Many of forensic science’s challenges arise because it evolved organically to meet crime-solving’s everyday needs, much of it developed and practiced by police officers and lay persons. Putting science into it is critical, as the National Academies of Science report brought into sharp relief.
Hair, fiber and blood spatter analysis are among the categories of evidence now under review after revelations in the Washington Post in April that many defendants and their attorneys were not notified when officials learned that flawed forensic work had raised questions in their cases. The FBI failed to broaden its review of past convictions even as warnings and red flags suggested the problems could be widespread. Kirk Odom, convicted of a sexual assault on hair evidence and later cleared by DNA, was among the most prominent examples.
But last month, the Washington Post reported that the FBI was embarking on its largest-ever post-conviction review, scrutinizing all cases worked on by FBI Laboratory hair and fiber examiners since 1985, and perhaps earlier.
Two days later, the FBI issued a statement clarifying that in their view there is “no reason to believe the FBI Laboratory employed ‘flawed’ forensic techniques” and that microscopic hair comparisons were still being conducted. With the review, they and the Department of Justice “are committed to undertaking a review of historical cases that occurred prior to the regular use of mitochondrial DNA testing to ensure that FBI testimony at trial properly reflects the bounds of the underlying science.”
The Innocence Project promptly announced that it and the National Association of Criminal Defense Lawyers would help with the review. The latter’s then president-elect, Steven D. Benjamin, told the Washington Post that it was “an important collaboration” and a departure from one-sided government reviews that left defendants in the dark.
At Seton Hall University’s law school, D. Michael Risinger addressed forensic science’s “problem children” in a 2009 paper looking at the NAS recommendations’ future. As he wrote:
“The principles relied upon by such techniques are not the products of science, as that term is currently understood, but rather the product of a kind of commonsense generalization derived from experience with the subject matter under examination. Neither the generalizations so derived nor the accuracy of the results arrived at by the practitioners of these disciplines, have ever been subject to the kind of systematic testing that has come to be expected as a part of anything calling itself science. This does not mean that the results arrived at are necessarily always in error, but simply that we have no very good evidence about when they are likely to be in error, and when they are likely to be accurate.”
While the infallibility of forensic science is fetishized on TV procedural dramas like the “CSI” series, old gold standards of forensic conclusiveness have tarnished. Longtime methods for divining proof of arson have been cast out as junk science, and those for “matching” a spent bullet to a specific box of ammunition abandoned. Forensic odontology, which purported to match bitemarks in human flesh to a suspect’s teeth, has now been relegated to the history books.
Catastrophic contamination in crime labs also has shaken faith in the system. In Texas and North Carolina, for example, labs were shut because standards and methodologies were either non-existent or egregiously flawed.
A scathing audit of North Carolina’s State Bureau of Investigation released in August 2010 determined that analysts had overstated or misstated blood test results between 1987 and 2003. North Carolina’s News & Observer revealed that the bureau bent rules beyond the bounds of accepted science to bolster prosecutions in as many as 230 known cases.
Some crime lab safeguards are relatively cheap and simple to introduce. One example is sequential unmasking—criminal justice’s answer to science’s “blind” tests—which shields forensic examiners from potentially biasing information until after they’ve made their initial judgments.
Special Assistant District Attorney David Angel was instrumental in implementing reform in Santa Clara County, California, where law enforcement agencies have had a formal policy and written protocol governing eyewitness identification and lineup procedures for a decade. It includes sequential double-blind lineups and is now considered a model. All interviews and interrogations must also be taped, or investigators must document their reasons for not doing so.
Angel and then D.A. George Kennedy “thought at the time that everyone was going to end up doing this within short order,” Angel recalls. “It’s taken so long for it to percolate up. Honestly, I’m kind of stunned.”
Indeed, to this day, many jurisdictions still lack standardized, written policies on police lineup and identification procedures.
Five years ago in Texas’ Dallas County, District Attorney Craig Watkins pioneered the creation of a “conviction integrity unit” in a district attorney’s office. More are springing up; Santa Clara County followed suit in 2011 with the unit Angel heads.
Angel firmly believes that better-late-than-never justice should be open to prosecution and defense alike, and would rather focus on reform than blame. He teaches a wrongful conviction class alongside Kathleen ‘Cookie’ Ridolfi, director of the Northern California Innocence Project, at the Santa Clara University School of Law.
Righting a wrong is, he says, “such a potent moment.” Although 99 percent of innocence claims might seem unlikely or implausible, there’s always the thought of that 1 percent, he says, “and I think it keeps you going.”