Should Los Angeles County Predict Which Children Will Become Criminals?

One major difference separates troubling Minority Report-style policing programs from what happened in L.A. County’s child welfare system.

One of the primary goals of Los Angeles County’s child welfare system is keeping kids out of lock-up. But in this pursuit, the county took a surprising step: It used a predictive analytics tool as part of a program to identify which specific kids might end up behind bars.

The process wasn’t especially complicated: It involved analyzing data about a child’s family, arrests, drug use, academic performance, and abuse history. But the goal was abundantly clear: separating the good kids from the potentially bad.* 

Though the program (officially dubbed the Los Angeles County Delinquency Prevention Pilot, or DPP) ended in 2014, a new report from the National Council on Crime and Delinquency (NCCD) examines how it functioned. The report suggests not only that L.A. County’s strategy was on the right path, but also that government agencies across the country should consider testing similar programs.

The report might be right, but the DPP raises some troubling issues.

When people talk about predictive analytics—whether it’s in reference to policing, banking, gas drilling, or whatever else—they’re often talking about identifying trends: using predictive tools to intuit how groups of people and/or objects might behave in the future. But that’s changing.

In a growing number of places, prediction is getting more personal. In Chicago, for example, there’s the “heat list”—a Chicago Police Department project designed to identify the Chicagoans most likely to be involved in a shooting. In some state prison systems, analysts are working on projects designed to identify which particular prisoners will re-offend. And in 2014, Rochester, Minnesota, rolled out its own version of L.A. County’s DPP program—one run by cops and spearheaded by IBM—that offered the public just enough information to cause concern.**

“It’s worrisome,” says Andrew G. Ferguson, a law professor at the University of the District of Columbia who studies and writes about predictive policing. “You don’t want a cop arresting anyone when they haven’t done anything wrong. The idea that some of these programs are branching into child welfare systems—and that kids might get arrested when they haven’t done anything wrong—only raises more questions.”

Ferguson says the threat of arrest poses a problem in all of the country’s most widely reported predictive programs. But he acknowledges that there are valid arguments underpinning each of them.

“The public health model of identifying risk factors in an environment that could bring people into the criminal justice system is a good idea,” Ferguson says. He notes that Chicago’s heat list got its start in academia after sociologists such as Yale professor Andrew Papachristos started thinking about crime and gun violence as though they were bloodborne pathogens or disease epidemics. “They looked at these public-health models and said, ‘This isn’t rocket science. We can identify the people who are most at risk of getting shot, and we can figure out what the risk factors are. Maybe if we get in early—kinda like if we kept you from eating bad foods and smoking we could prevent heart attacks and lung cancer—we can have fewer shootings.’”

It makes sense that such scholarship might pair well with social work, Ferguson says: He can see why one might try to locate people likely to be involved in violence and offer social programs or guidance that could steer them in a more positive direction. What’s strange, Ferguson says, is that in chasing after that good idea, academics in Chicago then began partnering with cops to seek out bad seeds.

Papachristos’ work became the basis for a partnership between a biomedical engineer at the Illinois Institute of Technology and the Chicago Police Department’s top tech cop. Using the lessons Papachristos had learned by studying crime and gun violence in Chicago, and combining them with police data, they created a powerful algorithm. The people it flagged were then visited by social workers offering services such as job placement assistance. But they were also visited by Chicago cops.

By combining academia with law enforcement, “the public health model [was] co-opted by the police,” Ferguson says. “It isn’t just a social worker knocking on your door, it’s a detective [saying,] ‘We know who you are, we know what you’re involved with, we want you to talk with the social workers, but, recognize, if we catch you again, we’re going to bring down the hammer.’”

That’s not what happened in L.A. County, says Shay Bilchik, founder and director of the Center for Juvenile Justice Reform at Georgetown University, which helped the NCCD assess the DPP program there.

In L.A. County, the child welfare system is very crowded; there were nearly 21,000 kids in foster care there in 2014—a figure that had increased almost 11 percent since 2011. When social workers usher children into that system, those workers are often overburdened. And because they’re overburdened, they often miss or ignore factors that might indicate a child should receive special attention.

As a way to address that—to make sure that social workers didn’t ignore those factors—the DPP deployed a 10-factor actuarial screening assessment, an analytics tool that automatically monitored information such as: Had the child been abused before entering the system? Did he or she use drugs? Did the kid have any positive adult role models? The NCCD used pre-existing L.A. County data to design this program, as the organization’s communications director Erin Hanusa explains it, so that it would account for the factors that “bore the strongest relationship to juvenile justice system involvement.” L.A. County analyzed three adolescent cohorts, ranging in size from 70 to 83 kids. The subjects came from four L.A. County districts—Compton, Glendora, Palmdale, and South County—and were between the ages of 10 and 17.*
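An actuarial screen like this is, in spirit, closer to a weighted checklist than to machine learning. Here is a minimal sketch, in Python, of how such a screen might score a case record; the factor names, weights, and cutoff are illustrative assumptions, since the report does not publish the actual DPP instrument in full.

```python
# A weighted-checklist screen: each risk factor present in a case record
# adds to a score, and a score at or above a cutoff flags the case.
# Factor names, weights, and cutoff are illustrative assumptions only.

HYPOTHETICAL_FACTORS = {
    "prior_abuse": 2,        # abused before entering the system
    "substance_use": 2,      # known drug use
    "no_positive_adult": 1,  # lacks a positive adult role model
    "prior_arrest": 3,       # any prior juvenile arrest
    "chronic_truancy": 1,    # persistent absence from school
    # ...the real screen weighed 10 factors; the rest are omitted here.
}

HIGH_RISK_CUTOFF = 4  # assumed threshold, for illustration only


def screen_case(case: dict) -> tuple[int, bool]:
    """Return (score, is_high_risk) for one child's case record."""
    score = sum(
        weight
        for factor, weight in HYPOTHETICAL_FACTORS.items()
        if case.get(factor, False)
    )
    return score, score >= HIGH_RISK_CUTOFF


if __name__ == "__main__":
    example = {"prior_abuse": True, "substance_use": True}
    print(screen_case(example))  # -> (4, True)
```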

That’s when the predictive analytics came into play. A service called SafeMeasures continuously applied the screening assessment to incoming social services data in order to “identify whenever a pattern of case data matching the high-risk pattern came up,” Hanusa says. The program sent out alerts to L.A. County caseworkers whenever it seemed as though a kid might be at risk of committing a juvenile offense.* 
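The report describes what SafeMeasures does here, not how it is built, so the following is only a sketch of the alerting pattern it implies: rescreen every open case on a schedule, and notify the caseworker the first time a case matches the high-risk pattern. The `send_alert` hook and the data shapes are hypothetical, and `screen_case` comes from the sketch above.

```python
# Sketch of the alerting loop described above. SafeMeasures is
# proprietary; these function names and data shapes are hypothetical.

def nightly_scan(open_cases: dict, previously_flagged: set) -> set:
    """Rescreen every open case; alert on cases newly matching the pattern."""
    flagged = set()
    for case_id, case in open_cases.items():
        _, high_risk = screen_case(case)  # from the earlier sketch
        if high_risk:
            flagged.add(case_id)
            if case_id not in previously_flagged:
                send_alert(case_id)
    return flagged


def send_alert(case_id: str) -> None:
    # Stand-in for whatever notification actually reaches the caseworker.
    print(f"ALERT: case {case_id} matches the high-risk pattern")
```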

NCCD owns and sells subscriptions to SafeMeasures, an analytical tool typically used to make sense of huge masses of data and to create reports useful to social service agencies. It might look at a social worker’s caseload and send that worker an alert when he or she hasn’t checked in on a client in a long time. It might also tell a social worker which foster children are in close proximity to active wildfires. Subscriptions to SafeMeasures can run about $330,000 per year, but Hanusa says L.A. County is paying a little less than $105,000 for the service.

Hanusa dismisses the idea that NCCD might have a conflict of interest in assessing the DPP. “NCCD is not assessing a program that NCCD sells services to; therefore, a question of conflict of interest doesn’t apply,” she says. “DPP was an initiative to determine if existing data could be used to identify youth at risk of crossing from the child welfare system to the justice system and thereby enable L.A. County to offer direct services to prevent this.” The implication is that while NCCD may earn revenue from providing a general SafeMeasures subscription to L.A. County, that relationship didn’t play a role in how SafeMeasures was specifically used in the DPP. “NCCD does not provide direct services,” she says.

The DPP might not even sound necessary. As Ferguson put it: “Do you really need a computer to tell you that a kid who used drugs in the past might use drugs in the future?” But Bilchik offers some context.

For many years, child welfare systems identified three core priorities for guiding foster children toward functional adulthood: permanency, safety, and well-being. Safety and permanency hogged 98 percent of the attention, Bilchik says, until about five years ago when professionals started looking more closely at well-being. “What they realized,” he says, “is that when we ignore well-being, we miss the mark.”

“We’re talking about workers who are generally plagued by high caseloads,” Bilchik continues. “So, they’re rolling from case to case. And they’re rolling in a way that is predominantly about safety and permanency. The analytical tool allows the system to do a scan and say, from senior management down to mid-level supervisors, ‘Here are 25 kids that our scan shows may be at higher risk of crossing over [from the child welfare system into the juvenile justice system]. Let’s pull those files and take a look to see if we’re missing something.’”

What the workers may be missing could be something as simple as choir practice, Bilchik says.

For instance: A lot of child welfare systems will contractually obligate a caregiver to take a minor to school at 8 a.m. and then take them back home at 3 p.m. But after-school activities generally happen after 3 p.m., and they can be crucial to forging positive relationships. “So you’ll have people that say, ‘I know you love lacrosse. I know you want to be in the band or in the choir. But I’m not required to take you to those activities and I’m not doing it,’” Bilchik explains. If the analytical tool doesn’t identify a kid as being at risk, a caseworker might not ask extra questions, and might not realize that the caregiver’s contract should be amended to let the kid stay at school until, say, 4 p.m., or whenever choir practice ends.

“Our goal is to get them as close to a normal life as possible,” Bilchik says. “That’s when we have the best outcomes.”

Did the DPP program actually work?

It’s unclear. The NCCD report shows that each of the three cohorts encountered problems that prevented assessors from definitively gauging whether the program worked or not.

The first cohort of kids was the baseline: Between October and December 2012, the program was just getting started. The actuarial screening assessment had been developed and workers did their best to collect information on 83 kids, as well as those kids’ institutional records from schools and juvenile justice programs. But the workers had trouble getting complete information. “[T]he collection of service delivery data was limited and workers found it challenging to obtain data from providers or other service delivery entities, such as the public schools,” the NCCD writes.*

Cohort two was the control: Between the end of January and the beginning of May 2013, 77 kids were provided “services as usual.” They were not subjected to the data screenings.* 

Cohort three was the real test: From January 2014 through early May 2014, the program analyzed data on 70 kids. The SafeMeasures analytical program automatically sent out alerts for at-risk kids, as it was designed to do. Workers then intervened in whatever ways they could to try to alleviate stressors, get kids to after-school activities where applicable, and generally make sure those kids didn’t end up in trouble.* 

At the end of the day, cohort three had the smallest number of arrests. But it was difficult to determine whether the DPP program was responsible for that, or whether outside factors played a role. Here’s how the NCCD report explains it: “NCCD staff cannot say that the interventions provided to cohort three are responsible for the lower outcomes because we do not have sufficient service delivery data to determine whether the interventions provided to cohort three were different from or better than the interventions provided to the other two cohorts.”

The NCCD report did suggest, however, that the DPP program should continue:

The program design has a strong theoretical basis…. Recidivism was less prevalent among cohort three than among any other cohort. Findings to date suggest that with increased fidelity to a comprehensive implementation plan, adequate and appropriate services to meet youth and family needs, and agency and staff commitment, it is reasonable to expect that a rigorous long-term impact evaluation might show the effectiveness of targeting interventions to youth identified as being at high risk of juvenile justice involvement.

Bilchik maintains that the difference between Chicago’s heat list and the DPP program is that the latter does not involve cops or the criminal justice system.

But it does at least seem to involve communication with the juvenile justice system, in order to track prior arrests. Ferguson remains skeptical about it—and he worries that data from programs like the DPP could eventually leak into the court system.

“When you connect these database lists with the power of social services through the court system, even if it’s through the greatest most benign good in the world, you are putting the coercive power of the state on individuals,” Ferguson says. “And if you’re going to do that, you better have processes in place to make sure [that database is] right, you better have mechanisms to challenge it, and you better have some sort of accountability and transparency so you can figure out whether it’s being misused.”

“Understanding risk factors for juvenile delinquency and involvement in the criminal justice system is a positive thing—more information is better,” Ferguson adds. “The questions are: Who controls that data, and whether it’s going to be used by law enforcement.”

Other questions stand out as well: The scope of the tests was limited to three distinct and relatively short time periods, but what happens to the data from the results of the SafeMeasures analysis? From the report, it’s not totally clear that information was expunged—or that it would be expunged if this program were rolled out at other government agencies. Furthermore: Could a cop who’s pursuing a suspect—or a prosecutor in the midst of a case—consult the data, and, in so doing, introduce it into the court system? These questions and more go unanswered by the report, but Hanusa says this wouldn’t happen. “[T]he flags used to help staff identify children at risk of subsequent juvenile justice involvement are not part of the official record,” Hanusa says. “These flag, or alert, data are updated on a nightly basis to accommodate changes in the underlying data, and eventually deleted a short time after the case is closed. The only way for this information to end up in court records would be if a staff person copied it there or otherwise made a notation.”*
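In engineering terms, what Hanusa describes is a side store of flags that is rebuilt each night and purged after a case closes, kept apart from the official case record. Here is a sketch of that lifecycle, with assumed names and an assumed retention window, since the report specifies only “a short time after the case is closed.”

```python
from datetime import date, timedelta

# Sketch of the flag lifecycle described above: flags live outside the
# official record, are rebuilt nightly, and are purged after closure.
RETENTION_AFTER_CLOSE = timedelta(days=30)  # assumed; not specified


def refresh_flags(cases: dict, flag_store: dict, today: date) -> None:
    """Nightly job: recompute each case's flag, purging long-closed cases."""
    for case_id, case in cases.items():
        closed_on = case.get("closed_on")  # a datetime.date, or None if open
        if closed_on is not None and today - closed_on > RETENTION_AFTER_CLOSE:
            flag_store.pop(case_id, None)  # expunged; never part of the record
        else:
            _, high_risk = screen_case(case)  # from the earlier sketch
            flag_store[case_id] = high_risk   # overwritten nightly, no history
```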

Bilchik isn’t blind to the potential downsides.

“Believe me, I fear the widening of the net. I fear the labeling phenomenon,” Bilchik says, referencing the idea that once a person is labeled a juvenile offender—or a drug user, or an abused kid—that label can remain with that person. “I bring it up all the time with NCCD. But I feel we can insulate ourselves from those things in this program.”

“I look at the example of Chicago and I say, ‘If we end up there, we are 180 degrees from where we wanted to be.’”

(Photo: thomashawk/Flickr)

*UPDATE — January 21, 2016: This article has been updated to more accurately describe the DPP program’s methodology and expungement mechanisms. 

**UPDATE — February 2, 2016: This article has been updated to reflect that Minnesota was the state where a program similar to the DPP was conducted. 
