Why Criminal Justice Too Often Goes Criminally Wrong

Although everyone grasps in a general way that “stuff happens” in the U.S. criminal justice system, what most Americans don’t see is that once things do go wrong there is an elaborate architecture in place to guarantee that things will stay wrong.

Suppose you run a system that provides some service or produces some product. And suppose your enterprise is dangerous: it could kill innocent people by mistake, and it can wreck lives even when it doesn’t kill.

It can destroy families—psychologically maim children, tear the hearts out of parents. It can also leave fatal dangers unaddressed.

Suppose your system relies on the performances of lots of humans, and you recognize that error is a permanent element of the human condition.

You’d be thrown back on Safety pioneer James Reason’s observation that although we can’t change the human condition, we can change the conditions under which the humans work.

Your hope would be that you can make your system “resilient.”

Of course, you would work as hard as you could to avoid errors where that is possible. But you would also try to design your system to absorb the impact of those errors that will still inevitably occur—to gracefully extend the system’s safe performance and to encourage recovery after harmful mistakes and violations.

You would be desperate to avoid “brittleness”—the system condition in which every error (you know they’re coming) automatically leads to a shattering catastrophic collapse.

To tolerate brittleness in this context would seem dangerous and immoral.

The American criminal justice system does more than tolerate brittleness. It relentlessly builds brittleness into its design.

Loving Brittleness

American criminal justice comprises a complex system that operates through imperfect humans—cops, forensic scientists, prosecutors, defenders, judges, probation officers—who inevitably make errors that can trigger fatal consequences.

Most people have seen many reports of exonerations by now and recognize that things can go wrong early in investigations.

Even routine television cop shows display the many wrong paths their heroes could have blundered down (although they never do). Caseload pressures are high; circumstances are confusing; information is scarce and hard to uncover; motives are mixed.

A horrific parade of officer-involved deaths has shown us how a mistake in hiring, or in training, or in dispatch, or in the mental health care provided to a “suicide-by-cop” victim, can set a fatal encounter in motion.

But although everyone grasps in a general way that “stuff happens,” what most Americans don’t see is that once things do go wrong there is an elaborate architecture in place to guarantee that things will stay wrong.

The people who run our criminal justice system don’t just tolerate brittleness; they avidly seek it, and they augment it. The fact is, we have at least as many rules and practices designed to ensure brittleness as we have to promote safety.

Swiss Cheese: All Hole and No Slice

James Reason’s central contribution to thinking on Safety was the recognition that a “person-based” approach to accident investigation that searches for and disciplines “bad apples” and then stops there is inadequate.

Reason understood that a nuclear power plant failure—or, I’d add, a wrongful conviction—has to be seen as a system failure, an “organizational accident.” Small slips and violations, each necessary for the tragedy but no single one independently sufficient to cause it, combine with each other and with latent system weaknesses, and then—but only then—the tragedy occurs.

Reason’s key perception was that the latent weaknesses, unlike the active errors and violations committed by “bad apples,” can be found in advance.

Find a weakness, and you have an opportunity for prevention. Even if your discovery is too late to prevent the first accident, at least recognizing a weakness in an event review can forestall the second.

Reason’s thinking evolved over the years, and it has been criticized on a variety of grounds by contemporary Safety researchers. It has been over-simplified and misapplied by some of Reason’s admirers too.

Still, his influence extends to many people who have never read Reason but have seen (or heard of) the simple, powerful, graphic presentation he developed of his “Swiss Cheese Model.”

In the criminal justice folk version of this model of system failure, an arrow representing a hazard (say, an eyewitness identification issue) moves in a straight line from left to right until it reaches its tragic conclusion in a wrongful conviction or mistaken execution.

But between the hazard and the tragedy there stand a series of barriers, in a design that seems to constitute a formidable architecture of “defense-in-depth.” The hazard must pass through a police screen, a forensics screen, a prosecution screen, a grand jury screen, a defender screen, a jury screen, and an appellate court screen before it takes its awful effect.

The screens aren’t presented as perfect. Like slices of Swiss Cheese, they have holes—defects, weak spots. When the holes in the slices happen to line up, the hazard’s path to destruction is clear. When the holes don’t line up, or when even one of the barriers is perfectly solid, the hazard is blocked, and the disaster is prevented.

In criminal justice, this elaborate sequence of barriers is effectively a Potemkin Village, a pasteboard facade. All the barriers are chronically underfunded and subject to enormous pressures from caseloads, politics, and media. The forensic science capacity is limited or non-existent; the training scarce; the defense function hollowed out; the trial bench determined to see no evil. Cognitive biases such as tunnel vision are rampant.

Besides, it isn’t true that fixing a single “slice” internally will guarantee Safety, because many of the barriers are punching holes in their neighbors. (For example, the prosecution’s decision to hide exculpatory evidence can bore a hole in the defender barrier.)

And it isn’t true that these corrosive powers operate in only one direction. “Upstream” police activities can hamstring prosecutors, but “downstream” court pressures influence “upstream” protections too.

Spend 15 minutes reading histories of exonerations or wrongful release cases and you’ll see that when it comes to blocking errors, we’re deploying a long succession of slices that are mostly hole, and not much cheese.

Error’s path through the criminal justice Swiss Cheese Model looks like a nice, wide, thruway—there’s plenty of maneuvering room.

But that’s not today’s rant; we’ve looked at wrongful convictions before.

Correcting Error: All Slice, With No Holes

Turn the Swiss Cheese chart around. What if we were looking for some resiliency—for a path toward correction of an error and mitigation of the harm? Yes, the cops made a mistake, and we didn’t catch it right away. Yes, the life sentence imposed on the juvenile was wildly extreme.

Now, we want to recover.

That vector faces a succession of barriers too. But the slices that arrow encounters will be very different.

Through a complex, extensive and multiplying array of time limitations, contemporaneous objection rules, waiver criteria, burdens of proof, and other devices, our system does everything it can to make sure that every slip and mishap takes its full fatal course without interruption or correction.

Convict the wrong man, or impose the wrong sentence, and an elaborate and constantly bolstered set of fortifications protects your error.

You couldn’t ask for a more blood-curdling example of this than the one provided in Seth Freed Wessler’s account in ProPublica of the history of capital defendant Eugene Clemons, a man with a well-documented history of childhood abuse, and elementary school findings of “educably mentally retarded” status.

Reviewing Clemons’ story shows that the “machinery of death” really isn’t a “machine” in the usual sense. It is not a Newtonian arrangement of gears and switches in which mechanical causes yield inevitable effects in linear sequences.

The death penalty process isn’t a complicated machine, like a jet airliner at rest. Like a jet airliner in operation, the capital punishment process constitutes a complex adaptive system. It isn’t explained by “causes” that trigger uniform results; its human operators are trying to make sense of a swarm of overlapping, cascading, and often conflicting or misleading conditions and influences that don’t produce predictable effects, but that do warp the probabilities. Everyone’s work affects everyone else’s.

Eugene Clemons’ journey toward the execution chamber has required an astonishing Homeric catalogue of individual performance failures and—more importantly—abiding system weaknesses.

But any movement toward absorbing and correcting the central error has been blocked by an elaborate, Rube Goldberg set of legislative and judicial screens.

Clemons’ trial defenders failed to raise his history of childhood abuse and his crippling mental health challenges. But that failure did raise a barrier against correction on direct appeal.

There is a tiny aperture for collateral attack. But that was blocked by another screen: an appellate lawyer (may have) neglected to file a document; the court clerk lost a filed document behind a cabinet. Along the way a filing fee wasn’t paid (or, as was customary, waived). The 1996 Anti-Terrorism and Effective Death Penalty Act (AEDPA) sets a one-year limit for federal habeas corpus review. Too late; no review.

The missing filing and the unpaid fee interposed impenetrable time barriers across the path toward correcting the error.

No court will consider Clemons’ claims. The system shattered at the first error, and it won’t be put back together again.

Brittleness wins.

The Comfort of Judges

The pious euphemism that advocates for this horrifying arrangement prefer is “finality.” They claim the opposite of “brittleness” is “rehashing old battles that should already have been resolved.”

In fact, as the Clemons case shows, invoking AEDPA and similar procedural rigmaroles doesn’t avoid rehashing old battles; it circumvents having to fight (or rule on) important battles in the first place.

Before AEDPA’s draconian time-bars were put in place, 40 percent of death penalty cases considered in federal habeas were reversed.

To assign responsibility for brittleness to public demand for “finality” is a craven evasion. No ordinary American wants to see someone with unpresented claims executed because a clerk lost a pleading behind a cabinet.

The Safety the politicians, prosecutors and judges have in mind here is not the public’s Safety; it’s their own.

In the case of the judges, the issue isn’t even their Safety; it’s their comfort—or maybe just their leisure. Their old mistakes can’t be reviewed; they won’t be challenged to make new rulings that might later be exposed as mistaken.

President Joe Biden, then a Senator (and once, briefly, a public defender), voted for AEDPA.

But a year later, Sen. Biden introduced legislation aimed at repealing the AEDPA’s time bar for federal habeas filing.

Biden saw then the dangers of system brittleness when lives are at stake.

He should remember those dangers now, move to repeal AEDPA, and restore resilience in our criminal justice processes.

This is a big, central question. It’s a moral question too.

Additional Reading: California Pushes for ‘Bad Apple’ License to Remove Bad Cops, The Crime Report, Sept. 14, 2021.  

James M. Doyle is a Boston defense lawyer and author, and a regular columnist for The Crime Report. He enjoys hearing from readers.