Archive for the ‘Failed Forensics’ Category

In today’s world, we think of DNA matching as the gold standard of identification: a scientifically precise method of matching human tissue or fluids left at a crime scene to a particular individual.  But in an op-ed article last week, High-Tech, High-Risk Forensics, published in the New York Times, UC Hastings professor of law Osagie Obasogie reminds us that even DNA identification isn’t foolproof.  It’s an excellent point and one worth remembering.  But then he stumbles over the details of DNA identification, labeling it high risk because many people might have the same DNA profile.  Risk is always present, but is DNA identification high risk for that reason — because many people might share a profile?  No.

Professor Obasogie tells us the story of Lukis Anderson, a man arrested for a murder near San Jose, California, after his DNA was found on the victim’s body.  Only after Mr. Anderson was arrested and jailed did his alibi surface: at the time of the murder, he was a patient at a hospital, suffering from severe intoxication, and there were voluminous records to prove it.  He was freed after five months, and prosecutors now think the most likely explanation is that the paramedics who transported him to the hospital were the same ones called to the crime scene later that night.  The DNA was likely transferred to the victim from the paramedics’ clothing or equipment.  Professor Obasogie decries “the certainty with which prosecutors charged Mr. Anderson with murder,” because it “highlights the very real injustices that can occur when we place too much faith in DNA forensic technologies.” This is hard to argue with; for those who can remember this far back, it recalls lawyer Barry Scheck cross-examining Dennis Fung in the O.J. Simpson murder trial over the sloppy collection of DNA and other evidence.  Scheck and other lawyers argued that the DNA that seemed to implicate Simpson just couldn’t be trusted: “garbage in, garbage out.” Professor Obasogie is not saying that the paramedics did anything wrong or were sloppy in any conventional sense; rather, he’s arguing (correctly) that contamination can happen even when we’re not aware of the possibility.  Thus caution is always advisable, even with DNA.

But then Professor Obasogie begins to argue that there are deeper problems with DNA identification than just contamination.

Consider the frequent claim that it is highly unlikely, if not impossible, for two DNA profiles to match by coincidence. A 2005 audit of Arizona’s DNA database showed that, out of some 65,000 profiles, nearly 150 pairs matched at a level typically considered high enough to identify and prosecute suspects. Yet these profiles were clearly from different people.

Professor David H. Kaye of Penn State Dickinson School of Law has pointed out the problem with this argument on his blog Forensic Science, Statistics & the Law.  In a blog post on July 26, Professor Kaye explains why Professor Obasogie is wrong on this point.  In his words:

The 150 or so matches were, in fact, mismatches. That is, they were partial matches that actually excluded every “matching” pair. Only if an analyst improperly ignored the nonmatching parts of the profiles or if these did not appear in a crime-scene sample could they be reported to match.

There is much more to Professor Kaye’s thorough and lucid explanation.  If you are interested in a real understanding of how DNA actually works, I strongly recommend Professor Kaye’s post.
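The arithmetic behind this point is worth seeing.  An all-pairs trawl of a 65,000-profile database makes over two billion comparisons, so even a tiny per-pair coincidence probability should be expected to produce some partial matches — the familiar birthday problem.  Here is a minimal back-of-the-envelope sketch; the per-pair probabilities below are purely illustrative assumptions, not measured forensic figures:

```python
from math import comb

# Birthday-problem arithmetic behind database trawls: comparing every
# profile against every other multiplies rare events by billions of pairs.
profiles = 65_000
pairs = comb(profiles, 2)  # all distinct pairs compared in a full trawl
print(f"pairs compared: {pairs:,}")  # 2,112,467,500

# Expected number of coincidental partial matches for several purely
# illustrative per-pair match probabilities (assumptions, not real data):
for p in (1e-7, 1e-8, 1e-9):
    print(f"p = {p:.0e}: expected coincidental matches ~ {pairs * p:,.0f}")
```

Even at a hypothetical per-pair probability of one in ten million, roughly two hundred coincidental partial matches would be expected in a database that size, on the order of the number reported in the Arizona audit, without any two people sharing a full profile.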

The U.S. Department of Justice (DOJ), in partnership with the Innocence Project and the National Association of Criminal Defense Lawyers (NACDL), will review 2,000 cases in which microscopic hair analysis of crime scene evidence was conducted by the FBI Laboratory.  The review, prompted by the DNA-based exoneration of several men convicted on the basis of hair microscopy, will focus on “specific cases in which FBI Laboratory reports and testimony included statements that were scientifically invalid.”  The Innocence Project’s announcement of the review is here; a representative news article is here.

In a move that shows just how seriously the DOJ is taking this review, it has done something unheard of:

Because of the importance attached to these cases, the DOJ has agreed, for the first time in its history, not to raise procedural objections, such as statute of limitations and procedural default claims, in response to the petitions of criminal defendants seeking to have their convictions overturned because of faulty FBI microscopic hair comparison laboratory reports and/or testimony.

Translation: DOJ is not going to fight this review in any of these cases; they’re going to be part of it.

It’s hard to describe the magnitude of the shift in outlook this represents.  Usually, as readers of Failed Evidence know, law enforcement (and I include DOJ in that phrase) resists science-based review and testing; that’s the thrust of the book.  I am happy to say that this is refreshingly different.  According to Peter Neufeld, Co-Director of the Innocence Project, “[t]he government’s willingness to admit error and accept its duty to correct those errors in an extraordinarily large number of cases is truly unprecedented.  It signals a new era in this country that values science and recognizes that truth and justice should triumph over procedural obstacles.”

Of course, this review will not affect cases in which hair analysis was handled by state crime labs.  But here’s hoping they will take this as an example, as the Grits for Breakfast blog argues ought to be done in Texas.

For a sense of the damage that sloppy hair analysis and testimony about it has done in prior cases, listen to this NPR story and interview about the case of Dennis Fritz, in Ada, Oklahoma.  John Grisham’s nonfiction book “The Innocent Man” is an excellent read about the case.

Maybe this is the beginning of a trend.  Hats off to DOJ, the Innocence Project, and NACDL.

When a former high-ranking Justice Department official speaks of a “revolution” in criminal justice, with the whole field turning toward science, could it mean less failed evidence in the future?  What does it mean for those concerned with faulty forensic science?

Laurie Robinson served as Assistant Attorney General in both the Clinton and Obama Justice Departments, where she oversaw the Office of Justice Programs (OJP), the research, statistics, and criminal justice assistance arm of the Department of Justice.  That made her remarks to the Delaware Center for Justice the other day worth noticing.  According to the Wilmington News Journal, “[w]e’re seeing something akin to a revolution in criminal justice in this country,” said Robinson, now on the faculty of George Mason University. “We’re at an important crossroads, one where ideology has taken a back seat, and science and pragmatism have come to the fore.”

Robinson’s web page at George Mason says her tenure at OJP  “was marked by a focus on science and evidence-based programming.”  She was in Delaware to discuss the state’s re-entry programs  and other initiatives to reduce recidivism, and few would disagree that those important programs need scientific and statistical support.   But I wonder whether Robinson would be as optimistic about science’s role in forensic methods, which have played a role in about half of all wrongful convictions across the U.S.  Surely, forensic science needs “science and evidence-based” support and examination — badly.

It has now been more than four years since the release of the landmark 2009 National Academy of Sciences report Strengthening Forensic Science in the United States: A Path Forward, which found that except for DNA and chemical analysis, most of what we think of as forensic science isn’t science at all.  In that time, little seems to have changed; scandals in crime labs continue to pile up in jurisdictions across the country (see my posts about lab scandals just this past year in Massachusetts and Minnesota, for example).  In a particularly compelling piece of writing in the Huffington Post, Radley Balko discusses the long-running crime lab scandal in Mississippi and puts it in context: Mississippi’s scandal “is just the latest in a long, sad line of such stories” that Balko has already chronicled.

What’s to be done?  As a start, as I argued in Chapter 7 of Failed Evidence, all federal money that goes to law enforcement should carry with it a requirement for compliance with best practices in police investigation and forensic science.  Not all law enforcement agencies run their own forensic labs (and as the NAS report said in Chapter 6, “Improving Methods, Practice, and Performance in Forensic Science,” labs should be independent of law enforcement).  But for those that do, compliance with standards that would avoid systematic error, human bias, fraud, and improper scientific testing and testimony should be mandatory.

That would start a revolution right there.  Because what law enforcement agency could afford to just turn down federal funding, in budgetary times like these?

Of all of the methods studied in the 2009 National Academy of Sciences’ report on forensic science, forensic odontology — the analysis of bite marks to identify the perpetrators of crimes — has come under some of the harshest criticism.  And now an effort is underway to keep it out of courtrooms for good.

This article by the Associated Press examined “decades of court records, archives, news reports and filings by the Innocence Project in order to compile the most comprehensive count to date of those exonerated after being convicted or charged based on bite mark evidence.”   Typically, the evidence consisted of testimony by forensic dentists identifying marks on the bodies of victims (usually in cases of murder or sexual assault) as coming from the defendant’s teeth.  According to the AP, since 2000, DNA identification has overturned one after another of these cases, throwing the entire “discipline” into question, and rendering it nearly obsolete.

Critics make two main claims against forensic odontology.  First, there’s no scientific proof that a bite mark can be matched to any particular set of teeth.  There is no data, no experimental evidence — nothing — on which to base the idea that forensic odontology can make reliable identifications.  Second, human skin — almost always the site of the bite mark in question — does not reliably “record” bite marks, since the skin itself changes over time, even after death.  Because the skin changes shape, consistency, color, and even size  after the mark is made, this makes bite marks, and the method itself, inherently unstable.

Yet the dentists, who belong to the American Board of Forensic Odontology, insist that their methods of identifying the source of bite marks are reliable; it’s just that some forensic dentists are, well, not very good, or are biased.  “The problem lies in the analyst or the bias,” said Dr. Frank Wright, a forensic dentist in Cincinnati. “So if the analyst is … not properly trained or introduces bias into their exam, sure, it’s going to be polluted, just like any other scientific investigation. It doesn’t mean bite mark evidence is bad.”  It’s the familiar refrain: it’s just a few bad apples, not the whole barrel.  But according to the AP:

Only about 100 forensic dentists are certified by the odontology board, and just a fraction are actively analyzing and comparing bite marks. Certification requires no proficiency tests. The board requires a dentist to have been the lead investigator and to have testified in one current bite mark case and to analyze six past cases on file — a system criticized by defense attorneys because it requires testimony before certification…The consequences for being wrong are almost nonexistent. Many lawsuits against forensic dentists employed by counties and medical examiner’s offices have been thrown out because as government officials, they’re largely immune from liability.  Only one member of the American Board of Forensic Odontology has ever been suspended, none has ever been decertified, and some dentists still on the board have been involved in some of the most high-profile and egregious exonerations on record.

It’s worth noting that there’s nothing wrong with other uses of forensic dentistry, such as identifying human remains from teeth by matching them to existing dental records; that type of forensic work is generally rock solid and remains unchallenged by critics.  But on identification of perpetrators through bite mark analysis, the question is different: when a forensic “science” has as dismal a record as forensic odontology does, and no scientific proof of its validity exists, can anything justify allowing the use of bite mark evidence to convict a person of a crime?

When I’m in Cincinnati for talks on Failed Evidence tonight, April 4, at 7:00 pm at the Clifton Cultural Arts Center and tomorrow, April 5, at noon at the University of Cincinnati School of Law, one topic sure to come up is an article from the April 3 New York Times, “Advances in Science of Fire Free a Convict After 42 Years.”  Louis Taylor was serving 28 life sentences for the deaths caused by a fire in a Tucson, Arizona hotel in December of 1970.  Taylor, then just 16 years old, was convicted of arson based on faulty forensic science.

Mr. Taylor has been released from prison.  He is now 58 years old.

The story highlights the state of arson investigation, past and present.

A few years ago, the National Academy of Sciences turned its attention to the misuse of science in courtrooms, saying that pseudoscientific theories had been used to convict people of crimes they may not have committed. By then, a small group of fire engineers had already begun to discredit many of the assumptions employed in fire investigations, like the practice of using the amount of heat radiated by a fire to assess if an accelerant had been used.

Unlike DNA evidence, which can exonerate one person and sometimes incriminate another, the evidence collected in some arson investigations does not yield precise results. Often much of the evidence has been lost or destroyed. In the case of the hotel fire here, all that is left are photographs, reports and chemical analysis, all of them assembled to prove arson.

As a result, “we can’t definitely say what really caused the fire,” said John J. Lentini, a veteran fire investigator who wrote a report on Mr. Taylor’s case. “But what we can do is discredit the evidence” used to support the charge.

The case recalls the story of the trial and execution of Cameron Todd Willingham in Texas, executed in 2004 for the deaths of his children in a fire.  Experts have called the arson analysis in that case terribly flawed — just as in Mr. Taylor’s case.

The science of fire investigation is light years ahead of where it was even a decade ago.  It’s time that all of the old cases in which verdicts depended on outmoded and discredited methods of arson investigation be re-examined and, if necessary, overturned.

A story on National Public Radio highlights one of the central themes of Failed Evidence: how does the criminal justice system react to advances in science that throw past convictions into doubt?  The answer will not surprise readers of this blog or Failed Evidence: it resists.

The story concerns the case of William Richards, convicted in 1997 of murdering his wife.  The conviction came in a fourth trial, after two hung juries and a mistrial failed to produce a verdict.  In the fourth trial, the prosecution introduced new evidence: testimony by a forensic dentist, who said that marks seen in a photograph of the victim’s body were human bite marks.  The marks, he said, were unusual enough that they were likely to have been made by the defendant’s distinctive teeth.  Ten years later, another forensic dentist corrected a distortion in the photo of the marks, using photo editing software.  Now, the first forensic dentist says the marks weren’t from human teeth at all, and that he should not have testified as he did.  Yet the defendant remains in prison, serving 25 years to life.

There’s a lot that is familiar here — too much.

First, the idea that bite mark identification should ever play a role — let alone the key role — in convicting someone and sending them to prison is just intolerable.  I have posted about the weakness of bite mark analysis before (here), and Richards’ case demonstrates the point all over again.  The forensic dentist who put Richards in prison testified that the bite mark (which turned out not to be a bite mark) was so distinctive that, by his estimate, only one or two out of a hundred people could have made it.  The idea that such an estimate — not a data-based study, but his personal guess — of such a low probability could ever be considered for admission in a court should make everyone shiver.

Second, the story gives us the reaction of Jan Scully, past president of the National District Attorneys Association and the elected District Attorney of Sacramento County, California.  Scully says there is something more important than the fact that the central evidence in the case has been fatally undermined.  According to the NPR story:

“We need to have finality of verdicts,” she says. “There is always a new opinion or there might be a refinement in our forensic science areas. So, just because something new occurs doesn’t mean that the original conviction somehow was not valid.”

In other words, it’s the same old story from the NDAA: there is no significance to the demonstrated falsity of the “science” that was used to put a man in prison.  It is more important for the verdict to remain undisturbed.

It’s hard to imagine a story that captures the ideas in Failed Evidence more strongly.  Go to the story, and check it out.

I’ve written frequently about forensic methods that cannot claim to have any scientific validity, yet end up admitted in court and convicting innocent people.  Among these faulty forensics, bite mark analysis probably ranks highest.

But who would have thought that today we’d have occasion to think about the role played by a forensic method that has almost never been allowed in court as evidence?

It was nearly a century ago that the forerunner of today’s polygraph began to be brought to courts around the U.S. as a fool-proof, scientific method for detecting lies.  Of course, these devices did not do any such thing, and in 1923 the U.S. Court of Appeals for the D.C. Circuit decided the landmark Frye case, keeping the “systolic blood pressure deception test” out of court.

Nevertheless, the polygraph is widely used — by law enforcement investigators at both the state and federal level, and by private parties too.  The fact that criminal courts in the U.S. only rarely accept the results of these tests (New Mexico is the only state that sometimes allows them to be used in criminal cases) seems almost beside the point; the question “if you’re telling the truth, will you take a polygraph?” is frequently relied upon to separate the liars from the truth tellers in the course of investigations.  This seems to be a questionable way of finding the facts, if courts do not accept the result.  But there it is.

But the Chicago Police Department seems to have taken this dubious practice to a new level.  According to an article in the Chicago Tribune, the Chicago P.D.  “used their polygraph unit as a tool to obtain false confessions” by not following national standards and even by lying about the results they obtained.

At least five defendants — four of whom were charged with murder — have been cleared since 2002. In a sixth case, a federal appeals court threw out a murder conviction, leading to the release last month of a Chicago mother prosecuted in the death of her 4-year-old son.  A Tribune investigation found that Chicago police have long ignored voluntary standards for conducting polygraph exams, even as those methods and the examiners themselves have factored into cases costing the city millions of dollars in damages.

Read the article for yourself.  I wouldn’t have thought that in 2013, polygraphs would feature among the causes of wrongful convictions.  But apparently, at least in Chicago, I was wrong.